Governance

The set of policies, procedures, roles, and responsibilities that guide the ethical, legal, and effective development and deployment of AI systems.

Definition

An overarching organizational discipline that defines how AI initiatives are proposed, approved, developed, monitored, and retired. It assigns clear ownership (data stewards, model owners, compliance officers), codifies approval workflows, mandates documentation (impact assessments, audit logs), and establishes feedback loops for continuous improvement, ensuring that AI systems stay aligned with corporate strategy and regulatory requirements.
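In practice, the ownership assignments and audit trails described above are often captured as structured records in a model registry. The following is a minimal Python sketch of such a record, under the assumption of a simple append-only log; every field name and role here is illustrative, not a standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

# Hypothetical record tying one model to its accountable roles
# and required documentation; field names are illustrative.
@dataclass
class ModelRegistryEntry:
    model_id: str
    model_owner: str          # accountable for model performance and retirement
    data_steward: str         # accountable for training-data quality and lineage
    compliance_officer: str   # accountable for regulatory sign-off
    impact_assessment: str    # pointer to the completed assessment document
    audit_log: List[str] = field(default_factory=list)

    def log(self, actor: str, action: str) -> None:
        """Append a timestamped entry to the audit trail."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append(f"{stamp} | {actor} | {action}")

entry = ModelRegistryEntry(
    model_id="credit-scoring-v3",
    model_owner="j.doe",
    data_steward="a.smith",
    compliance_officer="r.lee",
    impact_assessment="assessments/credit-scoring-v3.pdf",
)
entry.log("j.doe", "submitted for risk assessment")
```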

Real-World Example

A global bank’s AI Governance Office issues a policy requiring every new model to pass through five gated stages: ideation, risk assessment, ethics review, security testing, and board approval. Each stage has designated roles and a sign-off checklist, so unauthorized or non-compliant models never reach deployment.
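One way to picture such a stage-gate policy is as an ordered checklist that blocks deployment until every gate is signed off. A minimal Python sketch follows; the stage names come from the example above, while the `sign_offs` mapping, the function, and the approver names are hypothetical.

```python
# Ordered gates from the bank's policy; a model may not advance
# until the current gate has a recorded sign-off.
STAGES = ["ideation", "risk_assessment", "ethics_review",
          "security_testing", "board_approval"]

def deployment_allowed(sign_offs: dict[str, str]) -> bool:
    """Return True only if every stage has a named approver.

    `sign_offs` maps stage name -> approver; both the function
    and the mapping are illustrative, not a real bank system.
    """
    for stage in STAGES:
        approver = sign_offs.get(stage)
        if not approver:
            print(f"Blocked: '{stage}' has no sign-off")
            return False
    return True

# A model missing board approval is stopped before deployment.
print(deployment_allowed({
    "ideation": "product-lead",
    "risk_assessment": "risk-office",
    "ethics_review": "ethics-board",
    "security_testing": "sec-team",
}))  # prints "Blocked: 'board_approval' has no sign-off" then False
```

The design choice worth noting is that the gates are evaluated in order and fail closed: a missing sign-off halts progression rather than being skipped, which mirrors how the policy prevents non-compliant deployments.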