
ISO 42001: A Practical Implementation Guide for Enterprise Teams


A step-by-step guide to ISO 42001 implementation - gap analysis, Annex A controls, certification audit, and EU AI Act alignment for enterprise teams.

Belfast


Topics

AI Governance
ISO 42001
Compliance
AI Management System


By April 2026, the list of organisations certified to ISO/IEC 42001 reads like a who's who of enterprise AI: IBM for its Granite models, Anthropic for Claude, Microsoft for 365 Copilot, KPMG Australia across its advisory practice, and Singapore's Changi Airport for its operational AI systems.[1] The standard, published in December 2023, has moved faster from publication to enterprise adoption than most observers predicted. And the momentum is accelerating, driven in large part by the approaching August 2026 deadline for EU AI Act high-risk system obligations.

Yet most guidance on ISO 42001 implementation stops at "what is it" and "why does it matter." Enterprise teams tasked with achieving ISO 42001 compliance face a more practical question: how? What does a gap analysis look like? What documentation is required? How do the 38 Annex A controls translate into operational practice? And how does certification support - without replacing - EU AI Act compliance?

This guide answers those questions. It is written for compliance officers, AI governance leads, and engineering teams who have been asked to implement ISO 42001 and need to know what that actually involves.

What ISO 42001 Requires

ISO/IEC 42001 is the first international standard for an Artificial Intelligence Management System (AIMS). It follows the same Harmonised Structure (formerly Annex SL) used by ISO 27001 for information security and ISO 9001 for quality management, which means organisations already certified to those standards will find the management system skeleton familiar.[2]

Of the standard's ten clauses, the final seven (Clauses 4-10) establish the mandatory requirements:

  • Context (Clause 4): Define the organisation's AI role (provider, producer, customer or partner), identify stakeholders, and determine the scope of the AIMS

  • Leadership (Clause 5): Establish an AI policy, assign roles and responsibilities, and secure top management commitment

  • Planning (Clause 6): Conduct AI risk assessments, perform AI system impact assessments, and define objectives

  • Support (Clause 7): Allocate resources, build competence, and maintain documented information

  • Operation (Clause 8): Implement controls, execute risk treatment plans, and manage operational processes

  • Performance evaluation (Clause 9): Monitor and measure AIMS effectiveness, conduct internal audits, and hold management reviews

  • Improvement (Clause 10): Address nonconformities, take corrective action, and drive continual improvement

What makes it different from ISO 27001

Organisations familiar with ISO 27001 will recognise the structure. The differences are in the AI-specific substance layered on top.

Clause 4.1 requires organisations to define their role in the AI ecosystem - a concept with no ISO 27001 equivalent. A large enterprise might simultaneously be an AI provider (offering AI-powered products to customers), an AI customer (using third-party AI tools internally), and an AI partner (supplying data into another organisation's AI system). The AIMS scope must reflect every role the organisation occupies.

Clause 6.1.4 introduces the AI system impact assessment - a formal, documented evaluation of the potential consequences of AI deployment on individuals, groups and society. This goes beyond organisational risk assessment (which ISO 27001 practitioners will be familiar with) to consider external harms: algorithmic bias affecting hiring decisions, automated systems denying financial services, or surveillance technologies infringing civil liberties. It is closer in spirit to a GDPR Data Protection Impact Assessment than to a traditional risk register, though the standard is less prescriptive about methodology.

And Annex A is entirely new. Where ISO 27001's Annex A contains 93 information security controls, ISO 42001's Annex A contains 38 controls across nine domains focused specifically on AI governance.[3]

The 38 Controls: What Annex A Actually Requires

Annex A is the operational backbone of the standard. Its 38 controls are organised into nine domains, each targeting a different aspect of responsible AI management.


| Domain | Focus | Key controls |
| --- | --- | --- |
| A.2 - AI Policies | Existence and appropriateness of AI policies | AI policy aligned with organisational purpose; regular review and update |
| A.3 - Internal Organisation | Accountability and governance structures | Defined roles for AI governance; cross-functional coordination mechanisms |
| A.4 - Resources for AI Systems | Adequacy of data, tooling, compute, and human competence | Data quality assessment; infrastructure adequacy; skills and competence requirements |
| A.5 - Impact Assessment | Methodology for assessing AI consequences | Documented impact assessment process; assessment of effects on individuals and society |
| A.6 - AI System Lifecycle | Controls across design, development, testing, deployment, and decommissioning | Development standards; testing and validation; change management; model retirement |
| A.7 - Data Management | Data quality, provenance, and protection | Data lineage documentation; data quality controls; data protection measures |
| A.8 - Transparency | Explainability and stakeholder disclosure | Documentation of AI capabilities and limitations; stakeholder-appropriate explanations |
| A.9 - Use of AI Systems | Human oversight and acceptable use | Defined triggers for human intervention; acceptable use policies; monitoring of AI in operation |
| A.10 - Third-Party and Customer Relationships | Responsibilities across the AI supply chain | Supplier processes for AI; allocation of responsibilities between the organisation, its suppliers, and its customers |

Not all 38 controls are mandatory for every organisation. The standard requires a Statement of Applicability (SoA) - a document that lists each Annex A control, states whether it is included or excluded from the AIMS, and provides justification for any exclusions. The SoA is one of the first documents an auditor will ask for, and its quality often determines the tone of the entire audit. For organisations with large AI portfolios, the SoA also enables tiered governance - applying the full depth of Annex A controls to high-risk systems whilst taking a lighter approach to lower-risk deployments, provided the risk-based rationale is documented.
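The SoA lends itself to structured data before it becomes a formal document. The sketch below shows one way to keep inclusion decisions and their justifications machine-checkable; the control IDs, titles, and justification text are illustrative placeholders, not recommendations from the standard.

```python
# Illustrative Statement of Applicability skeleton. Control IDs and
# justifications here are hypothetical examples, not a mandated set.

soa = [
    {"control": "A.2.2", "title": "AI policy", "included": True,
     "justification": "Core requirement for all in-scope AI roles"},
    {"control": "A.5.2", "title": "AI system impact assessment process",
     "included": True, "justification": "High-risk systems in scope"},
    {"control": "A.10.3", "title": "Suppliers", "included": False,
     "justification": "No third-party AI suppliers within the current AIMS scope"},
]

def excluded_without_justification(entries):
    """Every excluded control must carry a documented justification."""
    return [e["control"] for e in entries
            if not e["included"] and not e["justification"].strip()]

print(excluded_without_justification(soa))  # → []
```

An empty result on that check is exactly what an auditor looks for first: no exclusion without a written rationale.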

Annex B provides implementation guidance for each control. Annex C maps AI risk sources. Annex D cross-references domain-specific standards. Together, the four annexes form a comprehensive implementation reference.

ISO 42001 Implementation: Seven Steps to Certification

Step 1: Establish scope and build the AI inventory

Before anything else, define what falls within the AIMS boundary. This means inventorying every AI system the organisation builds, buys, deploys, or contributes to - including third-party tools, embedded AI in software products, and AI services consumed through APIs.

This step is consistently harder than it sounds. Shadow AI - employees using ChatGPT, Copilot, or other AI tools without IT visibility - is discovered at this stage more often than not. The inventory should capture, at minimum: system name and purpose, AI role (provider, customer, partner), data inputs and outputs, deployment status, risk classification, and system owner.

The scope decision also determines the scale of the implementation. Some organisations scope the AIMS to a single business unit or product line initially, then expand. Others pursue enterprise-wide scope from the start. The right answer depends on organisational complexity, but starting narrower and expanding is generally more practical than attempting comprehensive coverage on the first pass.
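As a sketch, the minimum inventory fields listed above can be captured in a simple record type. The field names, risk tiers, and the example system below are illustrative choices, not prescribed by the standard.

```python
from dataclasses import dataclass

# Minimal AI inventory record covering the fields listed above.
# Field names and risk tiers are illustrative, not prescribed.

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    ai_role: str              # "provider", "customer", or "partner"
    data_inputs: list[str]
    data_outputs: list[str]
    deployment_status: str    # e.g. "pilot", "production", "retired"
    risk_classification: str  # e.g. "high", "limited", "minimal"
    owner: str

inventory = [
    AISystemRecord(
        name="CV screening assistant",
        purpose="Rank inbound job applications",
        ai_role="customer",
        data_inputs=["CVs", "job descriptions"],
        data_outputs=["candidate shortlist"],
        deployment_status="production",
        risk_classification="high",
        owner="Head of Talent Acquisition",
    ),
]

# Integrity check: every system needs a named owner and a risk tier.
unowned = [s.name for s in inventory if not s.owner or not s.risk_classification]
assert not unowned, f"Incomplete records: {unowned}"
```

Even this minimal structure makes shadow-AI discovery actionable: any system that cannot be filled in completely is, by definition, ungoverned.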

Step 2: Conduct a gap analysis

With the scope defined, conduct a structured comparison of current practices against each clause requirement (Clauses 4-10) and each applicable Annex A control. Assemble a cross-functional team spanning compliance, legal, data science, engineering, product, and risk management.

For each gap, document what is missing, assess severity (prioritise gaps affecting high-risk AI systems or core governance requirements), and estimate remediation effort. Organisations already certified to ISO 27001 will find many Clauses 4-10 gaps are minor - the management system infrastructure carries over directly. The significant new work will be concentrated in Clause 6.1.4 (AI impact assessment), Annex A controls, and the AI-specific evidence requirements.
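A gap log can stay lightweight while still supporting the severity-first prioritisation described above. In this hypothetical sketch, the requirement labels, severity scores (1-3), and effort estimates are illustrative only.

```python
# Illustrative gap-analysis log: one entry per requirement with a gap.
# Severity scores and effort estimates are hypothetical examples.

gaps = [
    {"requirement": "Clause 6.1.4", "missing": "No AI impact assessment methodology",
     "severity": 3, "effort_days": 20},
    {"requirement": "A.7.4", "missing": "Data quality criteria undocumented",
     "severity": 2, "effort_days": 5},
    {"requirement": "Clause 9.2", "missing": "Audit programme lacks AI scope",
     "severity": 1, "effort_days": 3},
]

# Prioritise: highest severity first, then quickest wins within a tier.
remediation_order = sorted(gaps, key=lambda g: (-g["severity"], g["effort_days"]))
print([g["requirement"] for g in remediation_order])
# → ['Clause 6.1.4', 'A.7.4', 'Clause 9.2']
```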

Step 3: Design the AIMS

Build or adapt the management system components:

  • AI policy and sub-policies: The overarching AI policy (Clause 5.2) must address responsible AI values - fairness, transparency, accountability, safety, privacy. Sub-policies for acceptable use, data governance, and third-party AI may be needed depending on scope

  • Risk assessment methodology: Adapt existing risk assessment processes for AI-specific risks - algorithmic bias, model drift, misuse, unexplainable outputs, security vulnerabilities. The methodology must produce consistent, comparable results

  • AI system impact assessment methodology: Develop templates and processes for evaluating consequences to individuals, groups and society. Define triggers for reassessment: major system changes, new data sources, new use cases, regulatory changes, adverse incidents

  • Roles and responsibilities matrix: Define who owns the AIMS, who owns individual AI systems, who conducts risk and impact assessments, and who is accountable for each Annex A control domain

  • Training and competence programme: Identify competence requirements for each role and plan training accordingly
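The reassessment triggers from the impact assessment bullet above can be encoded directly, so that any logged change event is checked against them. A minimal sketch, with illustrative event names:

```python
# Sketch of reassessment-trigger logic for AI impact assessments.
# The trigger set mirrors the bullet above; event names are illustrative.

REASSESSMENT_TRIGGERS = {
    "major_system_change",
    "new_data_source",
    "new_use_case",
    "regulatory_change",
    "adverse_incident",
}

def needs_reassessment(events: set[str]) -> bool:
    """Return True if any recorded event should trigger a fresh impact assessment."""
    return bool(events & REASSESSMENT_TRIGGERS)

assert needs_reassessment({"new_data_source", "minor_ui_update"})
assert not needs_reassessment({"minor_ui_update"})
```

Wiring a check like this into change management ensures reassessment is event-driven rather than left to periodic review alone.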

Step 4: Implement controls and gather evidence

Roll out controls operationally. This is where most implementations slow down, because the standard requires verifiable evidence - not just policies on paper.

Evidence artefacts include: model cards documenting system capabilities and limitations, testing and validation logs, bias assessment records, data lineage documentation, incident response records, change management logs, and human oversight intervention records. Enzai's experience supporting enterprise implementations confirms what early adopters consistently report: evidence collection and documentation discipline are the hardest operational challenge - not because the evidence does not exist, but because it is scattered across tools, teams, and systems with no centralised collection mechanism.
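One way to keep scattered evidence visible is a per-system completeness check against a required artefact set. The artefact names below are examples drawn from the list above, not a mandated set, and the system names are hypothetical.

```python
# Illustrative evidence checklist per AI system. Artefact names are
# examples based on the list above; the required set will vary by scope.

REQUIRED_ARTEFACTS = {
    "model_card", "testing_log", "bias_assessment",
    "data_lineage", "change_log", "oversight_record",
}

collected = {
    "fraud-scoring-v2": {"model_card", "testing_log", "data_lineage"},
    "support-chatbot": set(REQUIRED_ARTEFACTS),
}

def evidence_gaps(evidence: dict[str, set[str]]) -> dict[str, set[str]]:
    """Map each system to the artefacts still missing before audit."""
    return {sys: REQUIRED_ARTEFACTS - have
            for sys, have in evidence.items()
            if REQUIRED_ARTEFACTS - have}

print(evidence_gaps(collected))  # only fraud-scoring-v2 has gaps
```

Run regularly, a check like this turns "evidence is scattered" from a vague worry into a concrete remediation list per system.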

Step 5: Internal audit

Conduct at least one full internal audit against all in-scope clauses and Annex A controls before seeking certification. The internal auditor must be independent of the controls being audited (this can be internal staff from a different function, or a qualified third party). The audit must produce findings, and nonconformities must be addressed through the corrective action process before proceeding.

Step 6: Management review

Hold a formal management review (Clause 9.3) covering: AIMS performance data, internal audit results, risk and impact assessment outputs, nonconformities and corrective actions, stakeholder feedback, and any changes affecting the AIMS. The output is documented decisions and actions for continual improvement. This review must involve top management - it cannot be delegated entirely to the governance team.

Step 7: Certification audit

The external audit follows a two-stage model. Stage 1 is a documentation review (typically 1-2 days) where the auditor assesses whether the AIMS documentation is adequate and the organisation is ready for a full assessment. Stage 2 is the implementation audit (3-9+ days depending on scope and complexity) where the auditor interviews personnel, reviews evidence, and walks through Annex A controls in operation.

Certificates are valid for three years, with annual surveillance audits and a full recertification at the end of the cycle.

A critical note on certification bodies: select a CB that holds accreditation specifically scoped to ISO 42001 from a recognised accreditation body (ANAB, UKAS, DAkkS or equivalent). ISO/IEC 42006, the standard for bodies auditing against 42001, is still being finalised, and auditor competence varies. An accredited certification from a CB with demonstrable AI domain expertise carries materially more weight than an unaccredited one - though the certificates may look similar at first glance.[4]

ISO 42001 and the EU AI Act: Complementary, Not Equivalent

The relationship between ISO 42001 and the EU AI Act is frequently misunderstood. The most important thing to know: ISO 42001 certification does not constitute EU AI Act compliance. As of April 2026, the standard has not been listed in the EU Official Journal as a harmonised standard, which means the legal "presumption of conformity" mechanism does not apply.[5]

That said, the overlap is substantial. CEN-CENELEC's Joint Technical Committee 21 is actively working to adapt ISO 42001 into a European Norm (the draft designated prEN ISO/IEC 42001 was circulated for public enquiry from November 2025 to February 2026). Separately, prEN 18286 - a purpose-built harmonised standard for EU AI Act regulatory purposes - entered public enquiry in October 2025.[6] When these standards are finalised and published in the Official Journal, ISO 42001 certification will move from "helpful preparation" to "direct compliance pathway."

In the meantime, the practical overlap includes:

  • Risk management: EU AI Act Article 9 requires risk management systems for high-risk AI. ISO 42001 Clause 6 provides the methodology and evidence framework

  • Human oversight: Article 14 requires oversight measures. Annex A domain A.9 defines human intervention controls

  • Transparency and documentation: Articles 11 and 13 require technical documentation and transparency. Annex A domains A.7 and A.8 address data management, explainability, and stakeholder disclosure

  • Data governance: Article 10 requires data quality and governance. Annex A domain A.7 provides the control framework

  • Post-market monitoring: Article 72 requires ongoing monitoring. ISO 42001 Clauses 9 and 10 establish the performance evaluation and improvement cycle
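The overlap above can double as a planning aid when expressed as a lookup from EU AI Act articles to the ISO 42001 clauses and controls that generate supporting evidence. This sketch simply mirrors the list in this article; it is a mapping aid, not legal advice.

```python
# Article-to-clause lookup mirroring the overlap list above.
# A planning aid only, not a statement of legal equivalence.

AI_ACT_TO_ISO42001 = {
    "Article 9 (risk management)": ["Clause 6"],
    "Article 10 (data governance)": ["Annex A.7"],
    "Article 11 (technical documentation)": ["Annex A.7", "Annex A.8"],
    "Article 13 (transparency)": ["Annex A.7", "Annex A.8"],
    "Article 14 (human oversight)": ["Annex A.9"],
    "Article 72 (post-market monitoring)": ["Clause 9", "Clause 10"],
}

def evidence_sources(article: str) -> list[str]:
    """Return the ISO 42001 elements that supply evidence for an article."""
    return AI_ACT_TO_ISO42001.get(article, [])

assert evidence_sources("Article 14 (human oversight)") == ["Annex A.9"]
```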

The practical guidance is clear: ISO 42001 builds the governance infrastructure, evidence base, and management discipline that EU AI Act compliance will require. Organisations that implement the standard now will be materially better prepared when the high-risk obligations take full effect in August 2026 - regardless of whether formal harmonisation has been completed by that date.

Common ISO 42001 Implementation Pitfalls

A review of early adopters' experiences surfaces several recurring patterns.

Underestimating the AI inventory. Organisations routinely undercount their AI systems at the scoping stage. The discovery of shadow AI usage, embedded AI in third-party software, and AI components buried in vendor platforms typically expands the initial scope estimate by 30-50%. Build time into the project plan for a thorough discovery process.

Treating it as a documentation exercise. ISO 42001 is a management system standard, not a documentation standard. Auditors are trained to look beyond policies to operational evidence. An organisation with a beautifully written AI policy but no evidence of risk assessments being conducted, model testing being performed, or human oversight being exercised will fail the Stage 2 audit.

Neglecting the AI impact assessment. Clause 6.1.4 is novel for most organisations and receives less attention than it deserves during implementation. The AI system impact assessment requires thinking through societal and population-level consequences - not just organisational risk. Few organisations have established methodologies for this before beginning ISO 42001 work, and developing one takes longer than anticipated.

Siloed implementation. ISO 42001 cuts across legal, product, engineering, data science, security, and compliance functions. Implementations that are owned entirely by one function - typically compliance or IT - tend to produce governance frameworks that the rest of the organisation does not use. Cross-functional steering groups with executive sponsorship are not optional; they are a prerequisite for success.

Confusing certification with compliance. Certification confirms that a management system meets the standard's requirements at a point in time. It does not confirm that every AI system the organisation operates is free from bias, fully transparent, or compliant with every applicable regulation. The AIMS must be a living system that drives continual improvement, not a one-time certification exercise.

Timeline and Investment

Realistic timelines for ISO 42001 implementation vary by organisational complexity:


| Organisation profile | Typical timeline | Key variables |
| --- | --- | --- |
| Small organisation (1-10 AI systems), ISO 27001 already in place | 4-6 months | Scope complexity, evidence readiness |
| Mid-market (10-50 AI systems), some management system maturity | 9-12 months | Cross-functional coordination, AI inventory completeness |
| Large enterprise (50+ AI systems), multi-business-unit scope | 12-18+ months | Organisational complexity, shadow AI discovery, global coordination |

Organisations already certified to ISO 27001 have a material head start. The Harmonised Structure means risk management frameworks, internal audit processes, documented information controls, and continual improvement cycles carry over directly. Several certification bodies offer combined ISO 27001 + ISO 42001 audit programmes that share evidence and reduce audit days.

The investment is not only in time. Budget considerations include: CB audit fees (varying by scope and complexity), internal resource allocation across the cross-functional team, potential tooling for evidence collection and AI inventory management, and training for roles that are new to AI governance. The largest cost driver, in most implementations, is the human effort required for evidence collection and documentation - particularly for organisations that do not have centralised systems for tracking AI system metadata, testing records, and risk assessments.

Implementing ISO 42001 across a large AI portfolio is as much an infrastructure problem as a governance one. The volume of evidence required - model cards, testing logs, bias assessments, data lineage records, impact assessments, and change management artefacts across dozens of AI systems - exceeds what spreadsheets and shared drives can sustain. At Enzai, our platform is purpose-built for this challenge: centralised AI inventory, structured Annex A control mapping, automated evidence collection, and continuous monitoring aligned to ISO 42001's management system cycle. Organisations preparing for certification can book a demo to see how it works in practice.

References

[1] IBM, "IBM Becomes First Major Open-Source AI Model Developer to Earn ISO 42001 Certification," September 2024; Anthropic, "Anthropic Achieves ISO 42001 Certification," January 2025; Microsoft Learn, "ISO/IEC 42001:2023 Compliance"; KPMG Australia, certified by BSI; Changi Airport Group, certified by SGS, February 2025.

[2] ISO/IEC 42001:2023, Information technology - Artificial intelligence - Management system. International Organization for Standardization, December 2023.

[3] ISO/IEC 42001:2023, Annex A (Reference control objectives and controls). For implementation guidance, see Annex B.

[4] ANAB maintains a registry of certification bodies accredited for ISO 42001 at anab.ansi.org. ISO/IEC 42006 (requirements for bodies providing audit and certification of AIMS) is in development.

[5] As of April 2026, no AI-specific harmonised standard has been published in the Official Journal of the EU with presumption-of-conformity status for the EU AI Act. See ISMS.online, "Presumption of Conformity: Why ISO 42001 Isn't Your AI Act Legal Shield - Yet," 2025.

[6] CEN-CENELEC, "Update on AI Standardization," October 2025. prEN ISO/IEC 42001 public enquiry November 2025 - February 2026; prEN 18286 public enquiry from 30 October 2025.
