AI Regulations

How to Build an AI Inventory: From Zero to Complete Catalog

A practical guide to building an AI inventory from scratch - discovery methods, what to capture for each system, and how to maintain it.

Belfast

16 min read time

Topics

AI inventory
AI governance
AI system inventory
EU AI Act
ISO 42001
NIST AI RMF
compliance


Most organisations cannot answer a deceptively simple question: how many AI systems are currently operating across the business? The inability to produce a reliable answer - and the absence of a comprehensive AI inventory - is not merely an administrative gap. It is a compliance liability, a risk management blind spot, and an increasingly untenable position as regulatory frameworks worldwide move from voluntary guidance to binding obligation.

The EU AI Act, which entered into force in August 2024 and whose obligations are phasing in through 2027, requires deployers of high-risk AI systems to register those systems in the EU database established under Article 71 and to maintain documentation that presupposes a complete accounting of every system in scope [1]. ISO/IEC 42001, the international standard for AI management systems, requires organisations to document the scope and context of their AI systems as a prerequisite for certification, making systematic cataloguing a practical necessity [2]. The NIST AI Risk Management Framework treats identification and classification of AI systems as the foundational activity within its Map function, which feeds all downstream risk measurement and management [3]. Without a comprehensive AI inventory, compliance with any of these frameworks is structurally impossible.

Yet the task of cataloguing every AI system within an enterprise is far more complex than it first appears. This guide provides a systematic approach to building an AI inventory from nothing, including what to capture, how to discover systems that may be invisible to central governance, and how to keep the inventory accurate over time.

Why an AI Inventory Is the Foundation of Governance

Regulatory mandates aside, a complete AI system inventory serves as the prerequisite for virtually every downstream governance activity. Risk assessments cannot be conducted on systems that are unknown. Bias audits cannot reach AI tools that procurement never logged. Incident response plans cannot account for AI-driven processes that IT has no visibility into.

The practical consequences of operating without an inventory are already visible. Organisations subject to the EU AI Act face a registration requirement for high-risk systems with the European database [1]. Those pursuing ISO 42001 certification must demonstrate a systematic process for identifying and managing AI systems across the organisation [2]. Financial regulators, including the Bank of England, FCA, and European Central Bank, have begun issuing supervisory expectations that assume firms know precisely which decisions are influenced by algorithmic or AI-based tools [4].

The challenge is compounded by the pace of AI adoption. McKinsey's 2024 Global Survey on AI found that 72% of organisations had adopted AI in at least one business function, with adoption nearly doubling year on year and much of it occurring at the departmental level without central oversight [5]. Platforms such as Enzai have emerged specifically to address the complexity of maintaining a living AI inventory at enterprise scale, connecting discovery, risk classification, and ongoing monitoring in a single governance layer.

An AI inventory is not a compliance checkbox. It is the single artefact upon which every other governance activity depends.

The Discovery Challenge

Understanding why AI systems are difficult to catalogue is essential before attempting to build the inventory itself. AI does not behave like traditional software in terms of visibility. It embeds itself into existing tools, operates through third-party APIs, and proliferates through individual employee decisions that never cross a procurement desk.

Shadow AI

The most significant discovery challenge is shadow AI: systems adopted by employees or teams without formal approval. A marketing analyst subscribing to an AI-powered copywriting tool on a personal credit card, a sales team using an AI meeting transcription service, a finance team experimenting with a large language model through a browser extension. Each of these represents an AI system processing organisational data outside of any governance framework.

Embedded AI in SaaS Platforms

Major enterprise software vendors have integrated AI features into existing products at extraordinary speed. A CRM platform that introduced predictive lead scoring, an HR system that added resume screening, a customer support tool that deployed automated response generation. These are AI systems, but they often arrive as feature updates rather than new procurements, making them invisible to traditional software auditing.

Vendor and Third-Party AI

When an organisation engages a vendor that uses AI in its service delivery, the organisation may become a deployer of that AI system under regulatory frameworks such as the EU AI Act. A background check provider using AI to screen candidates, a claims processing outsourcer using AI to triage submissions, a logistics partner using AI for route optimisation. Each creates a governance obligation that begins with knowing the system exists.

Internally Built AI

Data science teams, innovation labs, and software engineering departments may build and deploy AI models that never pass through a formal release management process. Jupyter notebooks promoted to production, machine learning models running on departmental servers, automated decision scripts that evolved from proof-of-concept to business-critical without anyone formally commissioning them.

The discovery challenge is fundamentally one of visibility. Traditional IT asset management was not designed for AI, and the tools and processes most organisations rely on will miss the majority of their AI footprint.

What to Capture in Your AI Inventory

A useful AI inventory must balance comprehensiveness with practicality. Capturing too little renders the inventory inadequate for risk assessment and compliance. Capturing too much creates a maintenance burden that causes the inventory to decay. The following template framework covers the fields demanded by regulatory requirements, industry standards, and practical governance needs.

Core Identification Fields


Field | Description | Example
----- | ----------- | -------
System ID | Unique identifier for the AI system | AI-2026-0042
System Name | Descriptive name | Customer Churn Prediction Model
System Description | Plain-language summary of what the system does | Predicts likelihood of customer contract non-renewal based on usage patterns and support ticket history
System Category | Classification of the AI type | Machine learning model, rule-based system, generative AI, robotic process automation
Deployment Type | How the system is deployed | Internal build, third-party SaaS, embedded vendor feature, API service

Ownership and Accountability


Field | Description | Example
----- | ----------- | -------
Business Owner | Individual accountable for the system's use | VP of Customer Success
Technical Owner | Individual responsible for technical operation | Lead ML Engineer, Data Science Team
Vendor (if applicable) | Third-party provider | Acme Analytics Ltd
Department | Organisational unit using the system | Customer Success
Contract Reference | Link to relevant procurement or licence agreement | PO-2025-8831

Risk and Compliance Classification


Field | Description | Example
----- | ----------- | -------
EU AI Act Risk Category | Unacceptable, High, Limited, Minimal | Limited risk
Data Types Processed | Categories of data the system ingests | Customer usage data, support ticket text, contract metadata
Personal Data Involved | Whether personal data is processed and under what basis | Yes - legitimate interest, DPIA reference DP-2025-019
Decision Impact | Nature of decisions influenced by the system | Advisory input to renewal team, no automated decisions
Affected Populations | Who is affected by system outputs | Enterprise customers (B2B), approximately 2,400 accounts

Technical and Operational Details


Field | Description | Example
----- | ----------- | -------
Model Type / Algorithm | Technical approach | Gradient boosted decision tree (XGBoost)
Training Data Summary | Description of training data sources and vintage | 36 months of historical customer data, last retrained March 2026
Infrastructure | Where the system runs | AWS eu-west-2, SageMaker endpoint
Integration Points | Systems that feed data to or receive outputs from this AI | Salesforce CRM, internal dashboards, renewal workflow
Performance Metrics | How the system's effectiveness is measured | AUC-ROC 0.87, precision 0.79, reviewed quarterly
Last Review Date | Date of most recent governance review | 2026-02-15

Lifecycle and Status


Field | Description | Example
----- | ----------- | -------
Status | Current operational state | Active, pilot, decommissioned, under review
Date Deployed | When the system entered production | 2025-06-01
Next Review Date | Scheduled date for next governance review | 2026-08-15
Decommission Plan | Whether an exit or sunset plan exists | Documented in run-book RB-2025-044

Not every field will be populated for every system on the first pass. A practical approach is to define which fields are mandatory at intake (system name, owner, deployment type, risk classification) and which can be deferred to the first governance review cycle. Enzai's AI inventory schema implements this tiered approach, distinguishing required intake fields from progressive enrichment fields to prevent teams from either skipping everything or getting paralysed by a 22-field form for each of 200 systems. The goal is to establish the structure and fill gaps systematically, rather than to delay the inventory until every field can be completed perfectly.
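As a rough illustration, the tiered structure can be represented as two record layers: the mandatory intake fields and the fields enriched at the first governance review. The sketch below uses Python for concreteness; the field names mirror the template above and the example values come from the churn-prediction system used in the tables, but where the tier boundary sits is an assumption to adapt to local policy.

```python
# Illustrative two-tier inventory record. The split between intake and
# enrichment fields is an assumption, not a prescribed schema.
intake_record = {
    "system_id": "AI-2026-0042",
    "system_name": "Customer Churn Prediction Model",
    "business_owner": "VP of Customer Success",
    "deployment_type": "internal build",
    "risk_category": "limited",          # preliminary EU AI Act classification
}

# Fields deferred to the first governance review cycle
enrichment_record = {
    **intake_record,
    "description": "Predicts likelihood of contract non-renewal from usage and support history",
    "technical_owner": "Lead ML Engineer, Data Science Team",
    "data_types": ["customer usage data", "support ticket text", "contract metadata"],
    "personal_data": True,
    "model_type": "gradient boosted decision tree (XGBoost)",
    "infrastructure": "AWS eu-west-2, SageMaker endpoint",
    "next_review_date": "2026-08-15",
}
```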

Discovery Methods

With a clear picture of what to capture, the next question is how to find every AI system operating within the organisation. No single method will achieve complete coverage. An effective discovery programme uses multiple approaches in combination.

Automated Scanning and Technical Discovery

Network traffic analysis can identify API calls to known AI service providers, including major cloud AI endpoints from providers such as OpenAI, Google, Anthropic, and AWS. DNS logs, proxy server records, and cloud access security broker (CASB) tools can flag connections to AI services that have not been formally sanctioned. Software composition analysis can identify AI libraries and frameworks within internally developed applications.
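A minimal sketch of this kind of log-based discovery is shown below, assuming a CSV proxy or egress log with src_host and dest_domain columns; the domain watchlist is illustrative and would need to be maintained as providers and endpoints change.

```python
# Sketch: count requests from internal hosts to well-known AI API domains.
# Domain list and log schema are assumptions; substitute your own egress logs.
import csv
from collections import Counter

AI_API_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "bedrock-runtime.us-east-1.amazonaws.com",
}

def scan_proxy_log(path: str) -> Counter:
    """Count hits per (source host, AI domain) pair from an assumed CSV log."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            dest = row.get("dest_domain", "").lower()
            if any(dest == d or dest.endswith("." + d) for d in AI_API_DOMAINS):
                hits[(row.get("src_host", "unknown"), dest)] += 1
    return hits

# Usage: feed scan_proxy_log("proxy_log.csv") results into the intake workflow
# so each newly observed source system is classified and assigned an owner.
```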

Organisations using platforms like Enzai can integrate automated discovery capabilities directly into their governance workflow, reducing the manual effort required and ensuring that newly detected systems are immediately flagged for classification and review.

Procurement and Vendor Audit

A systematic review of procurement records, software licence agreements, and vendor contracts will surface AI systems acquired through formal channels. This should include a retrospective review of existing contracts, since many vendors have added AI features to previously non-AI products. Procurement teams should be briefed to flag any new acquisition that includes AI or machine learning capabilities.

Employee Surveys and Self-Declaration

Direct outreach to business units remains one of the most effective discovery methods for shadow AI. A structured survey asking teams to identify any tools, services, or models they use that involve AI, machine learning, natural language processing, or automated decision-making will consistently reveal systems that no technical scanning method would detect. The survey should be framed constructively, emphasising governance support rather than enforcement, to encourage honest disclosure.

Vendor Questionnaires

For existing third-party relationships, a targeted questionnaire asking vendors whether AI is used in service delivery and, if so, what type and for what purpose, will identify embedded and third-party AI. This is particularly important for outsourced business processes where AI may be introduced by the vendor without explicit notification to the client.

Network and API Traffic Analysis

Beyond scanning for known AI endpoints, deeper analysis of API traffic patterns can reveal AI systems that communicate through non-obvious channels. Monitoring for patterns consistent with model inference calls, such as structured JSON payloads sent to external endpoints with latency profiles typical of ML inference, can surface systems that other methods miss.
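The sketch below illustrates one such heuristic, assuming traffic log entries carry method, content type, latency, and payload size fields; the thresholds are placeholders to be tuned against real traffic, and matches should be treated as candidates for review rather than confirmed AI systems.

```python
# Illustrative heuristic only: flag outbound POSTs whose payload shape and
# latency resemble model inference calls. Field names and thresholds are assumptions.
def looks_like_inference(entry: dict,
                         min_latency_ms: float = 150,
                         min_payload_bytes: int = 512) -> bool:
    return (
        entry.get("method") == "POST"
        and entry.get("content_type", "").startswith("application/json")
        and not entry.get("dest_internal", False)           # external endpoint
        and entry.get("latency_ms", 0) >= min_latency_ms    # inference-like latency
        and entry.get("request_bytes", 0) >= min_payload_bytes
    )

sample_traffic = [
    {"method": "POST", "content_type": "application/json", "dest_internal": False,
     "latency_ms": 430, "request_bytes": 2048, "dest_domain": "inference.example-vendor.com"},
]
candidates = [e for e in sample_traffic if looks_like_inference(e)]
```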

The most robust discovery programmes run these methods in parallel and repeat them on a regular cycle. A single sweep will capture the majority of systems, but ongoing discovery is necessary to keep pace with the rate at which new AI tools enter the organisation.

Building the Inventory Process

An AI inventory is only as durable as the process and governance structure that sustains it. Without clear ownership, cross-functional involvement, and defined workflows, even a thorough initial catalogue will degrade within months.

Establishing Ownership

A single function must own the AI inventory as a corporate asset. In most organisations, this falls to one of three roles: the Chief AI Officer (where the role exists), the Chief Information Security Officer, or the Chief Compliance Officer. The critical requirement is that the owner has sufficient authority to compel disclosure from all business units and sufficient technical credibility to engage with engineering teams on classification and risk assessment.

Cross-Functional Involvement

The inventory process requires active participation from multiple functions:

  • IT and Engineering provide technical discovery capabilities and can identify internally built AI systems, infrastructure details, and integration points.

  • Procurement flags new AI acquisitions and retrospectively reviews existing vendor relationships.

  • Legal and Compliance classifies systems against regulatory requirements, including EU AI Act risk categories and data protection obligations.

  • Business Unit Leaders identify AI tools in use within their teams and assign business ownership for each system.

  • Data Protection / Privacy assesses personal data processing and ensures alignment with GDPR and equivalent frameworks.

  • Internal Audit validates the completeness and accuracy of the inventory on a periodic basis.

Defining the Intake Workflow

Every new AI system, whether procured, built, or discovered through scanning, should pass through a standardised intake workflow. This workflow should include initial registration (populating the core identification fields), preliminary risk classification, assignment of business and technical owners, and scheduling of a full governance review. The intake process should be lightweight enough that it does not create an incentive to circumvent it, whilst rigorous enough that no system enters production without basic governance documentation.
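One way to picture that workflow is as a single registration step that enforces the mandatory fields, applies a conservative preliminary risk classification, and books the full governance review. The sketch below assumes the field names used earlier and illustrative review intervals; neither is prescriptive.

```python
# Sketch of a standardised intake step. Review intervals are assumed policy values.
from datetime import date, timedelta

REVIEW_INTERVALS_DAYS = {"high": 90, "limited": 180, "minimal": 365}

def intake(system: dict) -> dict:
    """Register a newly procured, built, or discovered AI system."""
    # 1. Initial registration: core identification fields must be present
    required = ("system_id", "system_name", "business_owner", "deployment_type")
    missing = [f for f in required if not system.get(f)]
    if missing:
        raise ValueError(f"Cannot register system, missing fields: {missing}")

    # 2. Preliminary risk classification (defaults conservatively pending legal review)
    risk = system.setdefault("risk_category", "high")

    # 3. Schedule the full governance review and mark the system as under review
    interval = REVIEW_INTERVALS_DAYS.get(risk, 90)
    system["next_review_date"] = (date.today() + timedelta(days=interval)).isoformat()
    system["status"] = "under review"
    return system
```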

Setting the Governance Cadence

The inventory should be formally reviewed on a quarterly basis at minimum. Each review should assess completeness (have new systems been added since the last cycle?), accuracy (are the details for existing systems still correct?), and compliance posture (are any systems operating outside of their approved risk parameters?).

A process without clear ownership is an aspiration. A process with ownership, cross-functional buy-in, and a defined cadence is a governance programme.

Maintaining the Inventory Over Time

The initial build is the easier half of the challenge. Maintaining an accurate, living inventory requires triggers, automation, and integration with broader organisational change management.

Triggers for Inventory Updates

The inventory should be updated whenever any of the following events occur:

  • A new AI system is procured, built, or deployed

  • An existing system is materially modified (new data sources, changed decision scope, model retraining)

  • A system is decommissioned or suspended

  • A vendor notifies the organisation of AI feature additions to an existing product

  • A regulatory change alters the risk classification of an existing system

  • An incident involving an AI system is reported

  • Organisational restructuring changes the business ownership of a system

Continuous Discovery

Technical discovery methods should run continuously rather than as periodic exercises. Automated scanning for AI API traffic, CASB alerts for new AI SaaS tools, and integration with software deployment pipelines to flag AI components in new releases all contribute to reducing the lag between a system entering the organisation and appearing in the inventory.
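As one example of pipeline integration, a CI step can scan dependency manifests for AI and machine learning libraries and raise a flag when a service that is not yet in the inventory pulls one in. The sketch below covers a Python requirements file only, and the library watchlist is an assumption to extend for other ecosystems (package.json, pom.xml, and so on).

```python
# Sketch: flag AI/ML dependencies in a requirements file during CI.
# The watchlist is illustrative and will need to be kept current.
ML_LIBRARIES = {"torch", "tensorflow", "scikit-learn", "xgboost",
                "transformers", "openai", "anthropic", "langchain"}

def flag_ai_dependencies(requirements_path: str) -> set[str]:
    found = set()
    with open(requirements_path) as f:
        for line in f:
            name = line.strip().split("==")[0].split(">=")[0].lower()
            if name in ML_LIBRARIES:
                found.add(name)
    return found

# A CI job can fail, or open an intake ticket, when flag_ai_dependencies("requirements.txt")
# returns packages for a service that does not yet appear in the AI inventory.
```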

Integration with Change Management

The AI inventory should be embedded into existing change management and IT service management processes. Change advisory boards should include AI inventory impact as a standard assessment criterion. Software deployment processes should include a check for AI components. Vendor management workflows should include AI disclosure as a standard contractual and review requirement.

Version History and Audit Trail

Every change to an inventory entry should be logged with a timestamp, the identity of the person making the change, and the reason for the update. This audit trail is not merely good practice. It is an explicit requirement under several regulatory frameworks and will be expected by auditors and supervisory authorities.
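A minimal append-only change log satisfies the core requirement: who changed what, when, and why. The sketch below assumes JSON-lines storage, which is purely illustrative; the essential point is that entries are appended rather than overwritten.

```python
# Sketch of an append-only audit trail for inventory changes.
import json
from datetime import datetime, timezone

def log_inventory_change(audit_path: str, system_id: str, field_name: str,
                         old_value, new_value, changed_by: str, reason: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "field": field_name,
        "old_value": old_value,
        "new_value": new_value,
        "changed_by": changed_by,
        "reason": reason,
    }
    with open(audit_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# e.g. log_inventory_change("ai_inventory_audit.jsonl", "AI-2026-0042",
#                           "risk_category", "limited", "high",
#                           "j.smith", "Scope expanded to automated renewal decisions")
```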

Maintenance is where most AI inventories fail. The organisations that succeed are those that treat the inventory as a living system integrated into operational workflows, not as a static document revisited annually.

Common Pitfalls

Even well-resourced organisations encounter predictable failure modes when building and maintaining an AI inventory. Recognising these patterns in advance significantly improves the likelihood of a durable outcome.

Defining AI too narrowly. Organisations that limit their inventory to "machine learning models" will miss rule-based automated decision systems, robotic process automation with cognitive elements, and generative AI tools used informally across the business. The definition of what constitutes an AI system for inventory purposes should be deliberately broad and aligned with regulatory definitions, such as the EU AI Act's expansive framing [1].

Treating the inventory as an IT project. If the inventory is owned exclusively by IT, business-side AI adoption will be systematically under-reported. Conversely, if it is owned exclusively by compliance, technical details will be sparse and unreliable. Cross-functional ownership is not optional.

Pursuing perfection on the first pass. Organisations that insist on completing every field for every system before publishing the inventory will never publish the inventory. A pragmatic approach accepts partial records in the initial build and implements a structured programme to fill gaps over subsequent review cycles.

Neglecting vendor AI. The AI systems with the largest impact on an organisation are frequently those operated by third parties. A background screening provider's AI, a credit scoring bureau's model, a cloud platform's automated security tooling. These require the same governance attention as internally built systems, and often more, given the reduced visibility.

Failing to connect the inventory to action. An AI inventory that exists as a spreadsheet, reviewed annually, disconnected from risk assessment, incident management, and regulatory reporting processes, provides negligible governance value. The inventory must be the operational backbone of the AI governance programme, not an appendix to it.

Underestimating the maintenance burden. The rate of AI adoption in most organisations means that an inventory completed today will be materially incomplete within three to six months without active maintenance processes. Organisations should budget ongoing effort for inventory upkeep, not merely for the initial build.

The difference between an AI inventory that delivers governance value and one that gathers dust is not the sophistication of the initial catalogue. It is the rigour of the process that keeps it current and connected to decision-making.

Practical Implications

The regulatory trajectory is unambiguous. The EU AI Act, ISO 42001, the NIST AI RMF, and an expanding set of sector-specific supervisory expectations all converge on the same foundational requirement: organisations must know what AI they have, where it operates, who is accountable for it, and what risks it presents. The cost of building this capability only increases with delay, as the volume of AI systems in any enterprise grows faster than the capacity to retrospectively catalogue them.

Organisations that begin now, even with an imperfect first pass, will be materially better positioned than those that wait for a regulatory deadline to force action. The template framework, discovery methods, and process structures outlined in this guide provide a concrete starting point.

For organisations seeking to operationalise their AI inventory within a platform purpose-built for AI governance, risk, and compliance, Enzai offers a structured approach to discovery, classification, and ongoing maintenance. Book a demo to see how it works in practice.

References

[1] European Parliament and Council of the European Union, "Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)," Official Journal of the European Union, August 2024.

[2] International Organization for Standardization, "ISO/IEC 42001:2023 - Information technology - Artificial intelligence - Management system," December 2023.

[3] National Institute of Standards and Technology, "Artificial Intelligence Risk Management Framework (AI RMF 1.0)," NIST AI 100-1, January 2023.

[4] Bank of England, FCA, PRA, and PSR, "Artificial intelligence and machine learning," DP5/22, October 2022; PRA, Supervisory Statement SS1/23, 2023.

[5] McKinsey and Company, "The State of AI in Early 2024: Gen AI Adoption Spikes and Starts to Generate Value," May 2024.
