How to discover and govern shadow AI across your enterprise - discovery methods, acceptable use policies, and building governance without blocking innovation.
Shadow AI is already inside every large organisation. Right now, somewhere in yours, an employee is pasting confidential contract language into ChatGPT. A product manager is feeding customer data into an AI summarisation tool. A finance analyst is using a Copilot plugin that nobody in IT approved, trained on, or even knows exists. None of this is hypothetical. A 2024 study by Salesforce found that more than half of generative AI users at work were using tools their employer had not sanctioned [1]. By some estimates, the true figure is considerably higher, because the very nature of unsanctioned usage means it resists measurement.
This phenomenon - employees adopting AI tools outside the view of IT, security, and compliance teams - has acquired a name borrowed from its predecessor problem: shadow AI. But whilst the label echoes the familiar concept of shadow IT, the risks it introduces are materially different, the regulatory stakes are higher, and the window for getting governance right is narrowing fast.
What Shadow AI Looks Like in Practice
The first challenge in governing shadow AI is recognising how pervasive and varied it has become. The most visible form is direct use of consumer-facing AI services: ChatGPT, Google Gemini, Anthropic's Claude, Perplexity, and dozens of smaller tools. Employees sign up with personal email addresses, use free tiers, and begin processing work data within minutes. No procurement process is triggered. No security review occurs.
But direct usage is only one layer. AI capabilities are now embedded inside tools that organisations already pay for. Notion, Canva, Grammarly, Slack, Zoom, Microsoft 365, and Google Workspace all ship AI features that activate automatically or with a single click. When a marketing team member uses Canva's AI image generator, or a salesperson clicks "AI Summary" on a Zoom call recording, they are invoking AI models with enterprise data - often without realising this constitutes AI usage at all.
A third layer involves browser extensions and plugins. Chrome Web Store and similar marketplaces host thousands of AI-powered extensions for writing assistance, email drafting, data extraction, and code generation. These extensions can read page content, intercept form data, and transmit information to external servers. Most organisations have no inventory of what extensions their employees have installed.
The cumulative effect is an organisation running dozens or hundreds of AI systems that its governance, risk, and compliance functions cannot see, cannot assess, and cannot control.
Why Shadow AI Is Not Simply Shadow IT by Another Name
It would be tempting to treat shadow AI as a subcategory of the shadow IT problem that security teams have managed for years. After all, the pattern is similar: employees adopt technology faster than governance can keep pace. But the risk profile diverges in several critical ways.
Data Persistence and Model Training
When an employee uploads a document to an unapproved file-sharing service, the data risk is one of containment: who can access that file, and can it be deleted? With many AI services, the risk extends further. Depending on the provider's terms of service and the specific plan an employee is using, input data may be used to train or fine-tune models [2]. Once data enters a training pipeline, there is no retrieval mechanism. The information becomes part of the model's parametric knowledge, diffused across billions of weights. Traditional data loss prevention approaches assume data can be located and removed. With AI training ingestion, that assumption fails.
Output Risk
Shadow IT typically involves tools that store, move, or display data. AI tools generate new content, and that content can be wrong. When an employee uses an unsanctioned AI tool to draft a regulatory filing, summarise a legal contract, or generate financial projections, hallucinated outputs can propagate into formal business decisions. The organisation bears responsibility for outputs it did not know were AI-generated, created by tools it did not know were in use.
Bias and Discrimination Exposure
AI systems can produce outputs that discriminate on the basis of protected characteristics. If an HR team is using an unapproved AI tool to screen CVs or draft job descriptions, the organisation may be introducing bias into employment decisions without any audit trail. The liability attaches to the organisation regardless of whether anyone authorised the tool's use.
Velocity of Adoption
Traditional shadow IT spread at the pace of software downloads and account creation. Shadow AI spreads at the pace of a browser tab. Many AI tools require no installation, no account, and no payment. An employee can go from curiosity to processing sensitive data in under sixty seconds.
Shadow AI does not merely extend the shadow IT problem. It introduces a qualitatively different category of risk that demands its own governance response.
The Regulatory Case for Visibility
Even organisations that accept a degree of unmanaged technology risk are finding that regulation now demands a higher standard of AI visibility.
The EU AI Act, which entered into force in 2024 and applies in phases, places obligations on both providers and deployers of AI systems [3]. Providers must classify systems by risk tier and ensure compliance with Articles 9-15 for high-risk systems. Deployers of high-risk AI systems bear their own obligations under Article 26 - including using systems according to provider instructions, assigning human oversight, retaining logs, and reporting incidents. Even deployers of lower-risk systems face transparency obligations under Article 50. None of these obligations can be met for systems that exist outside the organisation's knowledge.
ISO/IEC 42001, the international standard for AI management systems, makes an AI inventory a foundational requirement [4]. An organisation cannot claim conformance with the standard whilst operating AI systems it has not identified, assessed, or documented.
In the United States, the NIST AI Risk Management Framework similarly emphasises the need to "map" AI systems as a prerequisite for managing their risks [5]. Executive orders and sector-specific guidance from regulators in financial services, healthcare, and government contracting are converging on the same expectation: organisations must know what AI they are using.
The regulatory logic is straightforward. Risk classification is meaningless without discovery. Compliance obligations cannot apply to invisible systems. An organisation that cannot produce an inventory of its AI usage is not merely ungoverned - it is ungovernable. Platforms such as Enzai exist precisely to close this gap, providing the continuous discovery and classification capabilities that regulation now demands.
Discovering What You Do Not Know
Accepting that shadow AI requires governance is the easier step. The harder one is finding it. Effective discovery requires multiple complementary methods, because no single technique provides complete visibility.
Network Traffic and DNS Analysis
AI services generate distinctive network traffic patterns. Monitoring DNS queries and HTTP/HTTPS traffic for connections to known AI service domains (api.openai.com, generativelanguage.googleapis.com, api.anthropic.com, and so on) provides a baseline view of which services employees are reaching. This method is effective for direct consumer AI usage but less so for AI embedded within approved SaaS tools, where API calls may be made server-side.
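To make the domain-matching concrete, here is a minimal sketch in Python. It assumes DNS query logs have been exported as plain text with one queried domain per line; real resolver logs vary by product and will need format-specific parsing.

```python
# Minimal sketch: count DNS queries to known AI service domains.
# Assumes a plain-text log with one queried domain per line.

AI_DOMAINS = {
    "api.openai.com",
    "generativelanguage.googleapis.com",
    "api.anthropic.com",
    "api.perplexity.ai",
}

def flag_ai_queries(log_path: str) -> dict[str, int]:
    """Return a count of queries per AI-associated domain."""
    hits: dict[str, int] = {}
    with open(log_path) as log:
        for line in log:
            domain = line.strip().lower()
            for ai_domain in AI_DOMAINS:
                # Match the domain itself or any subdomain of it.
                if domain == ai_domain or domain.endswith("." + ai_domain):
                    hits[ai_domain] = hits.get(ai_domain, 0) + 1
    return hits

if __name__ == "__main__":
    for domain, count in sorted(flag_ai_queries("dns_queries.log").items()):
        print(f"{domain}: {count} queries")
```

Note that this baseline is noisy - approved tools call these same endpoints - so the output is a starting point for triage, not a finding in itself.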
SSO and Authentication Log Analysis
Many AI services support single sign-on. Even where employees use personal accounts, authentication logs from identity providers can reveal OAuth consent grants to AI services. Reviewing OAuth application permissions granted through Google Workspace or Microsoft Entra ID often surfaces AI tools that employees have connected to corporate accounts.
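As an illustration, a script along these lines could pull delegated permission grants from Microsoft Graph for review. It assumes you already hold a Graph access token with suitable directory read permissions (token acquisition is omitted), and resolving each grant's clientId to an application display name would require a follow-up service principal lookup.

```python
# Sketch: list delegated OAuth permission grants in Microsoft Entra ID
# via Microsoft Graph, for manual review against known AI services.
# Assumes an access token with Directory.Read.All is already in hand.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def list_oauth_grants(token: str) -> list[dict]:
    """Return all delegated permission grants in the tenant."""
    headers = {"Authorization": f"Bearer {token}"}
    grants, url = [], f"{GRAPH}/oauth2PermissionGrants"
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        grants.extend(data.get("value", []))
        url = data.get("@odata.nextLink")  # follow pagination
    return grants

# Each grant's clientId is a service principal object id; querying
# /servicePrincipals/{id} resolves it to the application's name.
```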
Procurement and Expense Audit
Some shadow AI usage generates a financial trail. Employees or departmental budget holders may expense subscriptions to AI tools, purchase premium tiers on corporate credit cards, or submit invoices for AI services. A targeted review of expense reports and procurement records, specifically searching for known AI vendor names, can identify paid usage that bypassed formal procurement.
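A minimal sketch of that search, assuming the finance system can export expense lines as a CSV with a vendor column (the field name and vendor list here are illustrative - adapt both to your own data):

```python
# Sketch: scan an expense-report export for known AI vendor names.
# Assumes a CSV with a "vendor" column; adjust to your export schema.

import csv

AI_VENDORS = {"openai", "anthropic", "perplexity", "midjourney", "jasper"}

def find_ai_expenses(csv_path: str) -> list[dict]:
    """Return expense rows whose vendor matches a known AI vendor."""
    matches = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            vendor = row.get("vendor", "").lower()
            if any(name in vendor for name in AI_VENDORS):
                matches.append(row)
    return matches
```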
Employee Disclosure Programmes
Technical discovery methods will always have blind spots. Voluntary disclosure programmes, where employees are invited to report AI tools they use without fear of penalty, fill gaps that monitoring cannot reach. The design of these programmes matters: if employees fear disciplinary action, disclosure rates will be negligible. Framing the exercise as an inventory effort rather than an enforcement action produces better results.
Browser Extension Audits
For organisations using managed devices or endpoint management platforms, auditing installed browser extensions provides visibility into AI-powered plugins. Many endpoint detection and response tools can enumerate extensions. Cross-referencing installed extensions against a database of known AI-powered tools identifies shadow usage that generates no network traffic to obviously AI-associated domains.
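As a sketch, assuming your endpoint management platform can export a per-device extension inventory as CSV, the cross-referencing step might look like this. The extension IDs shown are placeholders, not real identifiers; a genuine list would come from your own research or a commercial feed.

```python
# Sketch: cross-reference an endpoint-management export of installed
# browser extensions against a list of known AI-powered extension IDs.
# The CSV schema (device, extension_id columns) is an assumption.

import csv

KNOWN_AI_EXTENSION_IDS = {
    "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",  # placeholder IDs only
    "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
}

def audit_extensions(inventory_csv: str) -> list[tuple[str, str]]:
    """Return (device, extension_id) pairs matching the AI list."""
    findings = []
    with open(inventory_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["extension_id"] in KNOWN_AI_EXTENSION_IDS:
                findings.append((row["device"], row["extension_id"]))
    return findings
```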
API Traffic and Data Flow Analysis
More mature organisations can instrument API gateways and data loss prevention tools to detect patterns consistent with AI service usage: large text payloads sent to external endpoints, responses containing generated content markers, or traffic to IP ranges associated with major AI providers. This approach requires investment but catches usage that simpler methods miss.
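A simplified version of one such heuristic, assuming gateway logs have already been parsed into records with host and request-size fields; the threshold is illustrative, not tuned, and real deployments would combine several signals before raising an alert:

```python
# Sketch: flag large outbound payloads to AI-associated domains in
# parsed gateway logs. Field names and threshold are illustrative.

AI_HOST_SUFFIXES = ("openai.com", "anthropic.com", "googleapis.com")
LARGE_PAYLOAD_BYTES = 10_000  # illustrative, not a tuned threshold

def flag_suspect_requests(records: list[dict]) -> list[dict]:
    """Return log records matching the large-payload-to-AI heuristic."""
    return [
        r for r in records
        if r["host"].endswith(AI_HOST_SUFFIXES)
        and r["request_bytes"] > LARGE_PAYLOAD_BYTES
    ]
```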
No discovery programme should rely on a single method. The most effective approaches layer multiple techniques and treat discovery as a continuous process rather than a one-time audit. An AI governance platform like Enzai can automate this layered discovery, correlating signals across network, identity, procurement, and endpoint data to maintain a living inventory.
Shadow AI Governance Without Prohibition
Discovery is necessary but insufficient. The question that follows is what to do with what you find, and here, many organisations make a strategic error. Faced with the scale of unsanctioned AI usage, the instinct is to prohibit: block domains, revoke access, issue blanket bans. This approach fails for three reasons.
First, prohibition drives usage further underground. Employees who find AI tools genuinely useful will find workarounds - personal devices, mobile hotspots, home networks. The result is less visibility, not less usage.
Second, blanket bans carry a competitive cost. Organisations that prevent employees from using AI tools sacrifice productivity gains that their competitors are capturing. McKinsey estimated in 2023 that generative AI could add between $2.6 trillion and $4.4 trillion in value annually across industries [6]. Forgoing that value is itself a risk.
Third, prohibition signals to employees that the organisation views AI as a threat rather than a capability, which poisons the cultural ground needed for responsible AI adoption in the longer term.
The alternative is structured governance that channels AI usage rather than blocking it.
Amnesty and Baseline
An effective starting point is a time-limited amnesty period during which employees are invited to disclose all AI tools they currently use, with an explicit guarantee of no disciplinary consequences. This establishes a comprehensive baseline that technical discovery alone cannot achieve. The amnesty should be paired with a clear communication that post-amnesty, undisclosed usage will be treated as a policy violation.
Acceptable Use Policies
Rather than a binary approved/prohibited distinction, mature organisations develop acceptable use policies that define categories of AI usage: what data classifications can be processed with AI tools, what types of outputs require human review before use, and what disclosure obligations apply when AI-generated content is used in formal deliverables.
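One way to make such a policy operational is to express it as data rather than prose, so that tooling can enforce it. The sketch below is illustrative only - the classification names and rules are assumptions, not a recommended policy:

```python
# Sketch: an acceptable use policy as data. Classifications and rules
# are illustrative; a real policy would be set by governance teams.

POLICY = {
    "public":       {"ai_allowed": True,  "human_review": False},
    "internal":     {"ai_allowed": True,  "human_review": True},
    "confidential": {"ai_allowed": False, "human_review": True},
    "restricted":   {"ai_allowed": False, "human_review": True},
}

def check_usage(data_classification: str, tool_approved: bool) -> str:
    """Evaluate a proposed AI usage against the policy rules."""
    rule = POLICY[data_classification]
    if not rule["ai_allowed"]:
        return "blocked: data classification prohibits AI processing"
    if not tool_approved:
        return "blocked: use an approved tool for this data"
    suffix = " (human review required)" if rule["human_review"] else ""
    return "allowed" + suffix
```

Expressing the policy as data means the same rules can drive request forms, DLP configuration, and training content, rather than living only in a document.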
Approved Tool Lists and the Paved Road
The "paved road" concept, borrowed from platform engineering, is particularly effective for shadow AI governance. Instead of erecting barriers, the organisation builds a well-lit, well-maintained path that is easier to follow than the alternative. This means providing approved AI tools that meet security and compliance requirements, pre-configured with appropriate data handling settings, integrated with corporate identity, and supported by training and documentation. When the sanctioned option is genuinely good, the incentive to seek unsanctioned alternatives diminishes.
Sandboxed Environments
For experimentation with new AI tools or capabilities not yet on the approved list, organisations can provide sandboxed environments where employees can test tools with synthetic or non-sensitive data. This preserves the innovation benefit of exploration whilst containing the data risk.
Continuous Review and Feedback
Governance frameworks that remain static become obstacles. Establishing a regular cadence for reviewing and updating the approved tool list, incorporating employee feedback, and evaluating new tools ensures the governance framework evolves at something closer to the pace of AI development.
The goal is not to eliminate all risk from AI usage but to make the governed path so clearly superior that ungoverned usage becomes unnecessary.
Building a Sustainable Discovery Process
One-time discovery efforts produce a snapshot. Shadow AI is a continuous phenomenon - new tools launch weekly, existing tools add AI features, employees change roles and adopt new workflows. A sustainable discovery process must be continuous and integrated into broader organisational processes.
Integration with Procurement
AI discovery should be embedded in procurement workflows. Every new SaaS tool evaluation should include an assessment of embedded AI capabilities, data handling for AI features, and model training policies. Procurement teams need training and checklists to ask the right questions, because vendors do not always foreground AI functionality in their sales materials.
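A lightweight way to operationalise this is to encode the questions as a checklist the evaluation workflow must complete before approval. The wording below is illustrative, not a complete due diligence questionnaire:

```python
# Sketch: AI-specific questions embedded in a SaaS evaluation.
# Question wording is illustrative only.

AI_PROCUREMENT_CHECKLIST = [
    "Does the product embed AI or ML features, even optional ones?",
    "Can AI features be disabled at the tenant level?",
    "Is customer data used to train or fine-tune the vendor's models?",
    "Where is data sent when AI features are invoked, and is it retained?",
    "Does the contract include AI-specific data handling terms?",
]

def unanswered(answers: dict[str, str]) -> list[str]:
    """Return checklist questions that still lack an answer."""
    return [q for q in AI_PROCUREMENT_CHECKLIST if not answers.get(q)]
```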
Integration with Onboarding and Role Changes
New employees bring AI habits from previous employers. Onboarding processes should include an AI usage disclosure step and an introduction to the organisation's AI governance framework and approved tools. Similarly, when employees change roles and gain access to new data classifications, their AI tool authorisations should be reviewed.
Change Management, Not Just Compliance
Sustainable shadow AI governance requires cultural change, not merely policy enforcement. Employees need to understand why AI governance exists - not as bureaucratic overhead but as protection for the organisation, its customers, and themselves. Training programmes should be practical, scenario-based, and updated regularly to reflect the evolving tool landscape.
Metrics and Reporting
What gets measured gets managed. Organisations should track shadow AI discovery rates over time, time-to-governance for newly discovered tools, employee satisfaction with approved AI tools, and policy exception request volumes. These metrics provide early warning when governance is falling behind adoption and when approved tools are failing to meet employee needs.
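For instance, time-to-governance falls directly out of inventory records, assuming each one captures when a tool was discovered and when it was brought under policy (the field names here are illustrative):

```python
# Sketch: median time-to-governance from inventory records.
# Assumes each record holds discovery and governance dates.

from datetime import date
from statistics import median

tools = [
    {"name": "Tool A", "discovered": date(2025, 1, 6),  "governed": date(2025, 1, 20)},
    {"name": "Tool B", "discovered": date(2025, 1, 13), "governed": date(2025, 2, 24)},
]

days_to_governance = [(t["governed"] - t["discovered"]).days for t in tools]
print(f"median time-to-governance: {median(days_to_governance)} days")
```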
Executive Accountability
Shadow AI governance cannot live solely within IT or information security. It requires executive sponsorship, ideally from a Chief AI Officer or equivalent role, with clear accountability for maintaining AI inventory completeness and governance coverage. Board-level reporting on AI governance posture is becoming standard practice among organisations that take the risk seriously.
A discovery process that is continuous, integrated, measured, and sponsored will catch what a one-time audit misses and adapt to what next quarter's AI landscape brings.
Practical Implications
Shadow AI is not a problem that resolves itself. Left unaddressed, it compounds: more tools, more data exposure, more regulatory risk, more decisions informed by unvetted AI outputs. The organisations that manage this well will be those that treat discovery as a continuous operational capability rather than a project, that govern through enablement rather than prohibition, and that invest in the cultural and procedural infrastructure to sustain both.
The practical path forward is clear. Establish visibility through layered discovery. Build governance that employees want to follow. Integrate AI oversight into the rhythms of procurement, onboarding, and change management. Measure progress and hold leadership accountable.
For organisations ready to move from ad hoc response to structured shadow AI governance, Enzai provides the platform to bring unsanctioned AI usage under control - from continuous discovery and inventory through to policy management and compliance reporting. Book a demo to see how it works in practice.
References
[1] Salesforce, "The Promises and Pitfalls of AI at Work," Salesforce Research, 2024.
[2] OpenAI, "How your data is used to improve model performance," OpenAI Help Centre, updated 2025.
[3] European Parliament and Council, Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (EU AI Act), 2024.
[4] International Organization for Standardization, ISO/IEC 42001:2023 Information technology - Artificial intelligence - Management system, 2023.
[5] National Institute of Standards and Technology, "AI Risk Management Framework (AI RMF 1.0)," NIST AI 100-1, January 2023.
[6] McKinsey Global Institute, "The economic potential of generative AI: The next productivity frontier," McKinsey & Company, June 2023.