Shadow AI
The use of AI models, agents, or tools by employees without IT approval, creating hidden security vulnerabilities through data leakage and unauthorized autonomous actions.
Definition
Shadow AI represents a significant evolution of "Shadow IT." While traditional Shadow IT involved unapproved SaaS applications (creating data silos), Shadow AI introduces active risks: the leakage of proprietary data into public model training sets and the deployment of autonomous agents that execute actions outside corporate governance. Effective management requires network-level detection (monitoring API traffic to known AI providers), employee education, and the provision of approved, secure alternatives rather than blanket blocking.
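One way to implement the network-level detection mentioned above is to scan egress proxy logs for connections to well-known AI API endpoints. Below is a minimal sketch in Python, assuming a Squid-style proxy access log at a hypothetical path `proxy_access.log` and a hand-picked illustrative domain list; a real deployment would source domains from a maintained feed and run continuously rather than in batch.

```python
import re
from collections import Counter

# Illustrative list of domains associated with public AI providers;
# in practice this would come from a maintained threat-intel feed.
AI_PROVIDER_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

# Matches the destination host in a Squid-style CONNECT log entry,
# e.g. "... CONNECT api.openai.com:443 ...".
HOST_PATTERN = re.compile(r"CONNECT\s+([\w.-]+):\d+")

def flag_shadow_ai_traffic(log_path: str) -> Counter:
    """Count outbound CONNECT requests to known AI provider endpoints."""
    hits: Counter = Counter()
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            match = HOST_PATTERN.search(line)
            if match and match.group(1) in AI_PROVIDER_DOMAINS:
                hits[match.group(1)] += 1
    return hits

if __name__ == "__main__":
    for domain, count in flag_shadow_ai_traffic("proxy_access.log").items():
        print(f"{domain}: {count} unsanctioned request(s)")
```

Domain matching catches casual use of public AI tools, but note that it cannot inspect what data was sent; pairing it with the approved alternatives mentioned above keeps detection from becoming blanket blocking.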
Concrete example
A software engineering team uses a free, public coding assistant to debug a proprietary algorithm. They paste sensitive source code into the prompt window, unaware that the tool's terms of service allow the provider to use inputs for model training. The proprietary code is absorbed into the public model, where it could later surface in other users' completions, potentially allowing competitors to reproduce the algorithm or exposing its security weaknesses to the wider public.