Black Box Model
An AI system whose internal workings are not transparent or interpretable, making it difficult to understand how decisions are made.
Definition
A high-performance but opaque model (e.g., a deep neural network or an ensemble method) delivers accurate results without offering clear insight into its logic. Black-box models pose governance challenges: errors are hard to trace, decisions are hard to justify to stakeholders, and compliance is hard to demonstrate. Organizations often pair them with external, post-hoc explainers or restrict them to low-risk use cases.
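To make the "external explainer" pattern concrete, here is a minimal sketch in Python. It assumes scikit-learn is installed and uses a built-in toy dataset; the model and features are illustrative stand-ins, not the hospital system from the example below. An opaque ensemble is paired with a model-agnostic permutation-importance explainer, which probes the trained model from the outside rather than reading its internals.

```python
# Sketch: pairing an opaque model with an external, post-hoc explainer.
# Assumes scikit-learn; the dataset and model choice are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit an opaque ensemble on a toy diagnostic dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# The ensemble offers no built-in rationale, so attach a model-agnostic
# explainer: permutation importance measures how much held-out accuracy
# drops when each feature's values are shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]
for name, score in top:
    print(f"{name}: mean accuracy drop {score:.3f}")
```

Because the explainer treats the model as a black box, the same audit step works whether the underlying predictor is a random forest, a gradient-boosted ensemble, or a deep network.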
Real-World Example
A hospital’s diagnostic AI uses a deep neural-network ensemble that accurately identifies tumors on scans but cannot explain its reasoning. To govern its use, radiologists deploy it only as a second opinion, always reviewing its output alongside a human interpretation, and regulators require the vendor to supply an external explainer tool for audit trails.