Transparency
The practice of making an AI system's processes, decision logic, and data usage clear and understandable to stakeholders, in support of accountability.
Definition
Transparency involves public or stakeholder-facing disclosures (model cards, data sheets, API documentation) that describe how data are collected, how models are trained, what assumptions they embed, and how decisions are made. It also includes user-friendly explanations of individual decisions and clear version histories. Governance embeds transparency requirements into project charters and mandates regular documentation updates as systems evolve.
Real-World Example
A public health agency publishes a "Model Card" for its COVID-19 hospitalization predictor that details training data sources, performance metrics by region, known limitations, and an update log, allowing clinicians and policymakers to understand and trust the model's outputs.
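A model card like the one above can be maintained as structured, machine-readable metadata alongside the model itself. The sketch below is a minimal illustration, not a formal standard; all field names and the example values are hypothetical assumptions chosen to mirror the sections described in this entry.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Minimal model-card record. Fields are illustrative, not a standard schema."""
    model_name: str
    version: str
    training_data_sources: list           # where the training data came from
    performance_by_region: dict           # metrics disaggregated by region
    known_limitations: list               # documented failure modes and caveats
    update_log: list = field(default_factory=list)  # version history

# Hypothetical card for the hospitalization predictor described above.
card = ModelCard(
    model_name="covid19-hospitalization-predictor",
    version="2.3.0",
    training_data_sources=["State hospital admissions, 2020-2022"],
    performance_by_region={"Northeast": {"AUC": 0.82}, "South": {"AUC": 0.74}},
    known_limitations=["Underrepresents rural facilities in training data"],
    update_log=["2.3.0: retrained on 2022 admissions data"],
)

# Serialize to JSON so the card can be published with each model release.
print(json.dumps(asdict(card), indent=2))
```

Keeping the card in version control next to the model code makes the mandated "regular documentation updates" auditable: each retraining commit can be required to touch the card's version and update log.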