XAI (Explainable AI)
Techniques that make an AI model's decision process transparent and understandable to humans, supporting accountability and regulatory compliance.
Definition
A suite of algorithmic and presentation approaches, including surrogate models (simple decision trees that approximate black-box behavior), feature-attribution methods (SHAP, LIME), counterfactual explanations, and saliency maps, that reveal which inputs drove a particular prediction. XAI emphasizes fidelity (how faithfully an explanation reflects the model's actual behavior), comprehensibility (clarity for the target audience), and actionable insight, and is integrated into both model development and user-facing tools.
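A minimal sketch of one of these approaches, a global surrogate model: a shallow decision tree is trained to mimic a black-box classifier's predictions, and fidelity is measured as how often the surrogate agrees with the black box. The dataset, model choices, and depth limit below are illustrative assumptions, not a prescribed recipe; scikit-learn is assumed to be installed.

# Global surrogate sketch: an interpretable tree approximates a black box.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

# Black-box model whose behavior we want to explain.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
bb_predictions = black_box.predict(X)

# Interpretable surrogate trained to mimic the black box's predictions,
# not the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_predictions)

# Fidelity: agreement between surrogate and black-box predictions.
fidelity = accuracy_score(bb_predictions, surrogate.predict(X))
print(f"Surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))

The surrogate's printed rules are only as trustworthy as its fidelity score, which is why fidelity is reported alongside the explanation.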
Real-World Example
A loan-scoring model uses SHAP to produce per-applicant breakdowns of feature contributions (for example, 35% of the explanation weight from low income, 30% from a short credit history, and 20% from a high debt ratio). Underwriters review these explanations alongside the model's scores to check that decisions are fair and can give clear justifications to applicants and regulators.
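A hypothetical sketch of such a per-applicant breakdown, assuming the shap and scikit-learn packages are installed: the feature names, synthetic data, and the normalization of absolute SHAP values into percentage weights are illustrative choices, not a fixed SHAP output format.

# Per-applicant SHAP breakdown for a toy loan-scoring model.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(55_000, 15_000, 1_000),
    "credit_history_years": rng.integers(0, 25, 1_000),
    "debt_ratio": rng.uniform(0.05, 0.8, 1_000),
})
# Synthetic approval labels for illustration only.
y = ((X["income"] > 45_000) & (X["debt_ratio"] < 0.5)).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer returns additive per-feature contributions (in log-odds
# space for this model); the exact output shape can vary by shap version.
explainer = shap.TreeExplainer(model)
shap_values = np.asarray(explainer.shap_values(X.iloc[[0]])).reshape(-1)

# Express each feature's share of the total absolute contribution as a
# percentage, mirroring an underwriter-facing breakdown.
weights = np.abs(shap_values) / np.abs(shap_values).sum()
for name, w in sorted(zip(X.columns, weights), key=lambda t: -t[1]):
    print(f"{w:.0%} weight: {name}")

The percentage weights are a post-processing step applied for readability; the underlying SHAP values themselves are signed, additive contributions to the model's raw output for that applicant.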