Risk Assessment

The process of identifying, analyzing, and prioritizing potential harms or failures in AI systems to determine appropriate mitigation strategies.

Definition

A systematic activity that catalogs threats (e.g., bias, privacy breaches, security exploits), evaluates their likelihood and potential impact (financial, reputational, safety), and ranks them by a risk score, commonly computed as likelihood multiplied by impact. The output is a risk register that drives mitigation planning. Risk assessments are revisited whenever models, data, or operational contexts change, so that emerging threats are captured throughout the AI lifecycle.
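
As a minimal sketch, a risk register can be modeled as a list of scored entries sorted by priority. The `Risk` class, its 1–5 rating scales, and the multiplicative scoring rule below are illustrative assumptions, not a standard API:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in the risk register (illustrative fields)."""
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)
    mitigation: str = "TBD"

    @property
    def score(self) -> int:
        # A common convention: risk score = likelihood x impact.
        return self.likelihood * self.impact

def build_register(risks: list[Risk]) -> list[Risk]:
    """Rank risks from highest to lowest score to drive mitigation planning."""
    return sorted(risks, key=lambda r: r.score, reverse=True)
```

Revisiting the assessment then amounts to re-rating the entries and re-sorting the register whenever the model, data, or operating context changes.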

Real-World Example

Before deploying a credit-decision AI, a bank’s risk team maps risks (e.g., false positives denying credit, data leaks), assigns likelihood and impact ratings, and ranks the results to identify the top risks. They then develop targeted controls—enhanced human review for denied applications and encrypted logging—to address the highest-priority items.
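
Continuing the sketch above, the bank scenario might populate the register like this; the specific ratings and entries are hypothetical:

```python
register = build_register([
    Risk("Wrongful credit denial (false positive)", likelihood=3, impact=4,
         mitigation="Enhanced human review for denied applications"),
    Risk("Applicant data leak", likelihood=2, impact=5,
         mitigation="Encrypted logging and access controls"),
])

# Print risks in priority order, highest score first.
for r in register:
    print(f"{r.score:>2}  {r.name} -> {r.mitigation}")
```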