Impact Assessment

A structured evaluation to identify, analyze, and mitigate potential ethical, legal, and societal impacts of an AI system before deployment.

Definition

A formal process, often modeled on environmental-impact studies, in which multidisciplinary teams map AI use cases to stakeholder groups, enumerate possible harms (privacy, bias, safety), score each for severity and likelihood, and prescribe mitigation measures. It produces an “impact register” and an action plan that governance bodies must approve before any production release.
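
The steps above can be sketched as a minimal data structure. The field names, the 1–5 scales, and the severity × likelihood threshold are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    harm: str                  # e.g. "privacy", "bias", "safety"
    stakeholders: list         # affected groups
    severity: int              # 1 (minor) .. 5 (critical), assumed scale
    likelihood: int            # 1 (rare) .. 5 (near-certain), assumed scale
    mitigations: list = field(default_factory=list)

    @property
    def score(self) -> int:
        # Common severity x likelihood heuristic for ranking risks
        return self.severity * self.likelihood

def needs_mitigation(risk: Risk, threshold: int = 10) -> bool:
    # Risks at or above the threshold must carry mitigation measures
    return risk.score >= threshold

# A toy impact register: one entry per identified harm
register = [
    Risk("bias", ["job applicants"], severity=4, likelihood=3,
         mitigations=["diverse training data", "human review"]),
    Risk("privacy", ["job applicants"], severity=3, likelihood=2),
]

for r in register:
    print(r.harm, r.score, needs_mitigation(r))
```

The multiplicative score is just one convention; teams may instead use a qualitative matrix, but the register itself (harm, stakeholders, scores, mitigations) stays structurally the same.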

Real-World Example

Before rolling out an automated hiring screener, an HR department conducts an impact assessment: they simulate candidate pipelines, identify risks of gender bias, score them as high-impact, design a bias-mitigation plan (diverse training data, human review), and secure sign-off from legal and ethics committees.
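
The final sign-off step in this example can be expressed as a simple release gate. This is a hypothetical sketch: the dictionary keys, the threshold of 10, and the set of required approvers are assumptions for illustration:

```python
def release_approved(risks, signoffs,
                     required=frozenset({"legal", "ethics"}),
                     threshold=10):
    """Block deployment until every high-scoring risk has at least one
    mitigation and every required governance body has signed off."""
    high = [r for r in risks if r["severity"] * r["likelihood"] >= threshold]
    all_mitigated = all(r["mitigations"] for r in high)
    return all_mitigated and required <= signoffs

# The hiring-screener scenario: gender bias scored high, mitigations planned
risks = [{"harm": "gender bias", "severity": 4, "likelihood": 3,
          "mitigations": ["diverse training data", "human review"]}]

print(release_approved(risks, {"legal", "ethics"}))   # True: mitigated and approved
print(release_approved(risks, {"legal"}))             # False: ethics sign-off missing
```

Encoding the gate as code makes the governance rule auditable: a missing mitigation or sign-off blocks the release mechanically rather than by convention.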