Surveillance Risk
The threat that AI systems may be exploited for invasive monitoring of individuals or groups, infringing on privacy and civil liberties.
Definition
The potential that AI—especially computer-vision and behavior-analysis tools—could be used to track, profile, or discriminate against people without their consent. Mitigation involves strict use-case definitions, anonymization requirements, access controls, and transparency measures (e.g., public notice of surveillance zones). Governance includes regular audits of surveillance use, civil-rights impact assessments, and legal reviews to ensure compliance with privacy laws.
Real-World Example
A city council debated installing AI-powered smart cameras with object detection. To address surveillance risk, it mandated that the cameras blur faces in real time, store only aggregate pedestrian counts, and display public signage explaining the AI usage—ensuring compliance with privacy regulations and civic expectations.
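The "store only aggregate counts" mitigation can be sketched in code. The class below is a hypothetical illustration (the name `PrivacyPreservingCounter` and the detection format are assumptions, not from any specific system): per-frame detections from an upstream detector are counted and immediately discarded, so no identities, images, or trajectories are ever retained.

```python
from collections import defaultdict

class PrivacyPreservingCounter:
    """Hypothetical aggregator: keeps hourly pedestrian counts only.

    Raw detections are dropped as soon as they are counted, so nothing
    identifying (faces, bounding boxes, track IDs) is ever stored.
    """

    def __init__(self):
        self._hourly_counts = defaultdict(int)

    def ingest_frame(self, hour, detections):
        # `detections` is a list of bounding boxes from an assumed
        # upstream detector; only the count is retained.
        self._hourly_counts[hour] += len(detections)
        # The boxes go out of scope here -- nothing else is kept.

    def report(self):
        # Only aggregate, non-identifying counts leave this class.
        return dict(self._hourly_counts)

counter = PrivacyPreservingCounter()
counter.ingest_frame(hour=9, detections=[(10, 20, 50, 80), (100, 30, 40, 90)])
counter.ingest_frame(hour=9, detections=[(5, 5, 60, 70)])
print(counter.report())  # {9: 3}
```

Designing the data model so that identifying information cannot be stored, rather than relying on policy alone, is one concrete way to make the anonymization requirement auditable.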