False Positive
When an AI model incorrectly predicts a positive class for an instance that is actually negative (Type I error).
Definition
A scenario in which the model flags benign cases as positive, for example as fraudulent, malicious, or diseased. This is common in security monitoring, fraud detection, and medical screening. High false-positive rates can overwhelm human reviewers, erode trust, and incur unnecessary costs. Governance requires monitoring precision, setting acceptable false-positive thresholds, and implementing human-review triggers that batch or prioritize alerts effectively, as in the sketch below.
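As a rough illustration of that monitoring loop, the sketch below computes precision and the false-positive rate from labeled outcomes and checks them against a review-escalation limit. The function names, counts, and the 5% limit are hypothetical placeholders chosen for this example, not values from any specific standard or framework.

```python
# Minimal sketch: monitoring precision and false-positive rate from labeled
# outcomes, then checking them against a governance limit. All names and
# threshold values here are illustrative assumptions.

def precision(tp: int, fp: int) -> float:
    """Fraction of flagged cases that were truly positive."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def false_positive_rate(fp: int, tn: int) -> float:
    """Fraction of actual negatives that were incorrectly flagged (Type I errors)."""
    return fp / (fp + tn) if (fp + tn) else 0.0

def needs_human_review(fpr: float, max_acceptable_fpr: float = 0.05) -> bool:
    """Trigger a human-review escalation when the FPR drifts past the agreed limit."""
    return fpr > max_acceptable_fpr

# Example counts from a labeled validation sample:
# 40 true positives, 60 false positives, 900 true negatives.
tp, fp, tn = 40, 60, 900
fpr = false_positive_rate(fp, tn)
print(f"precision = {precision(tp, fp):.2f}")        # 0.40
print(f"false positive rate = {fpr:.3f}")            # 0.062
print(f"escalate to human review: {needs_human_review(fpr)}")  # True
```

In this toy example, only 40% of alerts are genuine, so reviewers spend most of their time clearing false alarms; tracking precision alongside the false-positive rate makes that reviewer burden visible.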
Real-World Example
A credit-card fraud-detection AI has a 10% false-positive rate, meaning roughly 1 in 10 legitimate transactions is incorrectly flagged and blocked. The bank sets up a rapid-response team to review flagged transactions within 5 minutes, raises model score thresholds to reduce false alarms during peak shopping hours (see the sketch below), and communicates pre-authorization rules to customers to minimize inconvenience.
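The threshold adjustment mentioned above can be illustrated with a small sketch: raising the fraud-score cutoff during peak hours lowers the share of legitimate transactions that get blocked. The scores and cutoffs below are made-up illustrations, not values from any real bank's model.

```python
# Minimal sketch of threshold tuning: a higher score cutoff blocks fewer
# legitimate transactions. Scores and cutoffs are hypothetical.

legit_scores = [0.02, 0.11, 0.35, 0.48, 0.52, 0.61, 0.07, 0.44, 0.55, 0.09]

def blocked_share(scores, cutoff):
    """Share of (legitimate) transactions whose fraud score exceeds the cutoff."""
    return sum(s > cutoff for s in scores) / len(scores)

print(blocked_share(legit_scores, cutoff=0.40))  # 0.5 -> half of legitimate traffic blocked
print(blocked_share(legit_scores, cutoff=0.60))  # 0.1 -> far fewer false alarms at peak
```

The trade-off is that a higher cutoff also lets more genuine fraud through, which is why the acceptable threshold is a governance decision balancing customer friction against fraud losses rather than a purely technical one.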