Discrimination
In AI, refers to unfair treatment of individuals or groups based on biases in data or algorithms, leading to unequal outcomes.
Occurs when model predictions or decisions systematically disadvantage protected classes (e.g., race, gender, age). Governance requires defining discrimination metrics (e.g., equal opportunity, demographic parity), embedding fairness constraints in training, and auditing deployed models for disparate impact, with remediation plans when thresholds are violated.
A university’s admissions-prediction tool yields lower admission probabilities for first-generation students. A fairness audit reveals features correlated with legacy applicants driving the disparity. The admissions office removes those proxies and retrains the model to achieve equal true-positive rates across student backgrounds.
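The metrics named above can be checked directly from audit data. A minimal sketch of such a fairness audit, assuming binary labels and a two-group comparison; the `group_rates` helper, the group names, and the toy records are hypothetical, not any particular vendor's API:

```python
# Illustrative fairness-audit sketch (hypothetical data and helper names).
# Computes the demographic parity difference (gap in selection rates) and the
# equal-opportunity gap (difference in true-positive rates) between two groups
# for a binary admissions classifier.
from collections import defaultdict

def group_rates(records):
    """records: iterable of (group, y_true, y_pred) with 0/1 labels."""
    stats = defaultdict(lambda: {"n": 0, "pred_pos": 0, "actual_pos": 0, "true_pos": 0})
    for group, y_true, y_pred in records:
        s = stats[group]
        s["n"] += 1
        s["pred_pos"] += y_pred            # positive predictions
        s["actual_pos"] += y_true          # actual positives
        s["true_pos"] += y_true and y_pred # correctly predicted positives
    return {
        g: {
            "selection_rate": s["pred_pos"] / s["n"],  # basis of demographic parity
            "tpr": s["true_pos"] / s["actual_pos"] if s["actual_pos"] else 0.0,  # equal opportunity
        }
        for g, s in stats.items()
    }

# Toy audit sample: (group, actually admissible, model predicted admit)
records = [
    ("first_gen", 1, 1), ("first_gen", 1, 0), ("first_gen", 0, 0), ("first_gen", 1, 0),
    ("legacy",    1, 1), ("legacy",    1, 1), ("legacy",    0, 1), ("legacy",    1, 1),
]
rates = group_rates(records)
dp_gap = abs(rates["first_gen"]["selection_rate"] - rates["legacy"]["selection_rate"])
eo_gap = abs(rates["first_gen"]["tpr"] - rates["legacy"]["tpr"])
```

In a governance workflow, `dp_gap` and `eo_gap` would be compared against pre-defined thresholds; breaching one triggers the remediation steps described above, such as removing proxy features and retraining.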
