Fairness Metrics (AI)

What Are Fairness Metrics (AI)?

Definition

Fairness metrics are quantitative measurements used to evaluate whether an AI model's predictions and performance are equitable across different demographic groups, helping identify and quantify discriminatory patterns in model behavior.

How Fairness Metrics (AI) Work

There are many ways to define and measure fairness, and they often conflict with each other. Common fairness metrics include demographic parity (equal positive prediction rates across groups), equalized odds (equal true positive and false positive rates across groups), and calibration (predicted probabilities that match observed outcome rates within each group). Mathematically, it is impossible to satisfy all fairness criteria simultaneously except in trivial cases, so practitioners must choose which fairness definition best fits their context. For example, a hiring model might prioritize equal selection rates (demographic parity), while a medical model might prioritize equal detection rates for each group (equalized odds). Fairness metrics are increasingly required by AI regulations and are part of responsible AI development practices.
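As a minimal sketch of how these metrics are computed in practice, the function below (a hypothetical helper, not from any particular library) takes true labels, binary predictions, and a two-valued group attribute, and reports the between-group gaps for demographic parity and for the two equalized-odds rates. A gap of zero means the criterion is satisfied exactly for that pair of groups.

```python
import numpy as np

def fairness_gaps(y_true, y_pred, group):
    """Compute between-group gaps for common fairness metrics.

    y_true, y_pred: binary (0/1) arrays; group: binary group membership.
    Returns absolute gaps: 0.0 means the criterion holds exactly.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in (0, 1):
        mask = group == g
        # Demographic parity compares selection rates: P(pred = 1 | group = g)
        selection_rate = y_pred[mask].mean()
        # Equalized odds compares TPR and FPR per group
        tpr = y_pred[mask & (y_true == 1)].mean()  # P(pred=1 | y=1, group=g)
        fpr = y_pred[mask & (y_true == 0)].mean()  # P(pred=1 | y=0, group=g)
        rates[g] = (selection_rate, tpr, fpr)
    return {
        "demographic_parity_gap": abs(rates[0][0] - rates[1][0]),
        "tpr_gap": abs(rates[0][1] - rates[1][1]),
        "fpr_gap": abs(rates[0][2] - rates[1][2]),
    }
```

Note that the three gaps can disagree: a model can have a zero demographic-parity gap while its TPR and FPR gaps are large, which is one concrete face of the impossibility result mentioned above.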

Real-World Examples

1. A lending AI being audited for demographic parity to ensure approval rates are similar across racial groups

2. A healthcare AI measuring equalized odds to verify that it detects disease equally well for all age groups

3. A company's AI ethics board requiring fairness metric reports before any model is deployed to production
