What Is Precision (AI Metric)?
Precision is a classification metric that measures the proportion of positive predictions that are actually correct — calculated as true positives / (true positives + false positives) — indicating how trustworthy the model's positive predictions are.
How Precision (AI Metric) Works
Precision answers the question: 'When the model says yes, how often is it right?' A precision of 0.95 means that 95% of the items the model flags as positive truly are positive, with only 5% being false alarms. High precision is critical wherever false positives are costly or dangerous — for example, spam filtering (sending legitimate emails to spam), content moderation (wrongly removing harmless content), or medical screening (triggering unnecessary procedures through false diagnoses). Precision often trades off against recall: tuning a model to raise precision tends to lower recall, and vice versa, so practitioners must find the right balance for their use case.
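The formula above is simple enough to sketch directly. The following is a minimal, illustrative Python function (the counts in the example call are made up to match the 0.95 scenario described here, not drawn from any real system):

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Precision = TP / (TP + FP).

    Returns 0.0 when the model made no positive predictions,
    since the ratio is undefined in that case.
    """
    denominator = true_positives + false_positives
    return true_positives / denominator if denominator else 0.0

# Hypothetical spam filter: of 100 emails flagged as spam,
# 95 really are spam (TP) and 5 are legitimate (FP).
print(precision(95, 5))  # 0.95 — 5% of its "spam" calls are false alarms
```

Note that precision only looks at what the model flagged positive; positives the model missed (false negatives) do not appear in the formula at all, which is why recall is needed as a companion metric.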
Real-World Examples
A spam filter with 0.99 precision, meaning only 1% of emails it marks as spam are actually legitimate
A content moderation system optimized for high precision to avoid wrongly censoring legitimate user posts
A recruiting AI with precision of 0.85 for 'qualified candidate' predictions, meaning 15% of its recommendations are not suitable
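The precision/recall trade-off mentioned above can be seen by sweeping a decision threshold over model scores. This sketch uses an invented toy dataset (the scores and labels are illustrative, not from any real classifier):

```python
# Hypothetical model scores and ground-truth labels (1 = positive class).
scores = [0.95, 0.90, 0.80, 0.60, 0.55, 0.40, 0.30, 0.20]
labels = [1,    1,    0,    1,    0,    1,    0,    0]

def precision_recall(threshold: float) -> tuple[float, float]:
    """Compute (precision, recall) when predicting positive for score >= threshold."""
    preds = [s >= threshold for s in scores]
    tp = sum(p and y for p, y in zip(preds, labels))        # flagged and truly positive
    fp = sum(p and not y for p, y in zip(preds, labels))    # flagged but actually negative
    fn = sum((not p) and y for p, y in zip(preds, labels))  # missed positives
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return prec, rec

print(precision_recall(0.50))  # (0.6, 0.75) — lenient threshold: more recall, less precision
print(precision_recall(0.85))  # (1.0, 0.5)  — strict threshold: more precision, less recall
```

Raising the threshold makes the model flag only its most confident cases, which boosts precision at the cost of missing true positives — exactly the balance a spam filter or moderation system must tune for its use case.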