What Is Recall (AI Metric)?
Recall (also called sensitivity) is a classification metric that measures the proportion of actual positive cases that the model correctly identifies — calculated as true positives / (true positives + false negatives) — indicating how completely the model detects positive instances.
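The formula above can be sketched as a small helper. This is a minimal illustration, not from the original text; the function name and counts are hypothetical.

```python
def recall(true_positives: int, false_negatives: int) -> float:
    """Recall = TP / (TP + FN): the share of actual positives the model caught."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical counts: 95 positives caught, 5 missed
print(recall(95, 5))  # 0.95
```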
How Recall (AI Metric) Works
Recall answers the question: "Of all the actual positives, how many did the model catch?" A recall of 0.95 means the model identifies 95% of all true positive cases, missing only 5%. High recall is critical when missing a positive case is costly or dangerous — for example, cancer detection (missing a tumor could be fatal), fraud detection (missed fraud costs money), or security screening (missed threats are unacceptable). Recall typically trades off against precision: you can achieve perfect recall by predicting everything as positive, but precision then collapses to the positive base rate. The F1 score balances both metrics.
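The degenerate "predict everything as positive" case described above can be demonstrated directly. The toy labels below are invented for illustration; the point is that recall hits 1.0 while precision drops to the base rate, and F1 exposes the imbalance.

```python
# Hypothetical binary labels: 3 actual positives out of 10 samples
labels = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
predictions = [1] * len(labels)  # degenerate model: predicts positive for everything

tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)

recall = tp / (tp + fn)        # 1.0 — every actual positive is "caught"
precision = tp / (tp + fp)     # 0.3 — just the positive base rate
f1 = 2 * precision * recall / (precision + recall)

print(recall, precision, round(f1, 3))  # 1.0 0.3 0.462
```

The F1 score (the harmonic mean of precision and recall) sits closer to the weaker of the two metrics, which is why it penalizes this kind of one-sided model.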
Real-World Examples
A cancer detection AI with recall of 0.98, catching 98% of all tumors, even if some false positives require additional testing
A fraud detection system with recall of 0.92, catching 92% of fraudulent transactions while accepting some false alarms
A search engine optimizing for recall to ensure no relevant documents are missed, even if some irrelevant results appear