
Envisioning is an emerging technology research institute and advisory.


2011 — 2026


IoU (Intersection Over Union)

A metric measuring overlap between predicted and ground-truth bounding boxes in object detection.

Year: 2010 · Generality: 694

Intersection over Union (IoU) is a standard evaluation metric in computer vision that quantifies how well a predicted bounding box aligns with a ground-truth bounding box. It is computed by dividing the area of overlap between the two boxes by the area of their union — producing a value between 0 and 1, where 0 indicates no overlap and 1 indicates a perfect match. This simple geometric ratio gives practitioners an intuitive, threshold-based way to decide whether a detection should count as a true positive or a false positive.
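The ratio can be computed directly from box coordinates. A minimal sketch in Python, assuming boxes are given in (x1, y1, x2, y2) corner format:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    # Coordinates of the intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp at zero: non-overlapping boxes have empty intersection
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes score 1.0, disjoint boxes score 0.0, and partial overlap falls in between; for example, two 10×10 boxes shifted by half a width share 50 of 150 units of union, giving an IoU of 1/3.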

In practice, IoU is applied by setting a threshold — commonly 0.5 — above which a predicted box is considered a correct detection. This threshold-based judgment feeds directly into precision and recall calculations, and ultimately into the mean Average Precision (mAP) scores that dominate object detection leaderboards. Datasets and benchmarks such as PASCAL VOC and MS COCO popularized IoU as the canonical evaluation protocol, and their widespread adoption cemented it as the community standard throughout the 2010s.
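The thresholded matching that feeds precision and recall can be sketched as a greedy matcher. This is an illustrative simplification (predictions assumed pre-sorted by confidence, one ground-truth match per prediction), not the exact protocol of any particular benchmark:

```python
def _iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def classify_detections(preds, gts, iou_thresh=0.5):
    """Greedily match predictions (sorted by confidence) against ground truth."""
    matched = set()
    tp = fp = 0
    for p in preds:
        # Find the best unmatched ground-truth box for this prediction
        best_iou, best_j = 0.0, -1
        for j, g in enumerate(gts):
            if j not in matched:
                v = _iou(p, g)
                if v > best_iou:
                    best_iou, best_j = v, j
        if best_iou >= iou_thresh:
            tp += 1          # correct detection
            matched.add(best_j)
        else:
            fp += 1          # spurious or poorly localized detection
    fn = len(gts) - len(matched)  # unmatched ground truths are misses
    return tp, fp, fn
```

From the returned counts, precision is tp / (tp + fp) and recall is tp / (tp + fn); sweeping the confidence cutoff over these counts is what produces the precision-recall curves underlying mAP.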

Beyond evaluation, IoU has become embedded in the training process itself. Loss functions such as GIoU (Generalized IoU), DIoU, and CIoU extend the basic overlap ratio to provide smoother gradients and better geometric guidance during optimization, addressing cases where standard IoU produces zero gradient for non-overlapping boxes. These IoU-based losses have improved convergence and localization accuracy in modern detectors like YOLO variants and DETR.
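As one illustration of these extensions, GIoU subtracts a penalty based on the smallest box enclosing both inputs, so even non-overlapping boxes receive a non-trivial signal. A sketch, assuming (x1, y1, x2, y2) corner format:

```python
def giou(box_a, box_b):
    """Generalized IoU: plain IoU minus an enclosing-box penalty. Range (-1, 1]."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    iou_val = inter / union if union > 0 else 0.0
    # Smallest axis-aligned box enclosing both inputs
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    # Penalize the "wasted" enclosing area not covered by the union
    return iou_val - (c_area - union) / c_area

def giou_loss(pred, target):
    """Loss used for training: 0 for a perfect match, approaching 2 for distant boxes."""
    return 1.0 - giou(pred, target)
```

Where plain IoU is flat at 0 for any pair of disjoint boxes, GIoU goes negative as the boxes move apart, so the loss still tells the optimizer which direction reduces the gap.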

IoU also generalizes to segmentation tasks, where it is applied pixel-wise rather than box-wise — often called the Jaccard index — to measure how well a predicted segmentation mask covers the ground-truth region. Whether used for bounding boxes or masks, IoU's appeal lies in its geometric interpretability and its direct correspondence to the localization quality that matters in real-world applications such as autonomous driving, medical imaging, and surveillance. Its simplicity has made it one of the most durable and universally adopted metrics in all of computer vision.
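In the segmentation setting, the same ratio reduces to set intersection over set union, which is exactly the Jaccard index. A sketch representing masks as sets of pixel coordinates (real implementations would use boolean arrays for speed):

```python
def mask_iou(mask_a, mask_b):
    """Pixel-wise IoU (Jaccard index) of two masks given as sets of (row, col) pixels."""
    inter = len(mask_a & mask_b)   # pixels labeled foreground in both masks
    union = len(mask_a | mask_b)   # pixels labeled foreground in either mask
    return inter / union if union else 0.0
```

For example, two two-pixel masks that share a single pixel cover three pixels in union, giving an IoU of 1/3.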

Related

AUCPR (Area Under the Precision-Recall Curve)
A classification metric summarizing precision-recall trade-offs across decision thresholds.
Generality: 595

Object Detection
A computer vision task that identifies and localizes multiple objects within images.
Generality: 838

Average Precision
A single-score metric summarizing model performance across all precision-recall thresholds.
Generality: 700

Precision-Recall Curve
A plot evaluating classifier performance by trading off precision against recall across thresholds.
Generality: 729

Inter-Rater Agreement
A statistical measure of consistency among multiple human annotators labeling the same data.
Generality: 581

Evaluation Overtime Function
A function measuring how model performance changes or degrades over extended time periods.
Generality: 293