- **11-26x faster.** Keypoints in 0.19s, bbox in 0.74s on COCO val2017. Faster than faster-coco-eval across every eval type.
- **Same numbers, every time.** Verified against pycocotools with a 10,000+ case parity test suite. Your AP scores won't change.
- **No compiler required.** Pure Rust with prebuilt wheels for Linux, macOS, and Windows. No Cython, no C extensions, no Microsoft Build Tools.
- **One line to migrate.** Call `init_as_pycocotools()` and your existing Detectron2, YOLO, mmdetection, or RF-DETR code works unchanged.
## Quick start
Python:

```python
from hotcoco import COCO, COCOeval

coco_gt = COCO("instances_val2017.json")
coco_dt = coco_gt.load_res("detections.json")

ev = COCOeval(coco_gt, coco_dt, "bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()
```
Rust:

```rust
use std::path::Path;

use hotcoco::params::IouType;
use hotcoco::{COCO, COCOeval};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let coco_gt = COCO::new(Path::new("instances_val2017.json"))?;
    let coco_dt = coco_gt.load_res(Path::new("detections.json"))?;

    let mut ev = COCOeval::new(coco_gt, coco_dt, IouType::Bbox);
    ev.evaluate();
    ev.accumulate();
    ev.summarize();
    Ok(())
}
```
Or from the command line:

```bash
coco-eval --gt instances_val2017.json --dt detections.json --iou-type bbox
```
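The detections file in the examples above uses the standard COCO results format: a flat JSON array with one object per detection, where `bbox` is `[x, y, width, height]` in absolute pixel coordinates. A minimal sketch of writing one (the image IDs, categories, boxes, and scores below are illustrative, not real results):

```python
import json

# Standard COCO results format: one record per detection.
# bbox is [x, y, width, height] in absolute pixel coordinates.
detections = [
    {"image_id": 397133, "category_id": 1, "bbox": [100.0, 50.0, 80.0, 200.0], "score": 0.92},
    {"image_id": 397133, "category_id": 3, "bbox": [20.0, 30.0, 60.0, 40.0], "score": 0.41},
]

with open("detections.json", "w") as f:
    json.dump(detections, f)
```

Any file in this shape can then be passed as the `--dt` argument or loaded via `load_res`.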
## Performance
Benchmarked on COCO val2017 (5,000 images, 36,781 GT annotations, ~43,700 detections), Apple M1 MacBook Air:
| Eval Type | pycocotools | faster-coco-eval | hotcoco |
|---|---|---|---|
| bbox | 11.79s | 3.47s (3.4x) | 0.74s (15.9x) |
| segm | 19.49s | 10.52s (1.9x) | 1.58s (12.3x) |
| keypoints | 4.79s | 3.08s (1.6x) | 0.19s (25.0x) |
Speedups in parentheses are relative to pycocotools. The speed does not come at the cost of accuracy: results are verified against pycocotools with a 10,000+ case parity test suite, so your AP scores won't change.
## Zero-code migration
Already using pycocotools? You don't need to touch your existing code:
```python
from hotcoco import init_as_pycocotools

init_as_pycocotools()

# All pycocotools imports now use hotcoco
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
```
See Migrating from pycocotools for the full guide.
## Rust API
For Rust users, the hotcoco crate is available on crates.io. Full API documentation is on docs.rs.
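To pull the crate into a Rust project, declare it under `[dependencies]` in `Cargo.toml`. The version below is a placeholder, not a real release number; check crates.io for the current one:

```toml
[dependencies]
# Placeholder version; use the latest release from crates.io
hotcoco = "0.1"
```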
## License
MIT