Installation

Python

pip install hotcoco

Verify the installation:

from hotcoco import COCO
print("hotcoco installed successfully")

numpy

hotcoco requires numpy, which is installed automatically. If you need a specific numpy version, install it first.
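If you are unsure what is already in the environment, the installed numpy version can be read from package metadata without importing numpy itself (stdlib only, so this also works when numpy is absent):

```python
from importlib.metadata import PackageNotFoundError, version

# Reads the version from installed package metadata; no numpy import needed.
try:
    print("numpy", version("numpy"))
except PackageNotFoundError:
    print("numpy not installed")
```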

IDE support

hotcoco ships with type stubs (.pyi) and a py.typed marker. Autocomplete, hover docs, and type checking work out of the box in VS Code, PyCharm, and other editors.

Build from source

Install prerequisites if you don't have them:

curl -LsSf https://astral.sh/uv/install.sh | sh   # uv
cargo install just                                  # just (requires Rust)

Then clone and build:

git clone https://github.com/derekallman/hotcoco.git
cd hotcoco
uv sync --all-extras
just build

This builds the hotcoco Python module and installs it into the repo's .venv.
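To confirm the module landed in the active environment, run a quick stdlib check with the .venv's interpreter; find_spec returns None rather than raising, so this degrades gracefully if the build hasn't run:

```python
from importlib.util import find_spec

# find_spec returns None instead of raising ImportError when a module is absent.
spec = find_spec("hotcoco")
print("hotcoco importable" if spec else "hotcoco not found: run just build first")
```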

CLI

cargo install hotcoco-cli

This installs the coco-eval binary.

Build from source

git clone https://github.com/derekallman/hotcoco.git
cd hotcoco
cargo build --release
# Binary is at target/release/coco-eval

Rust library

cargo add hotcoco

Or add it manually to your Cargo.toml:

[dependencies]
hotcoco = "0.3"

Full API documentation is on docs.rs.

Benchmark data

hotcoco's parity checks and benchmarks run against COCO val2017. A single command downloads the annotations and generates synthetic detection files:

just download-coco   # ~240 MB — val2017 annotations + parity result files

After that, just parity and just bench work out of the box. For Objects365-scale benchmarks:

# Requires polars: uv pip install polars
just download-o365   # ~220 MB — Objects365 validation annotations from HuggingFace

Expected layout after just download-coco:

data/
├── annotations/
│   ├── instances_val2017.json
│   └── person_keypoints_val2017.json
├── bbox_val2017_results.json
├── segm_val2017_results.json
└── kpt_val2017_results.json
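The layout can also be checked programmatically. A minimal stdlib sketch, assuming the default data/ root and the file names shown in the tree above (the missing_files helper is illustrative, not part of hotcoco):

```python
from pathlib import Path

# Files that just download-coco is expected to produce (from the tree above).
EXPECTED = [
    "annotations/instances_val2017.json",
    "annotations/person_keypoints_val2017.json",
    "bbox_val2017_results.json",
    "segm_val2017_results.json",
    "kpt_val2017_results.json",
]

def missing_files(root="data"):
    """Return the expected files that are not present under root."""
    base = Path(root)
    return [rel for rel in EXPECTED if not (base / rel).is_file()]

if __name__ == "__main__":
    gone = missing_files()
    print("layout OK" if not gone else f"missing: {gone}")
```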

Quick sanity check:

from hotcoco import COCO, COCOeval

coco_gt = COCO("data/annotations/instances_val2017.json")
coco_dt = coco_gt.load_res("data/bbox_val2017_results.json")

ev = COCOeval(coco_gt, coco_dt, "bbox")
ev.run()

Tip

Images are never needed for evaluation — only the JSON annotation and result files.
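Because evaluation reads only JSON, a malformed results file is the most likely failure mode. A small stdlib validator, assuming the standard COCO detection results format of a flat list of dicts (the check_results helper is illustrative, not part of hotcoco):

```python
import json

# Keys required for each entry in a COCO-format detection results file.
REQUIRED_KEYS = {"image_id", "category_id", "bbox", "score"}

def check_results(path):
    """Raise ValueError on the first malformed entry; return the entry count."""
    with open(path) as f:
        detections = json.load(f)
    for i, det in enumerate(detections):
        missing = REQUIRED_KEYS - det.keys()
        if missing:
            raise ValueError(f"entry {i} is missing keys: {sorted(missing)}")
    return len(detections)
```

Running check_results("data/bbox_val2017_results.json") before COCOeval surfaces key errors with the offending entry's index instead of a failure mid-evaluation.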