arXiv Open Access 2026

annbatch unlocks terabyte-scale training of biological data in anndata

Ilan Gold, Felix Fischer, Lucas Arnoldt, F. Alexander Wolf, Fabian J. Theis

Abstract

The scale of biological datasets now routinely exceeds system memory, making data access, rather than model computation, the primary bottleneck in training machine-learning models. This bottleneck is particularly acute in biology, where widely used community data formats must support heterogeneous metadata, sparse and dense assays, and downstream analysis within established computational ecosystems. Here we present annbatch, a mini-batch loader native to anndata that enables out-of-core training directly on disk-backed datasets. Across single-cell transcriptomics, microscopy, and whole-genome sequencing benchmarks, annbatch increases loading throughput by up to an order of magnitude and shortens training from days to hours, while remaining fully compatible with the scverse ecosystem. annbatch establishes practical data-loading infrastructure for scalable biological AI, allowing increasingly large and diverse datasets to be used without abandoning standard biological data formats. GitHub: https://github.com/scverse/annbatch


Citation

Gold, I., Fischer, F., Arnoldt, L., Wolf, F.A., Theis, F.J. (2026). annbatch unlocks terabyte-scale training of biological data in anndata. https://arxiv.org/abs/2604.01949

Journal Information
Year Published: 2026
Language: en
Source Database: arXiv
Access: Open Access ✓