arXiv Open Access 2025

MedicoSAM: Robust Improvement of SAM for Medical Imaging

Anwai Archit, Luca Freckmann, Constantin Pape

Abstract

Medical image segmentation is an important analysis task in clinical practice and research. Deep learning has massively advanced the field, but current approaches are mostly based on models trained for a specific task. Training such models or adapting them to a new condition is costly due to the need for (manually) labeled data. The emergence of vision foundation models, especially Segment Anything, offers a path to universal segmentation for medical images, overcoming these issues. Here, we study how to improve Segment Anything for medical images by comparing different finetuning strategies on a large and diverse dataset. We evaluate the finetuned models on a wide range of interactive and (automatic) semantic segmentation tasks. We find that the performance can be clearly improved for interactive segmentation. However, semantic segmentation does not benefit from pretraining on medical images. Our best model, MedicoSAM, is publicly available at https://github.com/computational-cell-analytics/medico-sam. We show that it is compatible with existing tools for data annotation and believe that it will be of great practical value.
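The interactive and semantic segmentation evaluations mentioned in the abstract ultimately reduce to comparing a predicted mask against a ground-truth mask. As a minimal, self-contained sketch (not the authors' evaluation code), a standard Dice similarity coefficient for binary masks can be computed as:

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: two partially overlapping 4x4 masks
pred = np.zeros((4, 4), dtype=bool); pred[:2, :] = True  # top two rows
gt = np.zeros((4, 4), dtype=bool); gt[1:3, :] = True     # middle two rows
print(dice_score(pred, gt))  # 0.5 (overlap of 4 pixels, 8 + 8 total)
```

The paper reports results across many datasets and prompt types; this toy metric only illustrates the kind of per-mask comparison such benchmarks aggregate.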


Authors (3)

Anwai Archit

Luca Freckmann

Constantin Pape

Citation

Archit, A., Freckmann, L., & Pape, C. (2025). MedicoSAM: Robust Improvement of SAM for Medical Imaging. arXiv. https://arxiv.org/abs/2501.11734
