arXiv Open Access 2022

LViT: Language meets Vision Transformer in Medical Image Segmentation

Zihan Li, Yunxiang Li, Qingde Li, Puyang Wang, Dazhou Guo, +4 more

Abstract

Deep learning has been widely used in medical image segmentation and many other aspects. However, the performance of existing medical image segmentation models has been limited by the challenge of obtaining sufficient high-quality labeled data due to the prohibitive cost of data annotation. To alleviate this limitation, we propose LViT (Language meets Vision Transformer), a new text-augmented medical image segmentation model. In LViT, medical text annotations are incorporated to compensate for the quality deficiency in image data. In addition, the text information can guide the generation of pseudo labels of improved quality in semi-supervised learning. We also propose an Exponential Pseudo-label Iteration mechanism (EPI) to help the Pixel-Level Attention Module (PLAM) preserve local image features in the semi-supervised LViT setting. In our model, an LV (Language-Vision) loss is designed to supervise the training of unlabeled images using text information directly. For evaluation, we construct three multimodal medical segmentation datasets (image + text) containing X-ray and CT images. Experimental results show that the proposed LViT achieves superior segmentation performance in both fully-supervised and semi-supervised settings. The code and datasets are available at https://github.com/HUANGLIZI/LViT.
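The abstract describes EPI as an exponential iteration over pseudo labels. A minimal sketch of how such an update could look, assuming it follows an exponential-moving-average rule (the function name `epi_update`, the smoothing factor `beta`, and its value are hypothetical illustrations, not the paper's exact formulation):

```python
import numpy as np

def epi_update(prev_pseudo, new_pred, beta=0.5):
    """Exponentially smooth pseudo labels across training iterations.

    prev_pseudo: running pseudo-label probability map from earlier iterations
    new_pred:    current model prediction (probabilities in [0, 1])
    beta:        smoothing factor (hypothetical value for illustration)
    """
    return beta * prev_pseudo + (1.0 - beta) * new_pred

# Toy example: a 2x2 probability map refined over three iterations.
pseudo = np.zeros((2, 2))
for _ in range(3):
    pred = np.full((2, 2), 0.8)  # stand-in for a model's sigmoid output
    pseudo = epi_update(pseudo, pred)

print(pseudo[0, 0])  # converges toward 0.8 as iterations accumulate
```

The intuition is that averaging over iterations damps the noise of any single prediction, so the pseudo labels used for unlabeled images evolve smoothly rather than oscillating with each training step.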


Authors (9)

Zihan Li
Yunxiang Li
Qingde Li
Puyang Wang
Dazhou Guo
Le Lu
Dakai Jin
You Zhang
Qingqi Hong

Citation Format

Li, Z., Li, Y., Li, Q., Wang, P., Guo, D., Lu, L. et al. (2022). LViT: Language meets Vision Transformer in Medical Image Segmentation. https://arxiv.org/abs/2206.14718

Journal Information
Publication Year: 2022
Language: en
Source Database: arXiv
Access: Open Access ✓