arXiv Open Access 2024

InfMAE: A Foundation Model in the Infrared Modality

Fangcen Liu, Chenqiang Gao, Yaming Zhang, Junjie Guo, Jinhao Wang, Deyu Meng

Abstract

In recent years, foundation models have swept the computer vision field and facilitated the development of various tasks across different modalities. However, how to design an infrared foundation model remains an open question. In this paper, we propose InfMAE, a foundation model for the infrared modality. We release an infrared dataset, called Inf30, to address the lack of large-scale data for self-supervised learning in the infrared vision community. In addition, we design an information-aware masking strategy suited to infrared images. This strategy places greater emphasis on information-rich regions of infrared images during self-supervised learning, which is conducive to learning generalized representations. We also adopt a multi-scale encoder to enhance the performance of the pre-trained encoder on downstream tasks. Finally, based on the fact that infrared images contain little fine detail and texture information, we design an infrared decoder module, which further improves downstream performance. Extensive experiments show that our proposed InfMAE outperforms other supervised and self-supervised learning methods on three downstream tasks.
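The abstract does not specify how "information" is measured in the information-aware masking strategy, so the following is only a minimal illustrative sketch of the general idea: score each image patch by a proxy for information content (here, mean gradient magnitude, which is an assumption, not the paper's actual criterion), then bias the MAE-style random mask so that informative patches are more likely to remain visible. The function names and the 75% mask ratio are illustrative defaults, not taken from the paper.

```python
import numpy as np

def patch_info_scores(img, patch=16):
    """Score each non-overlapping patch of a 2-D infrared image.

    Uses mean gradient magnitude as a stand-in "information" measure
    (an assumption; InfMAE's actual criterion may differ).
    """
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.sqrt(gx ** 2 + gy ** 2)
    h, w = img.shape
    ph, pw = h // patch, w // patch
    # Average gradient magnitude within each patch -> one score per patch.
    scores = mag[:ph * patch, :pw * patch].reshape(ph, patch, pw, patch).mean(axis=(1, 3))
    return scores.ravel()

def information_aware_mask(scores, mask_ratio=0.75, rng=None):
    """Sample a boolean mask (True = masked) over patches.

    Visible patches are drawn with probability proportional to their
    information score, so richer regions tend to stay visible.
    """
    rng = np.random.default_rng(rng)
    n = scores.size
    n_mask = int(round(n * mask_ratio))
    p_keep = scores + 1e-8          # avoid zero-probability patches
    p_keep = p_keep / p_keep.sum()
    visible = rng.choice(n, size=n - n_mask, replace=False, p=p_keep)
    mask = np.ones(n, dtype=bool)
    mask[visible] = False
    return mask
```

For example, a 64×64 image with 16×16 patches yields 16 patches, of which 12 are masked at a 0.75 ratio; which 4 remain visible is biased toward high-gradient regions.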


Authors (6)

Fangcen Liu
Chenqiang Gao
Yaming Zhang
Junjie Guo
Jinhao Wang
Deyu Meng

Citation Format

Liu, F., Gao, C., Zhang, Y., Guo, J., Wang, J., & Meng, D. (2024). InfMAE: A Foundation Model in the Infrared Modality. arXiv. https://arxiv.org/abs/2402.00407

Journal Information
Publication Year
2024
Language
en
Source Database
arXiv
Access
Open Access ✓