Semantic Scholar · Open Access · 2021 · 227 citations

Efficient Self-supervised Vision Transformers for Representation Learning

Chunyuan Li, Jianwei Yang, Pengchuan Zhang, Mei Gao, Bin Xiao, +3 others

Abstract

This paper investigates two techniques for developing efficient self-supervised vision transformers (EsViT) for visual representation learning. First, we show through a comprehensive empirical study that multi-stage architectures with sparse self-attention can significantly reduce modeling complexity, but at the cost of losing the ability to capture fine-grained correspondences between image regions. Second, we propose a new pre-training task of region matching, which allows the model to capture fine-grained region dependencies and as a result significantly improves the quality of the learned vision representations. Our results show that, combining the two techniques, EsViT achieves 81.3% top-1 accuracy on the ImageNet linear probe evaluation, outperforming prior arts with around an order of magnitude higher throughput. When transferring to downstream linear classification tasks, EsViT outperforms its supervised counterpart on 17 out of 18 datasets. The code and models are publicly available: https://github.com/microsoft/esvit
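The region-matching task pairs each region-level (patch) embedding from one augmented view with its most similar region in the other view and aligns their probability distributions via student-teacher self-distillation, in the style of DINO. Below is a minimal sketch of such a loss, assuming the region embeddings are already prototype logits from student/teacher projection heads and using a simple cosine-similarity matching rule; the function name, temperatures, and matching rule are illustrative simplifications, not the authors' exact implementation (see the repository above for that).

```python
# Minimal sketch of a region-matching self-distillation loss in the spirit
# of EsViT's region-level pre-training task. Assumptions (hypothetical, for
# illustration): inputs are per-region prototype logits from DINO-style
# student/teacher heads, and regions are matched by cosine similarity.
import torch
import torch.nn.functional as F

def region_matching_loss(student_regions, teacher_regions,
                         student_temp=0.1, teacher_temp=0.04):
    """student_regions: (B, N, K) logits from the student view.
    teacher_regions: (B, M, K) logits from the teacher view."""
    # Match each student region to its most similar teacher region
    # by cosine similarity of the region embeddings.
    s_norm = F.normalize(student_regions, dim=-1)
    t_norm = F.normalize(teacher_regions, dim=-1)
    sim = torch.einsum('bnk,bmk->bnm', s_norm, t_norm)    # (B, N, M)
    match = sim.argmax(dim=-1)                            # (B, N)

    # Gather the matched teacher regions; teacher targets are sharpened
    # (lower temperature) and receive no gradient.
    idx = match.unsqueeze(-1).expand(-1, -1, teacher_regions.size(-1))
    matched_t = torch.gather(teacher_regions, 1, idx)     # (B, N, K)

    s_logp = F.log_softmax(student_regions / student_temp, dim=-1)
    t_prob = F.softmax(matched_t.detach() / teacher_temp, dim=-1)
    # Cross-entropy between matched teacher and student distributions.
    return -(t_prob * s_logp).sum(dim=-1).mean()
```

In the paper, this region-level term is used alongside the standard view-level self-distillation objective; the sketch shows only the region term.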

Authors (8)

Chunyuan Li
Jianwei Yang
Pengchuan Zhang
Mei Gao
Bin Xiao
Xiyang Dai
Lu Yuan
Jianfeng Gao

Citation Format

Li, C., Yang, J., Zhang, P., Gao, M., Xiao, B., Dai, X., Yuan, L., & Gao, J. (2021). Efficient Self-supervised Vision Transformers for Representation Learning. https://www.semanticscholar.org/paper/b70bb1855e217edffb5dfa0632e8216860821870

Quick Access

PDF not directly available; check the original source.
Journal Information

Year Published: 2021
Language: en
Total Citations: 227
Database Source: Semantic Scholar
Access: Open Access ✓