Semantic Scholar · Open Access · 2021 · 1552 citations

CoAtNet: Marrying Convolution and Attention for All Data Sizes

Zihang Dai Hanxiao Liu Quoc V. Le Mingxing Tan

Abstract

Transformers have attracted increasing interest in computer vision, but they still fall behind state-of-the-art convolutional networks. In this work, we show that while Transformers tend to have larger model capacity, their generalization can be worse than that of convolutional networks due to the lack of the right inductive bias. To effectively combine the strengths of both architectures, we present CoAtNets (pronounced "coat" nets), a family of hybrid models built from two key insights: (1) depthwise Convolution and self-Attention can be naturally unified via simple relative attention; (2) vertically stacking convolution layers and attention layers in a principled way is surprisingly effective in improving generalization, capacity and efficiency. Experiments show that our CoAtNets achieve state-of-the-art performance under different resource constraints across various datasets: without extra data, CoAtNet achieves 86.0% ImageNet top-1 accuracy; when pre-trained with 13M images from ImageNet-21K, our CoAtNet achieves 88.56% top-1 accuracy, matching ViT-huge pre-trained with 300M images from JFT-300M while using 23x less data; notably, when we further scale up CoAtNet with JFT-3B, it achieves 90.88% top-1 accuracy on ImageNet, establishing a new state-of-the-art result.
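The abstract's first insight — that depthwise convolution and self-attention can be unified via simple relative attention — can be sketched in a few lines: the attention logit between positions i and j becomes the sum of an input-adaptive term x_i·x_j and a static, translation-equivariant bias indexed by the offset i−j, which plays the role of a depthwise convolution kernel. The following 1D NumPy sketch is illustrative only; the function name, shapes, and scaling are assumptions, not the paper's implementation.

```python
import numpy as np

def relative_attention_1d(x, w):
    """Illustrative relative attention (not the paper's code).

    x: (L, D) array of token features.
    w: (2L-1,) array of static relative-position biases, indexed by i - j;
       this input-independent term mimics a depthwise convolution kernel.
    """
    L, D = x.shape
    logits = x @ x.T / np.sqrt(D)            # input-adaptive attention term
    idx = np.arange(L)
    rel = idx[:, None] - idx[None, :]        # offsets i - j in [-(L-1), L-1]
    logits = logits + w[rel + L - 1]         # add convolution-like static bias
    logits -= logits.max(axis=-1, keepdims=True)
    attn = np.exp(logits)
    attn /= attn.sum(axis=-1, keepdims=True)  # softmax over j
    return attn @ x                           # (L, D) output
```

With w set to all zeros this reduces to plain (scaled dot-product) self-attention; with a strongly peaked w it behaves like a fixed local kernel, which is the sense in which the two operations share one formula.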


Authors (4)

Zihang Dai
Hanxiao Liu
Quoc V. Le
Mingxing Tan

Citation Format

Dai, Z., Liu, H., Le, Q. V., & Tan, M. (2021). CoAtNet: Marrying Convolution and Attention for All Data Sizes. https://www.semanticscholar.org/paper/9f4b69762ffb1ba42b573fd4ced996f3153e21c0

Quick Access

PDF not directly available (check the original source)
Publication Information
Year Published
2021
Language
en
Total Citations
1552×
Source Database
Semantic Scholar
Access
Open Access ✓