Semantic Scholar · Open Access · 2023 · 503 citations

Comparing Vision Transformers and Convolutional Neural Networks for Image Classification: A Literature Review

J. Maurício, Inês Domingues, Jorge Bernardino

Abstract

Transformers are models that implement a self-attention mechanism, individually weighting the importance of each part of the input data. Their use in image classification tasks has so far been limited, since researchers have largely chosen Convolutional Neural Networks (CNNs) for image classification, while transformers have mainly been targeted at Natural Language Processing (NLP) tasks. Therefore, this paper presents a literature review of the differences between Vision Transformers (ViT) and Convolutional Neural Networks. We review the state of the art that uses the two architectures for image classification and attempt to understand which factors may influence their performance, based on the datasets used, image size, number of target classes, hardware, evaluated architectures, and top results. The objective of this work is to identify which architecture is best for image classification and under what conditions. This paper also describes the importance of the Multi-Head Attention mechanism in improving the performance of ViT in image classification.
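The self-attention and Multi-Head Attention mechanism described in the abstract can be sketched as follows. This is an illustrative NumPy implementation under common assumptions (scaled dot-product attention, heads formed by splitting the embedding dimension); the function and weight names are arbitrary and this is not code from the reviewed paper.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """Scaled dot-product self-attention split across n_heads heads."""
    n, d = X.shape
    d_head = d // n_heads

    # Project tokens to queries/keys/values, then split into heads:
    # (n, d) -> (n_heads, n, d_head)
    def project(W):
        return (X @ W).reshape(n, n_heads, d_head).transpose(1, 0, 2)

    Q, K, V = project(Wq), project(Wk), project(Wv)
    # Per-head attention weights: each row sums to 1 and weights
    # how much every token attends to every other token.
    weights = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d_head))
    # Weighted sum of values, then concatenate heads back to (n, d).
    heads = (weights @ V).transpose(1, 0, 2).reshape(n, d)
    return heads @ Wo

# Toy run: 4 "patch" tokens, embedding dim 8, 2 heads.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv, Wo = (rng.normal(size=(8, 8)) for _ in range(4))
out = multi_head_self_attention(X, Wq, Wk, Wv, Wo, n_heads=2)
print(out.shape)  # (4, 8)
```

In a ViT, the input rows would be patch embeddings of the image rather than word embeddings, which is what lets the same mechanism transfer from NLP to image classification.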

Authors (3)

J. Maurício

Inês Domingues

Jorge Bernardino

Citation Format

Maurício, J., Domingues, I., & Bernardino, J. (2023). Comparing Vision Transformers and Convolutional Neural Networks for Image Classification: A Literature Review. Applied Sciences, 13(9), 5521. https://doi.org/10.3390/app13095521

Quick Access

View at Source: doi.org/10.3390/app13095521

Journal Information
Publication Year: 2023
Language: en
Total Citations: 503×
Database Source: Semantic Scholar
DOI: 10.3390/app13095521
Access: Open Access ✓