arXiv Open Access 2025

Decomposing Complex Visual Comprehension into Atomic Visual Skills for Vision Language Models

Hyunsik Chae, Seungwoo Yoon, Jaden Park, Chloe Yewon Chun, Yongin Cho, +3 others

Abstract

Recent Vision-Language Models (VLMs) have demonstrated impressive multimodal comprehension and reasoning capabilities, yet they often struggle with trivially simple visual tasks. In this work, we focus on the domain of basic 2D Euclidean geometry and systematically categorize the fundamental, indivisible visual perception skills, which we refer to as atomic visual skills. We then introduce the Atomic Visual Skills Dataset (AVSD) for evaluating VLMs on the atomic visual skills. Using AVSD, we benchmark state-of-the-art VLMs and find that they struggle with these tasks, despite being trivial for adult humans. Our findings highlight the need for purpose-built datasets to train and evaluate VLMs on atomic, rather than composite, visual perception tasks.

Topics & Keywords

Authors (8)

Hyunsik Chae
Seungwoo Yoon
Jaden Park
Chloe Yewon Chun
Yongin Cho
Mu Cai
Yong Jae Lee
Ernest K. Ryu

Citation Format

Chae, H., Yoon, S., Park, J., Chun, C. Y., Cho, Y., Cai, M., et al. (2025). Decomposing Complex Visual Comprehension into Atomic Visual Skills for Vision Language Models. https://arxiv.org/abs/2505.20021

Journal Information
Publication Year
2025
Language
en
Source Database
arXiv
Access
Open Access ✓