arXiv Open Access 2026

Sign Language Recognition in the Age of LLMs

Vaclav Javorek Jakub Honzik Ivan Gruber Tomas Zelezny Marek Hruz

Abstract

Recent Vision Language Models (VLMs) have demonstrated strong performance across a wide range of multimodal reasoning tasks. This raises the question of whether such general-purpose models can also address specialized visual recognition problems such as isolated sign language recognition (ISLR) without task-specific training. In this work, we investigate the capability of modern VLMs to perform ISLR in a zero-shot setting. We evaluate several open-source and proprietary VLMs on the WLASL300 benchmark. Our experiments show that, under prompt-only zero-shot inference, current open-source VLMs trail classic supervised ISLR classifiers by a wide margin. However, follow-up experiments reveal that these models capture partial visual-semantic alignment between signs and text descriptions. Larger proprietary models achieve substantially higher accuracy, highlighting the importance of model scale and training data diversity. All our code is publicly available on GitHub.
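The "prompt-only zero-shot inference" setup described above can be sketched as follows. This is not the authors' code: the prompt wording, helper names, and gloss candidates are illustrative assumptions, and the model call itself is omitted. The sketch only shows the two generic pieces such an evaluation needs: building a candidate-list prompt and mapping a free-form VLM response back onto the gloss vocabulary.

```python
def build_islr_prompt(glosses):
    """Build a zero-shot classification prompt listing candidate glosses.

    The video itself would be passed to the VLM separately; only the
    text side of the prompt is sketched here.
    """
    options = "\n".join(f"{i + 1}. {g}" for i, g in enumerate(glosses))
    return (
        "You are shown a video of a person performing one isolated sign.\n"
        "Choose the single gloss that best matches the sign:\n"
        f"{options}\n"
        "Answer with the gloss word only."
    )


def parse_prediction(response, glosses):
    """Map a free-form VLM response back onto the candidate gloss set.

    Returns the first candidate gloss mentioned in the response, or
    None if the response names no candidate.
    """
    text = response.strip().lower()
    for g in glosses:
        if g.lower() in text:
            return g
    return None


# Hypothetical WLASL-style gloss candidates (illustrative only).
candidates = ["book", "drink", "computer"]
prompt = build_islr_prompt(candidates)
prediction = parse_prediction("The sign appears to be 'drink'.", candidates)
```

In a real evaluation the candidate list would cover all 300 WLASL300 glosses, and accuracy would be computed over the parsed predictions.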


Citation Format

Javorek, V., Honzik, J., Gruber, I., Zelezny, T., Hruz, M. (2026). Sign Language Recognition in the Age of LLMs. https://arxiv.org/abs/2604.11225

Journal Information
Publication Year: 2026
Language: en
Database Source: arXiv
Access: Open Access ✓