arXiv Open Access 2024

ElectroVizQA: How well do Multi-modal LLMs perform in Electronics Visual Question Answering?

Pragati Shuddhodhan Meshram, Swetha Karthikeyan, Bhavya Bhavya, Suma Bhat

Abstract

Multi-modal Large Language Models (MLLMs) are gaining significant attention for their ability to process multi-modal data, providing enhanced contextual understanding of complex problems. MLLMs have demonstrated exceptional capabilities in tasks such as Visual Question Answering (VQA); however, they often struggle with fundamental engineering problems, and there is a scarcity of specialized datasets for training on topics like digital electronics. To address this gap, we propose a benchmark dataset called ElectroVizQA specifically designed to evaluate MLLMs' performance on digital electronic circuit problems commonly found in undergraduate curricula. This dataset, the first of its kind tailored for the VQA task in digital electronics, comprises approximately 626 visual questions, offering a comprehensive overview of digital electronics topics. This paper rigorously assesses the extent to which MLLMs can understand and solve digital electronic circuit questions, providing insights into their capabilities and limitations within this specialized domain. By introducing this benchmark dataset, we aim to motivate further research and development in the application of MLLMs to engineering education, ultimately bridging the performance gap and enhancing the efficacy of these models in technical fields.

Topics & Keywords

Authors (4)

Pragati Shuddhodhan Meshram

Swetha Karthikeyan

Bhavya Bhavya

Suma Bhat

Citation Format

Meshram, P. S., Karthikeyan, S., Bhavya, B., & Bhat, S. (2024). ElectroVizQA: How well do Multi-modal LLMs perform in Electronics Visual Question Answering? https://arxiv.org/abs/2412.00102

Journal Information
Publication Year
2024
Language
en
Source Database
arXiv
Access
Open Access ✓