arXiv Open Access 2024

Targeted Visual Prompting for Medical Visual Question Answering

Sergio Tascon-Morales Pablo Márquez-Neila Raphael Sznitman

Abstract

With growing interest in recent years, medical visual question answering (Med-VQA) has rapidly evolved, with multimodal large language models (MLLMs) emerging as an alternative to classical model architectures. Specifically, their ability to add visual information to the input of pre-trained LLMs brings new capabilities for image interpretation. However, simple visual errors cast doubt on the actual visual understanding abilities of these models. To address this, region-based questions have been proposed as a means to assess and enhance actual visual understanding through compositional evaluation. To combine these two perspectives, this paper introduces targeted visual prompting to equip MLLMs with region-based questioning capabilities. By presenting the model with both the isolated region and the region in its context in a customized visual prompt, we show the effectiveness of our method across multiple datasets while comparing it to several baseline models. Our code and data are available at https://github.com/sergiotasconmorales/locvqallm.


Authors (3)

Sergio Tascon-Morales

Pablo Márquez-Neila

Raphael Sznitman

Citation Format

Tascon-Morales, S., Márquez-Neila, P., Sznitman, R. (2024). Targeted Visual Prompting for Medical Visual Question Answering. https://arxiv.org/abs/2408.03043

Journal Information

Publication Year: 2024
Language: en
Source Database: arXiv
Access: Open Access ✓