arXiv Open Access 2025

Evaluating LLMs in Medicine: A Call for Rigor, Transparency

Mahmoud Alwakeel, Aditya Nagori, Vijay Krishnamoorthy, Rishikesan Kamaleswaran

Abstract

Objectives: To evaluate the current limitations of large language models (LLMs) in medical question answering, focusing on the quality of datasets used for their evaluation.

Materials and Methods: Widely used benchmark datasets, including MedQA, MedMCQA, PubMedQA, and MMLU, were reviewed for their rigor, transparency, and relevance to clinical scenarios. Alternatives, such as challenge questions in medical journals, were also analyzed to identify their potential as unbiased evaluation tools.

Results: Most existing datasets lack clinical realism, transparency, and robust validation processes. Publicly available challenge questions offer some benefits but are limited by their small size, narrow scope, and exposure to LLM training. These gaps highlight the need for secure, comprehensive, and representative datasets.

Conclusion: A standardized framework is critical for evaluating LLMs in medicine. Collaborative efforts among institutions and policymakers are needed to ensure datasets and methodologies are rigorous, unbiased, and reflective of clinical complexities.


Authors (4)

Mahmoud Alwakeel
Aditya Nagori
Vijay Krishnamoorthy
Rishikesan Kamaleswaran

Citation Format

Alwakeel, M., Nagori, A., Krishnamoorthy, V., & Kamaleswaran, R. (2025). Evaluating LLMs in Medicine: A Call for Rigor, Transparency. https://arxiv.org/abs/2507.08916

Journal Information

Year Published: 2025
Language: English
Source Database: arXiv
Access: Open Access ✓