arXiv Open Access 2024

Large Language Model Benchmarks in Medical Tasks

Lawrence K. Q. Yan, Qian Niu, Ming Li, Yichao Zhang, Caitlyn Heqi Yin, +14 others

Abstract

With the increasing application of large language models (LLMs) in the medical domain, evaluating these models' performance using benchmark datasets has become crucial. This paper presents a comprehensive survey of various benchmark datasets employed in medical LLM tasks. These datasets span multiple modalities including text, image, and multimodal benchmarks, focusing on different aspects of medical knowledge such as electronic health records (EHRs), doctor-patient dialogues, medical question-answering, and medical image captioning. The survey categorizes the datasets by modality, discussing their significance, data structure, and impact on the development of LLMs for clinical tasks such as diagnosis, report generation, and predictive decision support. Key benchmarks include MIMIC-III, MIMIC-IV, BioASQ, PubMedQA, and CheXpert, which have facilitated advancements in tasks like medical report generation, clinical summarization, and synthetic data generation. The paper summarizes the challenges and opportunities in leveraging these benchmarks for advancing multimodal medical intelligence, emphasizing the need for datasets with a greater degree of language diversity, structured omics data, and innovative approaches to synthesis. This work also provides a foundation for future research in the application of LLMs in medicine, contributing to the evolving field of medical artificial intelligence.

Authors (19)

Lawrence K. Q. Yan, Qian Niu, Ming Li, Yichao Zhang, Caitlyn Heqi Yin, Cheng Fei, Benji Peng, Ziqian Bi, Pohsun Feng, Keyu Chen, Tianyang Wang, Yunze Wang, Silin Chen, Ming Liu, Junyu Liu, Xinyuan Song, Riyang Bao, Zekun Jiang, Ziyuan Qin

Citation Format

Yan, L.K.Q., Niu, Q., Li, M., Zhang, Y., Yin, C.H., Fei, C. et al. (2024). Large Language Model Benchmarks in Medical Tasks. https://arxiv.org/abs/2410.21348

Journal Information
Publication Year: 2024
Language: en
Source Database: arXiv
Access: Open Access ✓