Asking the Right Questions: Benchmarking Large Language Models in the Development of Clinical Consultation Templates
Abstract
This study evaluates the capacity of large language models (LLMs) to generate structured clinical consultation templates for electronic consultations. Using 145 expert-crafted templates developed and routinely used by Stanford's eConsult team, we assess frontier models -- including o3, GPT-4o, Kimi K2, Claude 4 Sonnet, Llama 3 70B, and Gemini 2.5 Pro -- on their ability to produce coherent, concise, and well-prioritized clinical question schemas. Through a multi-agent pipeline combining prompt optimization, semantic autograding, and prioritization analysis, we show that while models like o3 achieve high comprehensiveness (up to 92.2%), they consistently generate excessively long templates and fail to correctly prioritize the most clinically important questions under length constraints. Performance varies across specialties, with significant degradation in narrative-driven fields such as psychiatry and pain medicine. Our findings demonstrate that LLMs can enhance structured clinical information exchange between physicians, while highlighting the need for more robust evaluation methods that capture a model's ability to prioritize clinically salient information within the time constraints of real-world physician communication.
Authors (18)
Liam G. McCoy
Fateme Nateghi Haredasht
Kanav Chopra
David Wu
David JH Wu
Abass Conteh
Sarita Khemani
Saloni Kumar Maharaj
Vishnu Ravi
Arth Pahwa
Yingjie Weng
Leah Rosengaus
Lena Giang
Kelvin Zhenghao Li
Olivia Jee
Daniel Shirvani
Ethan Goh
Jonathan H. Chen
Quick Access
- Year Published: 2025
- Language: en
- Source Database: arXiv
- Access: Open Access ✓