Large language models provide unsafe answers to patient-posed medical questions
Abstract
Millions of patients already use large language model (LLM) chatbots for medical advice on a regular basis, raising patient safety concerns. This physician-led red-teaming study compares the safety of four publicly available chatbots (Claude by Anthropic, Gemini by Google, GPT-4o by OpenAI, and Llama3-70B by Meta) on a new dataset, HealthAdvice, using an evaluation framework that enables both quantitative and qualitative analysis. In total, 888 chatbot responses are evaluated for 222 patient-posed, advice-seeking medical questions on primary care topics spanning internal medicine, women's health, and pediatrics. We find statistically significant differences between chatbots. The rate of problematic responses varies from 21.6 percent (Claude) to 43.2 percent (Llama), with unsafe responses varying from 5 percent (Claude) to 13 percent (GPT-4o, Llama). Qualitative results reveal chatbot responses with the potential to cause serious patient harm. This study suggests that millions of patients could be receiving unsafe medical advice from publicly available chatbots, and further work is needed to improve the clinical safety of these powerful tools.
Authors (17)
Rachel L. Draelos
Samina Afreen
Barbara Blasko
Tiffany L. Brazile
Natasha Chase
Dimple Patel Desai
Jessica Evert
Heather L. Gardner
Lauren Herrmann
Aswathy Vaikom House
Stephanie Kass
Marianne Kavan
Kirshma Khemani
Amanda Koire
Lauren M. McDonald
Zahraa Rabeeah
Amy Shah
Quick Access
- Publication Year
- 2025
- Language
- en
- Source Database
- arXiv
- Access
- Open Access ✓