arXiv Open Access 2026

ART: Action-based Reasoning Task Benchmarking for Medical AI Agents

Ananya Mantravadi, Shivali Dalmia, Abhishek Mukherji

Abstract

Reliable clinical decision support requires medical AI agents capable of safe, multi-step reasoning over structured electronic health records (EHRs). While large language models (LLMs) show promise in healthcare, existing benchmarks inadequately assess performance on action-based tasks involving threshold evaluation, temporal aggregation, and conditional logic. We introduce ART, an Action-based Reasoning clinical Task benchmark for medical AI agents, which mines real-world EHR data to create challenging tasks targeting known reasoning weaknesses. Through analysis of existing benchmarks, we identify three dominant error categories: retrieval failures, aggregation errors, and conditional logic misjudgments. Our four-stage pipeline -- scenario identification, task generation, quality audit, and evaluation -- produces diverse, clinically validated tasks grounded in real patient data. Evaluating GPT-4o-mini and Claude 3.5 Sonnet on 600 tasks shows near-perfect retrieval after prompt refinement, but substantial gaps in aggregation (28--64%) and threshold reasoning (32--38%). By exposing failure modes in action-oriented EHR reasoning, ART advances toward more reliable clinical agents, an essential step for AI systems that reduce cognitive load and administrative burden, supporting workforce capacity in high-demand care settings.
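The four-stage pipeline described in the abstract can be pictured as a simple sequence of functions. The sketch below is purely illustrative: the function names, the `Task` fields, and the toy records and agent are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of ART's four-stage pipeline:
# scenario identification -> task generation -> quality audit -> evaluation.
from dataclasses import dataclass


@dataclass
class Task:
    scenario: str   # e.g. "threshold evaluation"
    question: str
    answer: str
    audited: bool = False


def identify_scenarios(ehr_records):
    # Stage 1: mine structured EHR records for distinct reasoning scenarios.
    return sorted({r["category"] for r in ehr_records})


def generate_tasks(scenarios):
    # Stage 2: turn each scenario into a concrete question/answer task.
    return [Task(s, f"Evaluate {s} for this patient", "yes") for s in scenarios]


def quality_audit(tasks):
    # Stage 3: keep only tasks that pass a (placeholder) validity check;
    # the paper's audit is a clinical validation step.
    for t in tasks:
        t.audited = bool(t.question and t.answer)
    return [t for t in tasks if t.audited]


def evaluate(tasks, agent):
    # Stage 4: score an agent's answers against the gold labels.
    correct = sum(agent(t.question) == t.answer for t in tasks)
    return correct / len(tasks) if tasks else 0.0


# Toy run over the three error categories named in the abstract.
records = [{"category": "threshold evaluation"},
           {"category": "temporal aggregation"},
           {"category": "conditional logic"}]
tasks = quality_audit(generate_tasks(identify_scenarios(records)))
accuracy = evaluate(tasks, agent=lambda q: "yes")
```

In this toy run the always-"yes" agent scores 1.0 by construction; the benchmark's point is that real agents fall well short of this on aggregation and threshold tasks.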



Citation Format

Mantravadi, A., Dalmia, S., & Mukherji, A. (2026). ART: Action-based Reasoning Task Benchmarking for Medical AI Agents. arXiv. https://arxiv.org/abs/2601.08988

Journal Information
Year of Publication: 2026
Language: en
Source Database: arXiv
Access: Open Access ✓