arXiv Open Access 2025

TCM-Eval: An Expert-Level Dynamic and Extensible Benchmark for Traditional Chinese Medicine

Zihao Cheng, Yuheng Lu, Huaiqian Ye, Zeming Liu, Minqi Wang, +8 others

Abstract

Large Language Models (LLMs) have demonstrated remarkable capabilities in modern medicine, yet their application in Traditional Chinese Medicine (TCM) remains severely limited by the absence of standardized benchmarks and the scarcity of high-quality training data. To address these challenges, we introduce TCM-Eval, the first dynamic and extensible benchmark for TCM, meticulously curated from national medical licensing examinations and validated by TCM experts. Furthermore, we construct a large-scale training corpus and propose Self-Iterative Chain-of-Thought Enhancement (SI-CoTE) to autonomously enrich question-answer pairs with validated reasoning chains through rejection sampling, establishing a virtuous cycle of data and model co-evolution. Using this enriched training data, we develop ZhiMingTang (ZMT), a state-of-the-art LLM specifically designed for TCM, which significantly exceeds the passing threshold for human practitioners. To encourage future research and development, we release a public leaderboard, fostering community engagement and continuous improvement.
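The abstract describes SI-CoTE as enriching question-answer pairs with reasoning chains via rejection sampling: candidate chains are generated, and only those whose final answer matches the gold answer are kept. A minimal sketch of that accept/reject loop is below; `generate_chain` is a hypothetical stand-in for an LLM call, and the paper's actual SI-CoTE procedure (prompting, validation criteria, iteration schedule) is not specified here.

```python
import random

def generate_chain(question, rng):
    """Hypothetical stand-in for an LLM call: returns (reasoning_chain, predicted_answer)."""
    answer = rng.choice(["A", "B", "C", "D"])
    return f"Step-by-step reasoning for: {question}", answer

def enrich_with_cot(qa_pairs, num_samples=8, seed=0):
    """Rejection sampling: keep a sampled chain only if its answer matches the gold answer."""
    rng = random.Random(seed)
    enriched = []
    for question, gold in qa_pairs:
        for _ in range(num_samples):
            chain, pred = generate_chain(question, rng)
            if pred == gold:  # accept: the chain led to the validated answer
                enriched.append({"question": question, "chain": chain, "answer": gold})
                break  # one validated chain per pair is enough in this sketch
    return enriched

pairs = [("Which herb is used in Formula X?", "B")]
data = enrich_with_cot(pairs)
```

In the co-evolution cycle the abstract sketches, the model retrained on these accepted chains would replace `generate_chain` in the next iteration, so acceptance rates rise as the model improves.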


Authors (13)

Zihao Cheng, Yuheng Lu, Huaiqian Ye, Zeming Liu, Minqi Wang, Jingjing Liu, Zihan Li, Wei Fan, Yuanfang Guo, Ruiji Fu, Shifeng She, Gang Wang, Yunhong Wang

Citation

Cheng, Z., Lu, Y., Ye, H., Liu, Z., Wang, M., Liu, J. et al. (2025). TCM-Eval: An Expert-Level Dynamic and Extensible Benchmark for Traditional Chinese Medicine. https://arxiv.org/abs/2511.07148

Journal Information

Publication Year: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓