arXiv Open Access 2023

PromptCBLUE: A Chinese Prompt Tuning Benchmark for the Medical Domain

Wei Zhu Xiaoling Wang Huanran Zheng Mosha Chen Buzhou Tang

Abstract

Biomedical language understanding benchmarks are the driving forces for artificial intelligence applications with large language model (LLM) back-ends. However, most current benchmarks: (a) are limited to English, which makes it challenging to replicate many of the English-language successes in other languages; (b) focus on knowledge probing of LLMs and neglect to evaluate how LLMs apply this knowledge to perform a wide range of biomedical tasks; or (c) have become publicly available corpora and are leaked to LLMs during pre-training. To facilitate research on medical LLMs, we re-build the Chinese Biomedical Language Understanding Evaluation (CBLUE) benchmark into a large-scale prompt-tuning benchmark, PromptCBLUE. Our benchmark is a suitable test-bed and an online platform for evaluating Chinese LLMs' multi-task capabilities on a wide range of biomedical tasks, including medical entity recognition, medical text classification, medical natural language inference, medical dialogue understanding, and medical content/dialogue generation. To establish evaluation on these tasks, we have experimented with 9 current Chinese LLMs fine-tuned with different fine-tuning techniques, and we report the results.


Citation Format

Zhu, W., Wang, X., Zheng, H., Chen, M., Tang, B. (2023). PromptCBLUE: A Chinese Prompt Tuning Benchmark for the Medical Domain. https://arxiv.org/abs/2310.14151

Journal Information
Year of Publication
2023
Language
en
Source Database
arXiv
Access
Open Access ✓