arXiv Open Access 2025

EssayBench: Evaluating Large Language Models in Multi-Genre Chinese Essay Writing

Fan Gao, Dongyuan Li, Ding Xia, Fei Mi, Yasheng Wang +2 others

Abstract

Chinese essay writing and its evaluation are critical in educational contexts, yet the capabilities of Large Language Models (LLMs) in this domain remain largely underexplored. Existing benchmarks often rely on coarse-grained text quality metrics, largely overlooking the structural and rhetorical complexities of Chinese essays, particularly across diverse genres. To address this gap, we propose EssayBench, a multi-genre benchmark specifically designed for Chinese essay writing across four major genres: Argumentative, Narrative, Descriptive, and Expository. We curate and refine a total of 728 real-world prompts to ensure authenticity and meticulously categorize them into the Open-Ended and Constrained sets to capture diverse writing scenarios. To reliably evaluate generated essays, we develop a fine-grained, genre-specific scoring framework that hierarchically aggregates scores. We further validate our evaluation protocol through a comprehensive human agreement study. Finally, we benchmark 15 large-sized LLMs, analyzing their strengths and limitations across genres and instruction types. With EssayBench, we aim to advance LLM-based Chinese essay evaluation and inspire future research on improving essay generation in educational settings.


Authors (7)

Fan Gao, Dongyuan Li, Ding Xia, Fei Mi, Yasheng Wang, Lifeng Shang, Baojun Wang

Citation Format

Gao, F., Li, D., Xia, D., Mi, F., Wang, Y., Shang, L., & Wang, B. (2025). EssayBench: Evaluating Large Language Models in Multi-Genre Chinese Essay Writing. arXiv preprint. https://arxiv.org/abs/2506.02596

Journal Information

Publication Year: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓