arXiv Open Access 2025

New Evaluation Paradigm for Lexical Simplification

Jipeng Qiang, Minjiang Huang, Yi Zhu, Yunhao Yuan, Chaowei Zhang, Xiaoye Ouyang

Abstract

Lexical Simplification (LS) methods use a three-step pipeline: complex word identification, substitute generation, and substitute ranking, each with separate evaluation datasets. We found large language models (LLMs) can simplify sentences directly with a single prompt, bypassing the traditional pipeline. However, existing LS datasets are not suitable for evaluating these LLM-generated simplified sentences, as they focus on providing substitutes for single complex words without identifying all complex words in a sentence. To address this gap, we propose a new annotation method for constructing an all-in-one LS dataset through human-machine collaboration. Automated methods generate a pool of potential substitutes, which human annotators then assess, suggesting additional alternatives as needed. Additionally, we explore LLM-based methods with single prompts, in-context learning, and chain-of-thought techniques. We introduce a multi-LLMs collaboration approach to simulate each step of the LS task. Experimental results demonstrate that LS based on multi-LLMs approaches significantly outperforms existing baselines.
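The three-step pipeline described above can be sketched as a chain of role-specific model calls. This is a minimal illustrative sketch, not the paper's implementation: the `call_llm` function below is a hypothetical stub standing in for a real LLM API, and the prompts and canned answers are assumptions for demonstration only.

```python
def call_llm(role: str, prompt: str) -> str:
    """Stub standing in for a real LLM call; returns canned answers
    so the pipeline below runs deterministically."""
    canned = {
        "identifier": "perplexing",
        "generator": "confusing, puzzling, unclear",
        "ranker": "confusing",
    }
    return canned[role]


def simplify(sentence: str) -> str:
    # Step 1: complex word identification
    complex_word = call_llm(
        "identifier", f"Find the most complex word in: {sentence}")
    # Step 2: substitute generation
    candidates = [c.strip() for c in call_llm(
        "generator",
        f"Suggest simpler substitutes for '{complex_word}' in: {sentence}",
    ).split(",")]
    # Step 3: substitute ranking -- pick the best-fitting candidate
    best = call_llm(
        "ranker",
        f"Pick the best of {candidates} for '{complex_word}' in: {sentence}")
    return sentence.replace(complex_word, best)


print(simplify("The instructions were perplexing to new users."))
# prints: The instructions were confusing to new users.
```

With a single prompt, the same model would instead rewrite the whole sentence at once, which is exactly the behavior the proposed all-in-one dataset is designed to evaluate.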


Authors (6)

Jipeng Qiang
Minjiang Huang
Yi Zhu
Yunhao Yuan
Chaowei Zhang
Xiaoye Ouyang

Citation

Qiang, J., Huang, M., Zhu, Y., Yuan, Y., Zhang, C., & Ouyang, X. (2025). New Evaluation Paradigm for Lexical Simplification. https://arxiv.org/abs/2501.15268

Publication Information
Year: 2025
Language: English
Source Database: arXiv
Access: Open Access ✓