DOAJ Open Access 2025

Automatic generation of ESL learning materials based on CEFR levels using reinforcement-tuned LLMs

Yi Zuo

Abstract

The automatic generation of CEFR-aligned learning materials remains a challenging task due to the difficulty of balancing linguistic accuracy, scalability, and adaptability. Existing approaches, from rule-based templates to large language models, often fail to achieve strict CEFR alignment while maintaining readability across multi-paragraph content. To address this gap, we propose a reinforcement-learning-tuned LLM framework that integrates CEFR feature extraction, multi-objective reward shaping, and constrained decoding into a unified architecture. The framework enables dynamic adjustment of text complexity while ensuring level consistency. Experimental results show that our method improves CEFR-level classification accuracy by up to 12.3% at B2-C1 levels compared with state-of-the-art baselines, and reduces misalignment errors by 15.6%. Furthermore, attention visualization confirms that the policy network effectively focuses on complex syntactic structures during intermediate-level generation. These findings highlight not only the effectiveness of reinforcement learning in structured text generation but also the potential of constrained optimization as a scalable methodology for fine-grained linguistic control.
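The multi-objective reward shaping described in the abstract could, for illustration, be a weighted combination of a CEFR-level classifier signal, a readability signal, and a misalignment penalty. The following is a minimal sketch only: the weights, signal names, and scalarization are assumptions for illustration, not the paper's actual formulation.

```python
# Illustrative sketch of a multi-objective reward for RL fine-tuning.
# All component names and weights are hypothetical, not from the paper.

def cefr_reward(level_prob: float, readability: float, misalignment: float,
                w_level: float = 0.6, w_read: float = 0.3,
                w_mis: float = 0.1) -> float:
    """Scalarize three objectives into one reward for the policy update.

    level_prob   -- classifier probability that the generated text matches
                    the target CEFR level (e.g., B2), in [0, 1]
    readability  -- normalized readability score of the text, in [0, 1]
    misalignment -- fraction of sentences judged off-level, penalized
    """
    return w_level * level_prob + w_read * readability - w_mis * misalignment


# A text that matches the target level perfectly with no off-level
# sentences scores higher than one with frequent misalignment.
good = cefr_reward(level_prob=1.0, readability=1.0, misalignment=0.0)
poor = cefr_reward(level_prob=0.5, readability=0.8, misalignment=0.6)
```

In practice such a scalar reward would be fed to a policy-gradient method (e.g., PPO), with the constrained-decoding step enforcing hard level constraints that the reward alone cannot guarantee.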

Author (1)


Yi Zuo

Citation Format

Zuo, Y. (2025). Automatic generation of ESL learning materials based on CEFR levels using reinforcement-tuned LLMs. https://doi.org/10.1007/s44163-025-00762-3

Quick Access

PDF not directly available

Check the original source →
View at Source doi.org/10.1007/s44163-025-00762-3
Journal Information
Publication Year
2025
Source Database
DOAJ
DOI
10.1007/s44163-025-00762-3
Access
Open Access ✓