arXiv Open Access 2026

Improving Variable-Length Generation in Diffusion Language Models via Length Regularization

Zicong Cheng, Ruixuan Jia, Jia Li, Guo-Wei Yang, Meng-Hao Guo, Shi-Min Hu

Abstract

Diffusion Large Language Models (DLLMs) are inherently ill-suited for variable-length generation, as their inference is defined on a fixed-length canvas and implicitly assumes a known target length. When the length is unknown, as in realistic completion and infilling, naively comparing confidence across mask lengths becomes systematically biased, leading to under-generation or redundant continuations. In this paper, we show that this failure arises from an intrinsic length-induced bias in generation confidence estimates, leaving existing DLLMs without a robust way to determine generation length and making variable-length inference unreliable. To address this issue, we propose LR-DLLM, a length-regularized inference framework for DLLMs that treats generation length as an explicit variable and achieves reliable length determination at inference time. It decouples semantic compatibility from length-induced uncertainty through an explicit length regularization that corrects biased confidence estimates. Based on this, LR-DLLM enables dynamic expansion or contraction of the generation span without modifying the underlying DLLM or its training procedure. Experiments show that LR-DLLM achieves 51.3% Pass@1 on HumanEval-Infilling under fully unknown lengths (+13.4% vs. DreamOn) and 51.5% average Pass@1 on four-language McEval (+14.3% vs. DreamOn).
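The core idea in the abstract, comparing confidence across candidate mask lengths after removing a length-dependent bias, can be illustrated with a minimal sketch. Note that the penalty form (`alpha * log(L)`), the function name, and the input structure below are assumptions for illustration only; they are not the paper's actual formulation of length regularization.

```python
import math

def length_regularized_choice(confidences_by_length, alpha=0.5):
    """Pick a generation length by comparing mean per-token confidence
    across candidate mask lengths, after subtracting a hypothetical
    length-dependent correction (alpha * log L).

    Without the correction (alpha=0), raw confidence comparisons are
    systematically biased by length; the subtracted term stands in for
    the explicit length regularization LR-DLLM applies at inference.
    """
    best_len, best_score = None, -math.inf
    for length, token_confs in confidences_by_length.items():
        mean_conf = sum(token_confs) / len(token_confs)
        # Assumed penalty form; the paper's exact regularizer may differ.
        score = mean_conf - alpha * math.log(length)
        if score > best_score:
            best_len, best_score = length, score
    return best_len

# Toy per-token confidences for two candidate mask lengths.
candidates = {4: [0.7] * 4, 8: [0.9] * 8}
print(length_regularized_choice(candidates, alpha=0.0))  # raw confidence favors 8
print(length_regularized_choice(candidates, alpha=0.5))  # regularized score favors 4
```

The toy example shows how the same candidate set can yield a different length once the length-dependent term is subtracted, which is the bias the abstract attributes to naive confidence comparison.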


Authors (6)

Zicong Cheng

Ruixuan Jia

Jia Li

Guo-Wei Yang

Meng-Hao Guo

Shi-Min Hu

Citation Format

Cheng, Z., Jia, R., Li, J., Yang, G., Guo, M., & Hu, S. (2026). Improving Variable-Length Generation in Diffusion Language Models via Length Regularization. https://arxiv.org/abs/2602.07546

Journal Information
Publication Year: 2026
Language: en
Source Database: arXiv
Access: Open Access ✓