arXiv Open Access 2025

From Reasoning to Code: GRPO Optimization for Underrepresented Languages

Federico Pennino Bianca Raimondi Massimo Rondelli Andrea Gurioli Maurizio Gabbrielli

Abstract

Generating accurate and executable code using large language models (LLMs) is challenging for languages with limited public training data compared to popular languages such as Python. This paper introduces a generalizable approach that uses small-scale code versions of the Qwen 2.5 model combined with Group Relative Policy Optimization (GRPO) to enable effective code generation through explicit reasoning steps, which is particularly beneficial for languages with smaller source code databases. Using Prolog as a representative use case -- given its limited online presence -- the initial model faced challenges in generating executable code. After some training steps, the model successfully produces logically consistent and syntactically accurate code by directly integrating reasoning-driven feedback into the reinforcement learning loop. Experimental evaluations using mathematical logic problem benchmarks illustrate significant improvements in reasoning quality, code accuracy, and logical correctness, underscoring the potential of this approach to benefit a wide range of programming languages lacking extensive training resources.
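The core of the GRPO update described above is a group-relative advantage: several completions are sampled per prompt, each is scored by a reward (here, feedback on whether the generated code is executable and correct), and each completion's reward is normalized against its own group's statistics. The following is a minimal sketch of that computation; the reward values, group size, and the idea of combining a compile check with test results are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of GRPO's group-relative advantage computation.
# Assumption (not from the paper): reward = 1.0 if the generated Prolog
# code compiles and passes checks, with partial credit otherwise.

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize each completion's reward against its group mean and std."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Example: rewards for 4 sampled completions of one prompt (illustrative).
rewards = [1.0, 0.0, 0.5, 0.0]
advantages = group_relative_advantages(rewards)
# Completions scoring above the group mean receive positive advantages
# (reinforced); below-mean completions receive negative ones (penalized).
```

Because advantages are computed relative to the group rather than a learned value baseline, the method needs no separate critic model, which is part of what makes GRPO attractive for fine-tuning small code models.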



Citation Format

Pennino, F., Raimondi, B., Rondelli, M., Gurioli, A., & Gabbrielli, M. (2025). From Reasoning to Code: GRPO Optimization for Underrepresented Languages. https://arxiv.org/abs/2506.11027

Journal Information
Year Published
2025
Language
en
Source Database
arXiv
Access
Open Access ✓