arXiv Open Access 2024

Learning to Refine with Fine-Grained Natural Language Feedback

Manya Wadhwa Xinyu Zhao Junyi Jessy Li Greg Durrett

Abstract

Recent work has explored the capability of large language models (LLMs) to identify and correct errors in LLM-generated responses. These refinement approaches frequently evaluate what sizes of models are able to do refinement for what problems, but less attention is paid to what effective feedback for refinement looks like. In this work, we propose looking at refinement with feedback as a composition of three distinct LLM competencies: (1) detection of bad generations; (2) fine-grained natural language critique generation; (3) refining with fine-grained feedback. The first step can be implemented with a high-performing discriminative model, and steps 2 and 3 can be implemented either via prompted or fine-tuned LLMs. A key property of the proposed Detect, Critique, Refine ("DCR") method is that the step 2 critique model can give fine-grained feedback about errors, made possible by offloading the discrimination to a separate model in step 1. We show that models of different capabilities benefit from refining with DCR on the task of improving factual consistency of document-grounded summaries. Overall, DCR consistently outperforms existing end-to-end refinement approaches and current trained models not fine-tuned for factuality critiquing.
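The abstract describes DCR as a composition of three steps: a discriminative detector, a critique model, and a refiner. The following is a minimal illustrative sketch of that composition, not the paper's implementation: the three stand-in functions below (`detect`, `critique`, `refine`) are hypothetical toy stubs standing in for the discriminative model and the prompted or fine-tuned LLMs.

```python
# Hypothetical sketch of the DCR (Detect, Critique, Refine) pipeline
# structure. Each step is a toy stub; in the paper, step 1 is a
# discriminative factual-consistency model and steps 2-3 are LLMs.

def detect(summary: str, document: str) -> bool:
    """Step 1: flag summaries with factual-consistency errors.
    Toy check: the summary claims a color the document never states."""
    return "blue" in summary and "blue" not in document

def critique(summary: str, document: str) -> str:
    """Step 2: produce fine-grained natural language feedback
    pinpointing the error (here, a canned critique)."""
    return ("The summary says the car is blue, but the document "
            "does not mention the car's color.")

def refine(summary: str, feedback: str) -> str:
    """Step 3: revise the summary using the feedback
    (here, a hard-coded edit matching the canned critique)."""
    return summary.replace("blue car", "car")

def dcr(summary: str, document: str) -> str:
    # Key property of the composition: only summaries flagged by the
    # detector are critiqued and refined; good summaries pass through.
    if not detect(summary, document):
        return summary
    feedback = critique(summary, document)
    return refine(summary, feedback)

document = "The report describes a car parked outside the office."
summary = "A blue car was parked outside the office."
print(dcr(summary, document))  # → "A car was parked outside the office."
```

Offloading detection to a separate model (rather than asking one LLM to judge, critique, and rewrite end-to-end) is what lets the critique step assume an error exists and focus on describing it precisely.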

Citation

Wadhwa, M., Zhao, X., Li, J. J., & Durrett, G. (2024). Learning to Refine with Fine-Grained Natural Language Feedback. arXiv:2407.02397. https://arxiv.org/abs/2407.02397

Publication Information
Year: 2024
Language: English
Source Database: arXiv
Access: Open Access