arXiv Open Access 2024

Finetuning Language Models to Emit Linguistic Expressions of Uncertainty

Arslan Chaudhry, Sridhar Thiagarajan, Dilan Gorur

Abstract

Large language models (LLMs) are increasingly employed in information-seeking and decision-making tasks. Despite their broad utility, LLMs tend to generate information that conflicts with real-world facts, and their persuasive style can make these inaccuracies appear confident and convincing. As a result, end-users struggle to consistently align the confidence expressed by LLMs with the accuracy of their predictions, often leading to either blind trust in all outputs or a complete disregard for their reliability. In this work, we explore supervised finetuning on uncertainty-augmented predictions as a method to develop models that produce linguistic expressions of uncertainty. Specifically, we measure the calibration of pre-trained models and then fine-tune language models to generate calibrated linguistic expressions of uncertainty. Through experiments on various question-answering datasets, we demonstrate that LLMs are well-calibrated in assessing their predictions, and supervised finetuning based on the model's own confidence leads to well-calibrated expressions of uncertainty, particularly for single-claim answers.
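The pipeline the abstract outlines — score the model's confidence on its own predictions, check that those scores are calibrated, then map them to words — can be sketched roughly as follows. The phrase buckets, thresholds, and helper names below are illustrative assumptions, not the paper's actual implementation; the expected calibration error (ECE) computation is the standard binned formulation.

```python
from typing import List, Tuple

# Hypothetical mapping from numeric confidence to a linguistic
# expression of uncertainty. The paper's actual phrase set is not
# reproduced here; these buckets are illustrative only.
CONFIDENCE_PHRASES = [
    (0.9, "I'm almost certain"),
    (0.7, "I'm fairly confident"),
    (0.5, "I think"),
    (0.3, "I'm unsure, but possibly"),
    (0.0, "I really don't know, but my guess is"),
]

def verbalize(confidence: float) -> str:
    """Return the phrase for the highest threshold the confidence meets."""
    for threshold, phrase in CONFIDENCE_PHRASES:
        if confidence >= threshold:
            return phrase
    return CONFIDENCE_PHRASES[-1][1]

def expected_calibration_error(
    preds: List[Tuple[float, bool]], n_bins: int = 10
) -> float:
    """Standard ECE: bin predictions by confidence, then take the
    size-weighted gap between each bin's mean confidence and its
    empirical accuracy."""
    bins: List[List[Tuple[float, bool]]] = [[] for _ in range(n_bins)]
    for conf, correct in preds:
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, correct))
    total = len(preds)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / total) * abs(avg_conf - accuracy)
    return ece
```

A model is well calibrated when ECE is near zero — e.g. answers verbalized as "I'm fairly confident" (≈0.7) should be right about 70% of the time. The finetuning targets would then pair each answer with the phrase matching the model's measured confidence.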

Authors (3)

Arslan Chaudhry
Sridhar Thiagarajan
Dilan Gorur

Citation Format

Chaudhry, A., Thiagarajan, S., & Gorur, D. (2024). Finetuning Language Models to Emit Linguistic Expressions of Uncertainty. arXiv preprint arXiv:2409.12180. https://arxiv.org/abs/2409.12180

Journal Information

Publication Year: 2024
Language: English (en)
Source Database: arXiv
Access: Open Access ✓