arXiv Open Access 2024

THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models

Mengfei Liang, Archish Arun, Zekun Wu, Cristian Munoz, Jonathan Lutch, +3 others

Abstract

Hallucination, the generation of factually incorrect content, is a growing challenge in Large Language Models (LLMs). Existing detection and mitigation methods are often isolated and insufficient for domain-specific needs, lacking a standardized pipeline. This paper introduces THaMES (Tool for Hallucination Mitigations and EvaluationS), an integrated framework and library addressing this gap. THaMES offers an end-to-end solution for evaluating and mitigating hallucinations in LLMs, featuring automated test set generation, multifaceted benchmarking, and adaptable mitigation strategies. It automates test set creation from any corpus, ensuring high data quality, diversity, and cost-efficiency through techniques like batch processing, weighted sampling, and counterfactual validation. THaMES assesses a model's ability to detect and reduce hallucinations across various tasks, including text generation and binary classification, applying optimal mitigation strategies like In-Context Learning (ICL), Retrieval Augmented Generation (RAG), and Parameter-Efficient Fine-tuning (PEFT). Evaluations of state-of-the-art LLMs using a knowledge base of academic papers, political news, and Wikipedia reveal that commercial models like GPT-4o benefit more from RAG than ICL, while open-weight models like Llama-3.1-8B-Instruct and Mistral-Nemo gain more from ICL. Additionally, PEFT significantly enhances the performance of Llama-3.1-8B-Instruct in both evaluation tasks.


Authors (8)

Mengfei Liang
Archish Arun
Zekun Wu
Cristian Munoz
Jonathan Lutch
Emre Kazim
Adriano Koshiyama
Philip Treleaven

Citation Format

Liang, M., Arun, A., Wu, Z., Munoz, C., Lutch, J., Kazim, E. et al. (2024). THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models. https://arxiv.org/abs/2409.11353

Journal Information
Publication Year
2024
Language
en
Source Database
arXiv
Access
Open Access ✓