
Measuring Moral Inconsistencies in Large Language Models

Vamshi Krishna Bonagiri, Sreeram Vennam, Manas Gaur, Ponnurangam Kumaraguru

Abstract

A Large Language Model (LLM) is considered consistent if semantically equivalent prompts produce semantically equivalent responses. Despite recent advancements showcasing the impressive capabilities of LLMs in conversational systems, we show that even state-of-the-art LLMs are highly inconsistent in their generations, calling their reliability into question. Prior research has tried to measure consistency with task-specific accuracy, but this approach is unsuitable for moral scenarios such as the trolley problem, which have no single "correct" answer. To address this issue, we propose a novel information-theoretic measure called Semantic Graph Entropy (SGE) to measure the consistency of an LLM in moral scenarios. We leverage "Rules of Thumb" (RoTs) to explain a model's decision-making strategies and to further enhance our metric. Compared to existing consistency metrics, SGE correlates better with human judgments across five LLMs. In future work, we aim to investigate the root causes of LLM inconsistencies and propose improvements.
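The abstract does not reproduce the formal definition of SGE, so the snippet below is only a minimal, hypothetical sketch of the general idea behind entropy-based consistency scoring: embed a model's responses to semantically equivalent prompts, group the embeddings into semantic clusters, and take the Shannon entropy of the cluster sizes (near zero when all responses agree, larger when they diverge). The function name, similarity threshold, greedy clustering step, and synthetic embeddings are illustrative assumptions, not the paper's method.

```python
import numpy as np


def semantic_entropy(embeddings: np.ndarray, threshold: float = 0.9) -> float:
    """Toy inconsistency score: entropy over semantic clusters of responses.

    NOTE: this is NOT the paper's Semantic Graph Entropy (SGE); the abstract
    does not give its definition. It only illustrates an information-theoretic
    consistency measure over responses to semantically equivalent prompts.
    """
    # Normalize embeddings so dot products are cosine similarities.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

    # Greedy clustering: a response joins the first cluster whose centroid
    # it matches above `threshold`, otherwise it starts a new cluster.
    clusters = []
    for vec in normed:
        for cluster in clusters:
            centroid = np.mean(cluster, axis=0)
            centroid = centroid / np.linalg.norm(centroid)
            if float(vec @ centroid) >= threshold:
                cluster.append(vec)
                break
        else:
            clusters.append([vec])

    # Shannon entropy of the cluster-size distribution:
    # 0.0 when every response falls in one cluster (fully consistent).
    sizes = np.array([len(c) for c in clusters], dtype=float)
    p = sizes / sizes.sum()
    return float(-(p * np.log(p)).sum())


# Synthetic example: five near-identical "responses" vs. five scattered ones.
rng = np.random.default_rng(0)
consistent = np.array([1.0, 0.0, 0.0, 0.0]) + rng.normal(0, 0.01, size=(5, 4))
scattered = rng.normal(0, 1.0, size=(5, 4))
print(semantic_entropy(consistent))  # ~0.0
print(semantic_entropy(scattered))   # larger, approaching log(5)
```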


Authors (4)

Vamshi Krishna Bonagiri
Sreeram Vennam
Manas Gaur
Ponnurangam Kumaraguru

Citation Format

Bonagiri, V.K., Vennam, S., Gaur, M., Kumaraguru, P. (2024). Measuring Moral Inconsistencies in Large Language Models. https://arxiv.org/abs/2402.01719

Journal Information
Publication Year: 2024
Language: en
Source Database: arXiv
Access: Open Access ✓