arXiv Open Access 2025

The Pluralistic Moral Gap: Understanding Judgment and Value Differences between Humans and Large Language Models

Giuseppe Russo Debora Nozza Paul Röttger Dirk Hovy

Abstract

People increasingly rely on Large Language Models (LLMs) for moral advice, which may influence humans' decisions. Yet, little is known about how closely LLMs align with human moral judgments. To address this, we introduce the Moral Dilemma Dataset, a benchmark of 1,618 real-world moral dilemmas paired with a distribution of human moral judgments consisting of a binary evaluation and a free-text rationale. We treat this problem as a pluralistic distributional alignment task, comparing the distributions of LLM and human judgments across dilemmas. We find that models reproduce human judgments only under high consensus; alignment deteriorates sharply when human disagreement increases. In parallel, using a 60-value taxonomy built from 3,783 value expressions extracted from rationales, we show that LLMs rely on a narrower set of moral values than humans. These findings reveal a pluralistic moral gap: a mismatch in both the distribution and diversity of values expressed. To close this gap, we introduce Dynamic Moral Profiling (DMP), a Dirichlet-based sampling method that conditions model outputs on human-derived value profiles. DMP improves alignment by 64.3% and enhances value diversity, offering a step toward more pluralistic and human-aligned moral guidance from LLMs.
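The abstract describes Dynamic Moral Profiling (DMP) as a Dirichlet-based sampling method that conditions model outputs on human-derived value profiles. The paper's actual taxonomy has 60 values and its conditioning mechanism is not detailed here, so the following is only a minimal sketch of the core idea: drawing a probability profile over a (hypothetical, miniature) value set from a Dirichlet distribution and using it to steer a prompt. All names, values, and concentration parameters below are illustrative assumptions, not the authors' implementation.

```python
import random

# Hypothetical miniature stand-in for the paper's 60-value taxonomy.
VALUES = ["care", "fairness", "loyalty", "honesty", "autonomy"]

def sample_value_profile(alpha, rng=random.Random(0)):
    """Draw one profile from Dirichlet(alpha) via normalized Gamma draws.

    A Dirichlet sample is equivalent to independent Gamma(alpha_i, 1)
    draws divided by their sum, so the stdlib suffices.
    """
    draws = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(draws)
    return [d / total for d in draws]

# Concentration parameters that would, in principle, be estimated from
# human rationales: larger alpha_i means value i appears more often.
alpha = [4.0, 3.0, 1.0, 2.0, 0.5]
profile = sample_value_profile(alpha)

# One simple way to condition a model on the sampled profile: embed the
# weighted values in the prompt (purely illustrative).
weighted = sorted(zip(VALUES, profile), key=lambda p: -p[1])
prompt_hint = ", ".join(f"{v} ({w:.2f})" for v, w in weighted)
print(f"Weigh these moral values when judging: {prompt_hint}")
```

Because each dilemma gets a freshly sampled profile, the model's advice varies across the value distribution rather than collapsing onto a narrow default set, which is the pluralism the paper aims to restore.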


Authors (4)

Giuseppe Russo, Debora Nozza, Paul Röttger, Dirk Hovy

Citation

Russo, G., Nozza, D., Röttger, P., & Hovy, D. (2025). The Pluralistic Moral Gap: Understanding Judgment and Value Differences between Humans and Large Language Models. https://arxiv.org/abs/2507.17216

Journal Information

Year of Publication: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓