
Everyone Deserves A Reward: Learning Customized Human Preferences

Pengyu Cheng Jiawen Xie Ke Bai Yong Dai Nan Du

Abstract

Reward models (RMs) are essential for aligning large language models (LLMs) with human preferences to improve interaction quality. However, the real world is pluralistic, which leads to diversified human preferences across religions, politics, cultures, etc. Moreover, each individual can have unique preferences on various topics. Neglecting this diversity, current human-feedback alignment methods consider only a general reward model, which falls short in customized or personalized application scenarios. To explore customized preference learning, we collect a domain-specific preference (DSP) dataset, which includes a preferred response for each given query from four practical domains. In addition, from the perspective of data efficiency, we propose a three-stage customized RM learning scheme and empirically verify its effectiveness on both general preference datasets and our DSP set. Furthermore, we test multiple training and data strategies across the three learning stages. We find several ways to better preserve general preference ability while training customized RMs, especially general preference enrichment and customized preference imitation learning. The DSP dataset and code are available at https://github.com/Linear95/DSP.
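For context, reward models of this kind are conventionally trained on preference pairs with a Bradley-Terry style ranking loss. The sketch below is a minimal, generic illustration of that objective only; the toy GRU backbone, field names, and shapes are assumptions for demonstration, and it does not reproduce the paper's three-stage customized RM scheme.

```python
# Minimal sketch of pairwise reward-model training with the standard
# Bradley-Terry loss. Backbone and batch fields are illustrative
# assumptions, not the paper's exact implementation.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a (query, response) token sequence with a scalar reward."""
    def __init__(self, hidden_size: int = 768, vocab_size: int = 32000):
        super().__init__()
        # Stand-in encoder; in practice this would be a pretrained LLM backbone.
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.encoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.value_head = nn.Linear(hidden_size, 1)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        h, _ = self.encoder(self.embed(input_ids))
        # Use the final hidden state as the sequence summary.
        return self.value_head(h[:, -1]).squeeze(-1)

def pairwise_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry objective: maximize the log-sigmoid of the reward
    # margin between the preferred and the rejected response.
    return -nn.functional.logsigmoid(r_chosen - r_rejected).mean()

# Toy usage: token ids for a preferred and a rejected response to the same query.
model = RewardModel()
chosen = torch.randint(0, 32000, (4, 16))    # batch of preferred sequences
rejected = torch.randint(0, 32000, (4, 16))  # batch of rejected sequences
loss = pairwise_loss(model(chosen), model(rejected))
loss.backward()
```

A customized RM in the paper's sense would start from this kind of pairwise objective and then be adapted to domain-specific pairs such as those in DSP; the specific staging and data strategies are described in the paper itself.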


Authors (5)

Pengyu Cheng
Jiawen Xie
Ke Bai
Yong Dai
Nan Du

Citation Format

Cheng, P., Xie, J., Bai, K., Dai, Y., & Du, N. (2023). Everyone Deserves A Reward: Learning Customized Human Preferences. arXiv preprint arXiv:2309.03126. https://arxiv.org/abs/2309.03126

Journal Information

Publication Year: 2023
Language: English
Source Database: arXiv
Access: Open Access ✓