arXiv Open Access 2021

MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning

Markus Peschl Arkady Zgonnikov Frans A. Oliehoek Luciano C. Siebert

Abstract

Inferring reward functions from demonstrations and pairwise preferences are promising approaches for aligning Reinforcement Learning (RL) agents with human intentions. However, state-of-the-art methods typically focus on learning a single reward model, making it difficult to trade off different reward functions from multiple experts. We propose Multi-Objective Reinforced Active Learning (MORAL), a novel method for combining diverse demonstrations of social norms into a Pareto-optimal policy. By maintaining a distribution over scalarization weights, our approach can interactively tune a deep RL agent towards a variety of preferences, while eliminating the need to compute multiple policies. We empirically demonstrate the effectiveness of MORAL in two scenarios, which model a delivery task and an emergency task that require an agent to act in the presence of normative conflicts. Overall, we consider our research a step towards multi-objective RL with learned rewards, bridging the gap between the current reward learning and machine ethics literatures.
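The abstract's central idea, combining several learned reward functions through a distribution over scalarization weights, can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the two-objective reward vector, the Dirichlet prior over weights, and the `scalarize` helper are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def scalarize(reward_vector, weights):
    """Linear scalarization: collapse a vector-valued reward into one scalar."""
    return float(np.dot(weights, reward_vector))

# Hypothetical two-objective reward signal, e.g. (task progress, norm compliance).
reward_vector = np.array([1.0, -0.5])

# Maintain a distribution over scalarization weights (here, a uniform Dirichlet
# prior on the 2-simplex) instead of committing to a single fixed trade-off.
weight_samples = rng.dirichlet(alpha=[1.0, 1.0], size=1000)

# A point estimate of the current trade-off is the posterior/prior mean;
# in MORAL-style active learning this distribution would be refined from
# pairwise preference queries.
mean_weights = weight_samples.mean(axis=0)

scalar_reward = scalarize(reward_vector, mean_weights)
```

Because each weight sample lies on the simplex (non-negative, summing to one), the scalarized reward is always a convex combination of the individual objectives, which is what lets a single policy be tuned across trade-offs rather than training one policy per preference.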


Citation

Peschl, M., Zgonnikov, A., Oliehoek, F.A., Siebert, L.C. (2021). MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning. https://arxiv.org/abs/2201.00012

Publication Information
Year Published: 2021
Language: en
Source Database: arXiv
Access: Open Access ✓