arXiv Open Access 2021

Adversarial Attacks in Cooperative AI

Ted Fujimoto Arthur Paul Pedersen

Abstract

Single-agent reinforcement learning algorithms in a multi-agent environment are inadequate for fostering cooperation. If intelligent agents are to interact and work together to solve complex problems, methods that counter non-cooperative behavior are needed to facilitate the training of multiple agents. This is the goal of cooperative AI. Recent research in adversarial machine learning, however, shows that models (e.g., image classifiers) can be easily deceived into making inferior decisions. Meanwhile, an important line of research in cooperative AI has focused on introducing algorithmic improvements that accelerate learning of optimally cooperative behavior. We argue that prominent methods of cooperative AI are exposed to weaknesses analogous to those studied in prior machine learning research. More specifically, we show that three algorithms inspired by human-like social intelligence are, in principle, vulnerable to attacks that exploit weaknesses introduced by cooperative AI's algorithmic improvements and report experimental findings that illustrate how these vulnerabilities can be exploited in practice.

Topics & Keywords

Authors (2)

Ted Fujimoto

Arthur Paul Pedersen

Citation Format

Fujimoto, T., & Pedersen, A. P. (2021). Adversarial Attacks in Cooperative AI. arXiv:2111.14833. https://arxiv.org/abs/2111.14833

Publication Information
Publication Year
2021
Language
en
Source Database
arXiv
Access
Open Access ✓