Semantic Scholar · Open Access · 2016 · 314 citations

Artificial General Intelligence

James Babcock, János Kramár, Roman Yampolskiy

Abstract

There is considerable uncertainty about what properties, capabilities and motivations future AGIs will have. In some plausible scenarios, AGIs may pose security risks arising from accidents and defects. In order to mitigate these risks, prudent early AGI research teams will perform significant testing on their creations before use. Unfortunately, if an AGI has human-level or greater intelligence, testing itself may not be safe; some natural AGI goal systems create emergent incentives for AGIs to tamper with their test environments, make copies of themselves on the internet, or convince developers and operators to do dangerous things. In this paper, we survey the AGI containment problem – the question of how to build a container in which tests can be conducted safely and reliably, even on AGIs with unknown motivations and capabilities that could be dangerous. We identify requirements for AGI containers, available mechanisms, and weaknesses that need to be addressed.


Authors (3)

James Babcock

János Kramár

Roman Yampolskiy

Citation Format

Babcock, J., Kramár, J., Yampolskiy, R. (2016). Artificial General Intelligence. https://doi.org/10.1007/978-3-319-41649-6

Quick Access

PDF not directly available; see source: doi.org/10.1007/978-3-319-41649-6
Journal Information
Publication Year: 2016
Language: en
Total Citations: 314
Source Database: Semantic Scholar
DOI: 10.1007/978-3-319-41649-6
Access: Open Access ✓