arXiv Open Access 2025

Technical Requirements for Halting Dangerous AI Activities

Peter Barnett Aaron Scher David Abecassis

Abstract

The rapid development of AI systems poses unprecedented risks, including loss of control, misuse, geopolitical instability, and concentration of power. To navigate these risks and avoid worst-case outcomes, governments may proactively establish the capability for a coordinated halt on dangerous AI development and deployment. In this paper, we outline key technical interventions that could allow for a coordinated halt on dangerous AI activities. We discuss how these interventions may contribute to restricting various dangerous AI activities, and show how these interventions can form the technical foundation for potential AI governance plans.



Citation

Barnett, P., Scher, A., Abecassis, D. (2025). Technical Requirements for Halting Dangerous AI Activities. https://arxiv.org/abs/2507.09801

Journal Information
Year: 2025
Language: en
Database: arXiv
Access: Open Access ✓