arXiv Open Access 2026

Human Control Is the Anchor, Not the Answer: Early Divergence of Oversight in Agentic AI Communities

Hanjing Shi, Dominic DiFranzo
Abstract

Oversight for agentic AI is often discussed as a single goal ("human control"), yet early adoption may produce role-specific expectations. We present a comparative analysis of two newly active Reddit communities in Jan--Feb 2026 that reflect different socio-technical roles: r/OpenClaw (deployment and operations) and r/Moltbook (agent-centered social interaction). We conceptualize this period as an early-stage crystallization phase, where oversight expectations form before norms reach equilibrium. Using topic modeling in a shared comparison space, a coarse-grained oversight-theme abstraction, engagement-weighted salience, and divergence tests, we show the communities are strongly separable (JSD = 0.418, cosine = 0.372, permutation p = 0.0005). Across both communities, "human control" is an anchor term, but its operational meaning diverges: r/OpenClaw emphasizes execution guardrails and recovery (action-risk), while r/Moltbook emphasizes identity, legitimacy, and accountability in public interaction (meaning-risk). The resulting distinction offers a portable lens for designing and evaluating oversight mechanisms that match agent role, rather than applying one-size-fits-all control policies.
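The abstract reports a Jensen-Shannon divergence, a cosine similarity, and a label-permutation p-value over the two communities' topic distributions. The paper's actual pipeline is not reproduced here; the sketch below only illustrates how such statistics could be computed from per-document topic vectors. All function names, the label-shuffling scheme, and the add-one p-value convention are assumptions, not the authors' code.

```python
import numpy as np

def jsd(p, q, base=2.0):
    """Jensen-Shannon divergence between two discrete distributions (assumed sketch)."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        # KL divergence restricted to the support of a, in the given log base
        mask = a > 0
        return np.sum(a[mask] * (np.log(a[mask] / b[mask]) / np.log(base)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def cosine_sim(p, q):
    """Cosine similarity between two topic-weight vectors."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))

def permutation_test(doc_topics, labels, n_perm=2000, seed=0):
    """Shuffle document-community labels and recompute JSD to get a null distribution.

    doc_topics: (n_docs, n_topics) array of per-document topic distributions.
    labels: 0/1 community membership per document.
    Returns (observed JSD, permutation p-value with add-one smoothing).
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    def community_dists(lab):
        return (doc_topics[lab == 0].mean(axis=0),
                doc_topics[lab == 1].mean(axis=0))
    obs = jsd(*community_dists(labels))
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(labels)
        if jsd(*community_dists(perm)) >= obs:
            count += 1
    return obs, (count + 1) / (n_perm + 1)
```

For identical distributions JSD is 0; for disjoint-support distributions it reaches 1 in base 2, so the reported 0.418 would sit well above typical within-community noise under this convention.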

Authors (2)

Hanjing Shi
Dominic DiFranzo

Citation Format

Shi, H., & DiFranzo, D. (2026). Human Control Is the Anchor, Not the Answer: Early Divergence of Oversight in Agentic AI Communities. arXiv. https://arxiv.org/abs/2602.09286

Journal Information

Year Published: 2026
Language: en
Source Database: arXiv
Access: Open Access ✓