arXiv Open Access 2026

AI and World Models

Robert Worden

Abstract

While large neural nets perform impressively on specific tasks, they are unreliable and unsafe, as is shown by the persistent hallucinations of large language models. This paper shows that large neural nets are intrinsically unreliable, because it is not possible to make or validate a tractable theory of how a neural net works. There is no reliable way to extrapolate its performance from a limited number of test cases to an unlimited set of use cases. To have confidence in the performance of a neural net, it is necessary to enclose it in a guardrail which is provably safe, so that whatever the neural net does, there cannot be harmful consequences. World models have been proposed as a way to do this. This paper discusses the scope and architecture required of world models. World models are often conceived as models of the physical and natural world, using established theories of natural science, or learned regularities, to predict the physical consequences of AI actions. However, unforeseen consequences of AI actions impact the human social world as much as the physical world. To predict and control the consequences of AI, a world model needs to include a model of the human social world. I explore the challenges that this entails. Human language is based on a Common Ground of mutual understanding of the world, shared by the people conversing. The common ground is an overlapping subset of each person's world model, including their models of the physical, social and mental worlds. LLMs have no stable representation of a common ground. To be reliable, AI systems will need to represent a common ground with their users, including physical, mental and social domains.


Author (1)

Robert Worden

Citation Format

Worden, R. (2026). AI and World Models. https://arxiv.org/abs/2601.17796

Journal Information
Publication Year: 2026
Language: en
Source Database: arXiv
Access: Open Access ✓