
Deterministic or probabilistic? The psychology of LLMs as random number generators

Javier Coronado-Blázquez

Abstract

Large Language Models (LLMs) have transformed text generation through inherently probabilistic, context-aware mechanisms that mimic human natural language. In this paper, we systematically investigate the performance of various LLMs when generating random numbers, considering diverse configurations such as different model architectures, numerical ranges, temperatures, and prompt languages. Our results reveal that, despite their stochastic transformer-based architecture, these models often exhibit deterministic responses when prompted for random numerical outputs. In particular, we find significant differences when changing the model as well as the prompt language, and attribute this phenomenon to biases deeply embedded within the training data. Models such as DeepSeek-R1 can shed some light on the internal reasoning process of LLMs, despite arriving at similar results. These biases induce predictable patterns that undermine genuine randomness, as LLMs ultimately do little more than reproduce our own human cognitive biases.
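The kind of experiment the abstract describes can be illustrated with a minimal sketch: repeatedly prompt a model for a "random" number in a fixed range at a given temperature and tally how the replies are distributed. This is not the paper's code; the `query_model` function below is a hypothetical stand-in for whatever LLM client is under test.

```python
# Illustrative sketch (assumptions, not the paper's implementation):
# tally how often each value appears when an LLM is repeatedly asked
# for a random number in a fixed range.
from collections import Counter


def query_model(prompt: str, temperature: float) -> str:
    """Hypothetical placeholder for an LLM API call (e.g. a chat-completions
    endpoint); replace with the client of the model being evaluated."""
    raise NotImplementedError


def sample_numbers(low: int, high: int, n_trials: int, temperature: float) -> Counter:
    """Ask the model n_trials times and count how often each number is returned."""
    prompt = f"Give me a random number between {low} and {high}. Reply with the number only."
    counts = Counter()
    for _ in range(n_trials):
        reply = query_model(prompt, temperature)
        digits = "".join(ch for ch in reply if ch.isdigit())
        if digits:
            counts[int(digits)] += 1
    return counts


# A truly uniform generator over 1-10 would place roughly 10% of the mass on
# each value; the paper's finding is that LLM outputs instead concentrate on a
# few favored numbers, echoing human biases.
# counts = sample_numbers(1, 10, n_trials=200, temperature=1.0)
# print(counts.most_common())
```

Repeating the same loop across models, numerical ranges, temperatures, and prompt languages would reproduce the comparison grid the abstract outlines.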


Author (1)

Javier Coronado-Blázquez

Citation Format

Coronado-Blázquez, J. (2025). Deterministic or probabilistic? The psychology of LLMs as random number generators. https://arxiv.org/abs/2502.19965

Journal Information

Publication Year: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓