arXiv Open Access 2026

Linguistic properties and model scale in brain encoding: from small to compressed language models

Subba Reddy Oota, Vijay Rowtula, Satya Sai Srinath Namburi, Khushbu Pahwa, Anant Khandelwal, +3 others

Abstract

Recent work has shown that scaling large language models (LLMs) improves their alignment with human brain activity, yet it remains unclear what drives these gains and which representational properties are responsible. Although larger models often yield better task performance and brain alignment, they are increasingly difficult to analyze mechanistically. This raises a fundamental question: what is the minimal model capacity required to capture brain-relevant representations? To address this question, we systematically investigate how constraining model scale and numerical precision affects brain alignment. We compare full-precision LLMs, small language models (SLMs), and compressed variants (quantized and pruned) by predicting fMRI responses during naturalistic language comprehension. Across model families up to 14B parameters, we find that 3B SLMs achieve brain predictivity indistinguishable from larger LLMs, whereas 1B models degrade substantially, particularly in semantic language regions. Brain alignment is remarkably robust to compression: most quantization and pruning methods preserve neural predictivity, with GPTQ as a consistent exception. Linguistic probing reveals a dissociation between task performance and brain predictivity: compression degrades discourse, syntax, and morphology, yet brain predictivity remains largely unchanged. Overall, brain alignment saturates at modest model scales and is resilient to compression, challenging common assumptions about neural scaling and motivating compact models for brain-aligned language modeling.
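The abstract's core measurement, "predicting fMRI responses during naturalistic language comprehension," is typically done with a voxelwise linear encoding model. The sketch below illustrates that standard approach under stated assumptions: it is not the authors' actual pipeline, and the data shapes, the ridge penalty `lam`, and the synthetic signal strength are all illustrative placeholders.

```python
# Minimal sketch of a voxelwise brain-encoding pipeline: ridge regression
# from language-model representations to fMRI responses, scored by held-out
# per-voxel Pearson correlation ("brain predictivity"). All data here are
# synthetic; shapes and hyperparameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: hidden states for 200 stimulus time points (dim 64)
# and fMRI responses for 50 voxels at the same time points.
X = rng.standard_normal((200, 64))              # model representations
W_true = 0.3 * rng.standard_normal((64, 50))    # synthetic ground truth
Y = X @ W_true + rng.standard_normal((200, 50)) # noisy "fMRI" responses

# Train/test split over time points.
X_tr, X_te = X[:160], X[160:]
Y_tr, Y_te = Y[:160], Y[160:]

# Closed-form ridge regression: W = (X'X + lam*I)^{-1} X'Y.
lam = 10.0
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1]),
                    X_tr.T @ Y_tr)

# Brain predictivity: Pearson correlation between predicted and held-out
# responses, computed per voxel and then averaged.
pred = X_te @ W
r = np.array([np.corrcoef(pred[:, v], Y_te[:, v])[0, 1]
              for v in range(Y_te.shape[1])])
print(f"mean voxelwise correlation: {r.mean():.3f}")
```

Comparing this mean correlation across model families and compression variants is what the abstract's claims about saturation at 3B parameters and robustness to quantization/pruning refer to.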

Authors (8)

Subba Reddy Oota

Vijay Rowtula

Satya Sai Srinath Namburi

Khushbu Pahwa

Anant Khandelwal

Manish Gupta

Tanmoy Chakraborty

Bapi S. Raju

Citation Format

Oota, S.R., Rowtula, V., Namburi, S.S.S., Pahwa, K., Khandelwal, A., Gupta, M. et al. (2026). Linguistic properties and model scale in brain encoding: from small to compressed language models. https://arxiv.org/abs/2602.07547

Journal Information
Publication Year
2026
Language
en
Source Database
arXiv
Access
Open Access ✓