
Learning Dynamics of Meta-Learning in Small Model Pretraining

David Demitri Africa Yuval Weiss Paula Buttery Richard Diehl Martinez

Abstract

Large language models are powerful but costly. We ask whether meta-learning can make the pretraining of small language models not only better but also more interpretable. We integrate first-order MAML with subset-masked LM pretraining, producing four LLaMA-style decoder-only models (11M–570M params), and evaluate them on a fundamental NLP task with many settings and real-world applications. Compared with vanilla training, our models (i) reach the same loss up to 1.6x sooner, (ii) improve F1 on multilingual Universal NER under equal compute, and (iii) make the training dynamics easy to read: first the network's representations fan out ("diversify") and later they collapse into a smaller, shared subspace ("compress"). This two-stage shift shows up as a rise and fall in both effective-rank curves and attention-head entropy. The same curves pinpoint which layers specialise earliest and which later reconverge, giving a compact, interpretable signature of meta-adaptation. Code, checkpoints, and WandB logs are released.
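For readers skimming the abstract, the meta-learning step can be pictured as follows. This is a minimal first-order MAML (FOMAML) sketch, not the paper's actual training loop: it assumes each task supplies a (support, query) pair of token batches, that `model(batch)` returns a scalar LM loss, and names such as `fomaml_step`, `inner_lr`, and `outer_lr` are illustrative rather than taken from the released code.

```python
import copy
import torch


def fomaml_step(model, tasks, inner_lr=1e-3, outer_lr=1e-4, inner_steps=1):
    """One first-order MAML (FOMAML) outer step over a batch of tasks.

    Assumes each task is a (support_batch, query_batch) pair and that
    calling `model(batch)` returns a scalar language-modeling loss.
    """
    outer_grads = [torch.zeros_like(p) for p in model.parameters()]

    for support, query in tasks:
        # Inner loop: adapt a temporary copy of the model on the support set.
        fast = copy.deepcopy(model)
        opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            opt.zero_grad()
            fast(support).backward()
            opt.step()

        # First-order approximation: take query-set gradients at the adapted
        # weights and accumulate them as if they were gradients of the
        # original parameters, ignoring second-order terms.
        fast.zero_grad()
        fast(query).backward()
        for g, fp in zip(outer_grads, fast.parameters()):
            g += fp.grad / len(tasks)

    # Outer update on the original (meta) parameters.
    with torch.no_grad():
        for p, g in zip(model.parameters(), outer_grads):
            p -= outer_lr * g
```

Dropping the second-order terms of full MAML is what keeps this cheap enough to fold into pretraining: the backward pass through the inner optimization is never computed.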
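The "diversify then compress" signature is read off the two diagnostics the abstract names: effective rank of hidden representations and attention-head entropy. Below is a sketch of common formulations (the paper's exact definitions may differ): effective rank as the exponential of the Shannon entropy of the normalized singular values, and head entropy as the mean entropy of attention rows.

```python
import torch


def effective_rank(h, eps=1e-12):
    """Effective rank of a representation matrix `h` (tokens x dim):
    exp of the Shannon entropy of the normalized singular values."""
    s = torch.linalg.svdvals(h)
    p = s / (s.sum() + eps)
    return torch.exp(-(p * (p + eps).log()).sum())


def attention_entropy(attn, eps=1e-12):
    """Mean entropy of attention distributions for one layer.
    `attn` has shape (heads, query_len, key_len); rows sum to 1."""
    return -(attn * (attn + eps).log()).sum(-1).mean()
```

Under these definitions, a rise in both quantities corresponds to the "diversify" phase (representations spread over more directions, heads attend more broadly), and the later fall corresponds to "compress".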


Authors (4)

David Demitri Africa
Yuval Weiss
Paula Buttery
Richard Diehl Martinez

Citation Format

Africa, D. D., Weiss, Y., Buttery, P., & Diehl Martinez, R. (2025). Learning Dynamics of Meta-Learning in Small Model Pretraining. arXiv:2508.02189. https://arxiv.org/abs/2508.02189

Journal Information
Year Published
2025
Language
en
Source Database
arXiv
Access
Open Access ✓