
Scaling Laws for Neural Language Models

J. Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, Benjamin Chess, +5 others

Abstract

We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as network width or depth have minimal effects within a wide range. Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size. These relationships allow us to determine the optimal allocation of a fixed compute budget. Larger models are significantly more sample-efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence.
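The abstract describes the loss as following a power law in model size, dataset size, and training compute. As a minimal illustration of that general form (the constants X_c and the exponents α_X are fitted in the paper; no specific values are taken from this page), the single-resource scaling law can be written as:

```latex
% Power-law scaling of cross-entropy loss, as described in the abstract.
% X stands for one resource (parameter count N, dataset size D, or compute C)
% when the other two are not bottlenecks; X_c and \alpha_X are fitted
% constants whose values are not given on this page.
\[
  L(X) \approx \left(\frac{X_c}{X}\right)^{\alpha_X},
  \qquad X \in \{N,\, D,\, C\}
\]
```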

Authors (10)

J. Kaplan
Sam McCandlish
T. Henighan
Tom B. Brown
Benjamin Chess
R. Child
Scott Gray
Alec Radford
Jeff Wu
Dario Amodei

Citation Format

Kaplan, J., McCandlish, S., Henighan, T., Brown, T.B., Chess, B., Child, R. et al. (2020). Scaling Laws for Neural Language Models. https://www.semanticscholar.org/paper/e6c561d02500b2596a230b341a8eb8b921ca5bf2

Quick Access

PDF not directly available; check the original source.
Journal Information

Year Published: 2020
Language: English
Total Citations: 7,499
Source Database: Semantic Scholar
Access: Open Access ✓