On the Variance of the Adaptive Learning Rate and Beyond

Abstract

The learning rate warmup heuristic achieves remarkable success in stabilizing training, accelerating convergence and improving generalization for adaptive stochastic optimization algorithms like RMSprop and Adam. Here, we study its mechanism in detail. Pursuing the theory behind warmup, we identify a problem of the adaptive learning rate (i.e., it has problematically large variance in the early stage), suggest warmup works as a variance reduction technique, and provide both empirical and theoretical evidence to verify our hypothesis. We further propose RAdam, a new variant of Adam, by introducing a term to rectify the variance of the adaptive learning rate. Extensive experimental results on image classification, language modeling, and neural machine translation verify our intuition and demonstrate the effectiveness and robustness of our proposed method. All implementations are available at: this https URL.
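
To illustrate the idea of rectifying the adaptive learning rate's variance, the sketch below applies a rectification term to the adaptive step only once the variance estimate becomes tractable, and otherwise falls back to an un-adapted momentum update. This is a minimal, single-parameter sketch assuming the commonly used RAdam formulation (ρ_∞, ρ_t, r_t); the function name, variable names, and hyperparameter defaults are illustrative and are not taken from the abstract itself.

```python
import math

def radam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One illustrative RAdam-style update for a single scalar parameter."""
    # Exponential moving averages of the gradient and its square, as in Adam.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)  # bias-corrected first moment

    # Length of the approximated simple moving average and its current value.
    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    rho_t = rho_inf - 2.0 * t * (beta2 ** t) / (1.0 - beta2 ** t)

    if rho_t > 4.0:
        # Variance of the adaptive learning rate is tractable: rectify and adapt.
        v_hat = math.sqrt(v / (1 - beta2 ** t))
        r_t = math.sqrt(((rho_t - 4) * (rho_t - 2) * rho_inf) /
                        ((rho_inf - 4) * (rho_inf - 2) * rho_t))
        param = param - lr * r_t * m_hat / (v_hat + eps)
    else:
        # Early steps: skip the adaptive denominator (momentum-SGD-like update).
        param = param - lr * m_hat
    return param, m, v
```

In this sketch, the early fallback branch plays the role the paper attributes to warmup: it avoids taking adaptive steps while the variance of the adaptive learning rate is still problematically large.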

Authors (7)

Liyuan Liu
Haoming Jiang
Pengcheng He
Weizhu Chen
Xiaodong Liu
Jianfeng Gao
Jiawei Han

Citation Format

Liu, L., Jiang, H., He, P., Chen, W., Liu, X., Gao, J., & Han, J. (2019). On the Variance of the Adaptive Learning Rate and Beyond. https://www.semanticscholar.org/paper/2bf7c350a8280e7c593d46a60127f99b21517121

Quick Access

PDF not directly available
View at the original source →

Journal Information

Publication Year: 2019
Language: en
Total Citations: 2182
Source Database: Semantic Scholar
Access: Open Access ✓