Online Learning Rate Adaptation with Hypergradient Descent

A. G. Baydin R. Cornish David Martínez-Rubio Mark W. Schmidt Frank D. Wood

Abstract

We introduce a general method for improving the convergence rate of gradient-based optimizers that is easy to implement and works well in practice. We demonstrate the effectiveness of the method in a range of optimization problems by applying it to stochastic gradient descent, stochastic gradient descent with Nesterov momentum, and Adam, showing that it significantly reduces the need for the manual tuning of the initial learning rate for these commonly used algorithms. Our method works by dynamically updating the learning rate during optimization using the gradient with respect to the learning rate of the update rule itself. Computing this "hypergradient" needs little additional computation, requires only one extra copy of the original gradient to be stored in memory, and relies upon nothing more than what is provided by reverse-mode automatic differentiation.
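The abstract describes updating the learning rate with the gradient of the objective with respect to the learning rate itself, storing only one extra copy of the previous gradient. Below is a minimal sketch of that idea for plain SGD: differentiating the update θ_t = θ_{t-1} − α·g_{t-1} with respect to α gives a hypergradient equal to the dot product of successive gradients. The function name hypergradient_sgd, the hyper-learning-rate beta, and the default values are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def hypergradient_sgd(grad_fn, theta0, alpha0=0.001, beta=1e-4, num_steps=1000):
    """SGD whose learning rate alpha is itself adapted online.

    For the update theta_t = theta_{t-1} - alpha * g_{t-1}, the derivative of
    f(theta_t) with respect to alpha is -g_t . g_{t-1}, so descending on alpha
    gives alpha += beta * (g_t . g_{t-1}). Only the previous gradient is kept
    in memory. (Sketch under the assumptions stated above.)
    """
    theta = np.asarray(theta0, dtype=float)
    alpha = alpha0
    prev_grad = np.zeros_like(theta)        # the one extra stored gradient copy
    for _ in range(num_steps):
        grad = grad_fn(theta)
        alpha += beta * float(np.dot(grad, prev_grad))  # hypergradient step on alpha
        theta = theta - alpha * grad                    # ordinary SGD step on theta
        prev_grad = grad
    return theta, alpha

# Example: minimize the quadratic f(theta) = ||theta||^2 / 2, whose gradient is theta.
theta_star, alpha_final = hypergradient_sgd(lambda th: th, np.ones(10), alpha0=0.01)
```

The same construction applies to other base optimizers (e.g. SGD with Nesterov momentum or Adam) by differentiating their respective update rules with respect to the learning rate.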

Authors (5)

A. G. Baydin
R. Cornish
David Martínez-Rubio
Mark W. Schmidt
Frank D. Wood

Citation Format

Baydin, A.G., Cornish, R., Martínez-Rubio, D., Schmidt, M.W., Wood, F.D. (2017). Online Learning Rate Adaptation with Hypergradient Descent. https://www.semanticscholar.org/paper/512ca06114f5292f7d0b536ce030e319863c781a

Quick Access

PDF not directly available; check the original source.
Publication Information

Year Published: 2017
Language: en (English)
Total Citations: 275
Source Database: Semantic Scholar
Access: Open Access ✓