Semantic Scholar · Open Access · 2018 · 492 citations

A System for Massively Parallel Hyperparameter Tuning

Liam Li, Kevin G. Jamieson, A. Rostamizadeh, Ekaterina Gonina, Jonathan Ben-tzur, +3 more

Abstract

Modern learning models are characterized by large hyperparameter spaces and long training times. These properties, coupled with the rise of parallel computing and the growing demand to productionize machine learning workloads, motivate the need to develop mature hyperparameter optimization functionality in distributed computing settings. We address this challenge by first introducing a simple and robust hyperparameter optimization algorithm called ASHA, which exploits parallelism and aggressive early-stopping to tackle large-scale hyperparameter optimization problems. Our extensive empirical results show that ASHA outperforms existing state-of-the-art hyperparameter optimization methods; scales linearly with the number of workers in distributed settings; and is suitable for massive parallelism, as demonstrated on a task with 500 workers. We then describe several design decisions we encountered, along with our associated solutions, when integrating ASHA in Determined AI's end-to-end production-quality machine learning system that offers hyperparameter tuning as a service.
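The abstract attributes ASHA's scalability to combining aggressive early stopping (successive halving) with asynchronous promotion decisions. The sketch below is a minimal illustration of that promotion rule under stated assumptions: `get_config` and `evaluate` are hypothetical placeholders, the sequential loop stands in for workers pulling jobs, and the rung sizes are arbitrary. It is not the authors' implementation.

```python
import math
import random


def asha(get_config, evaluate, min_resource=1, max_resource=81,
         reduction_factor=3, num_trials=200):
    """Sketch of an ASHA-style promotion rule (asynchronous successive halving).

    Hypothetical helpers (assumptions, not from the paper):
      get_config()      -> returns a new hyperparameter configuration
      evaluate(cfg, r)  -> returns validation loss after training cfg with resource r
    In a real system each evaluate() call would run on whichever worker is free;
    this loop only illustrates the bookkeeping that decides what to run next.
    """
    num_rungs = int(math.log(max_resource / min_resource, reduction_factor)) + 1
    rungs = [[] for _ in range(num_rungs)]        # (loss, cfg) results finished at each rung
    promoted = [set() for _ in range(num_rungs)]  # ids of configs already promoted from a rung

    def next_job():
        # Promote the best not-yet-promoted config in the top 1/reduction_factor
        # of the highest rung that has one; otherwise start a new config at rung 0.
        for k in reversed(range(num_rungs - 1)):
            ranked = sorted(rungs[k], key=lambda e: e[0])
            for _, cfg in ranked[: len(ranked) // reduction_factor]:
                if id(cfg) not in promoted[k]:
                    promoted[k].add(id(cfg))
                    return cfg, k + 1
        return get_config(), 0

    for _ in range(num_trials):
        cfg, rung = next_job()
        resource = min_resource * reduction_factor ** rung
        rungs[rung].append((evaluate(cfg, resource), cfg))

    top = max(k for k in range(num_rungs) if rungs[k])
    return min(rungs[top], key=lambda e: e[0])    # best (loss, cfg) at the highest rung reached


if __name__ == "__main__":
    # Toy usage with a synthetic objective: noise shrinks as the training resource grows.
    random.seed(0)
    loss, cfg = asha(
        get_config=lambda: {"lr": 10 ** random.uniform(-4, -1)},
        evaluate=lambda c, r: (math.log10(c["lr"]) + 2.5) ** 2 + random.gauss(0, 1.0) / r,
        num_trials=300,
    )
    print(f"best loss {loss:.4f} with lr {cfg['lr']:.5f}")
```

Because a configuration is promoted as soon as it enters the top 1/η of its rung, an idle worker never has to wait for an entire rung to finish before getting its next job, which is the property the abstract points to when it reports linear scaling with the number of workers.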

Authors (8)

Liam Li
Kevin G. Jamieson
A. Rostamizadeh
Ekaterina Gonina
Jonathan Ben-tzur
Moritz Hardt
B. Recht
Ameet Talwalkar

Citation Format

Li, L., Jamieson, K.G., Rostamizadeh, A., Gonina, E., Ben-tzur, J., Hardt, M. et al. (2018). A System for Massively Parallel Hyperparameter Tuning. https://www.semanticscholar.org/paper/a2403c1ce02120f7bd383e395b561ff7c64d52ec

Quick Access

PDF not directly available

Check the original source →
View at Source
Journal Information
Publication Year
2018
Language
en
Total Citations
492×
Source Database
Semantic Scholar
Access
Open Access ✓