Semantic Scholar · Open Access · 2020 · 467 citations

Neural Architecture Search without Training

J. Mellor Jack Turner A. Storkey Elliot J. Crowley

Abstract

The time and effort involved in hand-designing deep neural networks is immense. This has prompted the development of Neural Architecture Search (NAS) techniques to automate this design. However, NAS algorithms tend to be extremely slow and expensive; they need to train vast numbers of candidate networks to inform the search process. This could be remedied if we could infer a network's trained accuracy from its initial state. In this work, we examine how the linear maps induced by data points correlate for untrained network architectures in the NAS-Bench-201 search space, and motivate how this can be used to give a measure of modelling flexibility which is highly indicative of a network's trained performance. We incorporate this measure into a simple algorithm that allows us to search for powerful networks without any training in a matter of seconds on a single GPU. Code to reproduce our experiments is available at this https URL.
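To make the abstract's idea concrete, below is a minimal, hypothetical sketch of scoring an untrained ReLU network by how differently it treats the inputs in a mini-batch: each input induces a binary activation pattern (which units fire), and the log-determinant of a similarity kernel over those patterns rewards architectures whose inputs fall into distinct linear regions. The network, widths, and kernel here are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu_codes(x, widths):
    """Forward a batch through a randomly initialised MLP and collect the
    binary ReLU activation pattern (which units are active) per input."""
    codes = []
    h = x
    for w in widths:
        # Random untrained weights, scaled for stable activations.
        W = rng.standard_normal((h.shape[1], w)) / np.sqrt(h.shape[1])
        pre = h @ W
        codes.append(pre > 0)            # binary code for this layer
        h = np.maximum(pre, 0)           # ReLU
    return np.concatenate(codes, axis=1)

def score(x, widths):
    """Log-determinant of an agreement kernel over activation patterns.
    Higher scores mean inputs induce more distinct linear maps."""
    c = relu_codes(x, widths).astype(float)
    # K[i, j] = number of units where inputs i and j agree (both on or both off)
    K = c @ c.T + (1 - c) @ (1 - c).T
    sign, logdet = np.linalg.slogdet(K)
    return logdet

batch = rng.standard_normal((16, 32))    # mini-batch of 16 random inputs
print(score(batch, widths=[64, 64, 64]))
```

Because the score needs only a single forward pass on a single mini-batch with no gradient updates, comparing many candidate architectures this way takes seconds rather than the GPU-days that training-based NAS requires.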

Authors (4)

J. Mellor
Jack Turner
A. Storkey
Elliot J. Crowley

Citation Format

Mellor, J., Turner, J., Storkey, A., Crowley, E.J. (2020). Neural Architecture Search without Training. https://www.semanticscholar.org/paper/25c371d565b387dbf22207a954a9549557698c21

Quick Access

PDF not available directly; check the original source.
Journal Information

Publication Year: 2020
Language: en
Total Citations: 467
Source Database: Semantic Scholar
Access: Open Access ✓