
Demystifying Parallel and Distributed Deep Learning

Tal Ben-Nun, T. Hoefler

Abstract

Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge and techniques range from distributed algorithms to low-level circuit design. In this survey, we describe the problem from a theoretical perspective, followed by approaches for its parallelization. We present trends in DNN architectures and the resulting implications on parallelization strategies. We then review and model the different types of concurrency in DNNs: from the single operator, through parallelism in network inference and training, to distributed deep learning. We discuss asynchronous stochastic optimization, distributed system architectures, communication schemes, and neural architecture search. Based on those approaches, we extrapolate potential directions for parallelism in deep learning.
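The abstract spans several parallelization strategies; as a concrete illustration of just one of them, below is a minimal sketch (not taken from the paper) of data-parallel synchronous SGD: a minibatch is split across workers, each worker computes a local gradient on its shard, and the gradients are averaged, as an allreduce would do, before every replica applies the same update. The worker count, toy linear model, and data here are hypothetical.

    # Minimal sketch of data-parallel synchronous SGD (illustrative, not from the paper).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 10))                  # toy minibatch: 64 samples, 10 features
    y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=64)
    w = np.zeros(10)                               # model replica shared by all workers
    workers, lr = 4, 0.1

    for step in range(100):
        shards_X = np.array_split(X, workers)      # partition the minibatch across workers
        shards_y = np.array_split(y, workers)
        grads = []
        for Xi, yi in zip(shards_X, shards_y):     # each "worker" computes a local gradient
            err = Xi @ w - yi
            grads.append(Xi.T @ err / len(yi))
        g = np.mean(grads, axis=0)                 # allreduce: average gradients across workers
        w -= lr * g                                # identical update applied on every replica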


Authors (2)

Tal Ben-Nun
T. Hoefler

Citation Format

Ben-Nun, T., Hoefler, T. (2018). Demystifying Parallel and Distributed Deep Learning. https://doi.org/10.1145/3320060

Journal Information

Year Published: 2018
Language: en
Total Citations: 783
Source Database: Semantic Scholar
DOI: 10.1145/3320060
Access: Open Access ✓