
SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning

Hanrui Wang Zhekai Zhang Song Han

Abstract

The attention mechanism is becoming increasingly popular in Natural Language Processing (NLP) applications, showing performance superior to convolutional and recurrent architectures. However, general-purpose platforms such as CPUs and GPUs are inefficient when performing attention inference due to complicated data movement and low arithmetic intensity. Moreover, existing NN accelerators mainly focus on optimizing convolutional or recurrent models and cannot efficiently support attention. In this paper, we present SpAtten, an efficient algorithm-architecture co-design that leverages token sparsity, head sparsity, and quantization opportunities to reduce attention computation and memory access. Inspired by the high redundancy of human languages, we propose novel cascade token pruning to prune away unimportant tokens in the sentence. We also propose cascade head pruning to remove unessential heads. Cascade pruning is fundamentally different from weight pruning, since there are no trainable weights in the attention mechanism and the pruned tokens and heads are selected on the fly. To support them efficiently in hardware, we design a novel top-k engine to rank token and head importance scores with high throughput. Furthermore, we propose progressive quantization, which first fetches only the MSBs and performs the computation; if the confidence is low, it fetches the LSBs and recomputes the attention outputs, trading computation for memory reduction. Extensive experiments on 30 benchmarks show that, on average, SpAtten reduces DRAM access by 10.0× with no accuracy loss, and achieves 1.6×, 3.0×, 162×, and 347× speedup, and 1.4×, 3.2×, 1193×, and 4059× energy savings over the A3 accelerator, MNNFast accelerator, TITAN Xp GPU, and Xeon CPU, respectively.
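The abstract's two core mechanisms can be illustrated in a few lines. Below is a minimal PyTorch sketch of cascade token pruning: token importance is accumulated from the attention probabilities each token receives as a key, and a top-k selection keeps only the most important tokens for later layers. The function name, the keep_ratio parameter, and the exact accumulation rule are illustrative assumptions, not the paper's hardware design.

```python
import torch

def cascade_token_prune(attn_probs, cum_scores, keep_ratio=0.5):
    """One cascade pruning step (illustrative sketch, not the accelerator).

    attn_probs: (heads, L, L) softmax attention of the current layer.
    cum_scores: (L,) token importance accumulated over earlier layers.
    keep_ratio: assumed fraction of tokens to retain.
    """
    # A token's importance is the total attention it receives as a key,
    # summed over all heads and all query positions.
    cum_scores = cum_scores + attn_probs.sum(dim=(0, 1))
    k = max(1, int(keep_ratio * cum_scores.numel()))
    # Rank with top-k (the role of SpAtten's top-k engine) and keep the
    # surviving indices in sequence order.
    keep_idx = torch.topk(cum_scores, k).indices.sort().values
    return keep_idx, cum_scores[keep_idx]
```

Subsequent layers would index Q, K, and V with keep_idx, so pruned tokens never re-enter the computation; because pruning is cascaded, a token removed once stays removed in all later layers.

Progressive quantization can be sketched in the same spirit: compute attention from MSB-only operands first, and fetch the LSBs only when the resulting softmax is not confident. The confidence test and the MSB+LSB reconstruction below are assumptions for illustration.

```python
def progressive_attention(q_msb, k_msb, q_lsb, k_lsb, scale, thresh=0.1):
    # First pass: MSB-only operands, i.e. a fraction of the DRAM traffic.
    probs = torch.softmax((q_msb @ k_msb.T) * scale, dim=-1)
    # Assumed confidence heuristic: if every row's softmax is peaked
    # enough, trust the MSB-only result and never fetch the LSBs.
    if probs.max(dim=-1).values.min() >= thresh:
        return probs
    # Low confidence: fetch LSBs, rebuild higher-precision operands,
    # and recompute, trading extra computation for memory reduction.
    return torch.softmax(((q_msb + q_lsb) @ (k_msb + k_lsb).T) * scale,
                         dim=-1)
```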


Authors (3)

Hanrui Wang

Zhekai Zhang

Song Han

Citation Format

Wang, H., Zhang, Z., & Han, S. (2020). SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning. https://doi.org/10.1109/HPCA51647.2021.00018


Journal Information
Publication Year
2020
Language
en
Total Citations
521
Source Database
Semantic Scholar
DOI
10.1109/HPCA51647.2021.00018
Access
Open Access ✓