
Low-Cost FlashAttention with Fused Exponential and Multiplication Hardware Operators

Kosmas Alexandridis, Vasileios Titopoulos, Giorgos Dimitrakopoulos

Abstract

Attention mechanisms, particularly within Transformer architectures and large language models (LLMs), have revolutionized sequence modeling in machine learning and artificial intelligence applications. To compute attention for increasingly long sequences, specialized accelerators have been proposed to execute key attention steps directly in hardware. Among the various recently proposed architectures, those based on variants of the FlashAttention algorithm, originally designed for GPUs, stand out due to their optimized computation, tiling capabilities, and reduced memory traffic. In this work, we focus on optimizing the kernel of floating-point-based FlashAttention using new hardware operators that fuse the computation of exponentials and vector multiplications, e.g., e^x · V. The proposed ExpMul hardware operators significantly reduce the area and power costs of FlashAttention-based hardware accelerators. When implemented in a 28nm ASIC technology, they achieve improvements of 28.8% in area and 17.6% in power, on average, compared to state-of-the-art hardware architectures with separate exponential and vector-multiplication hardware operators.
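For context on where such a fused e^x · V operator sits in the computation, the sketch below shows a tiled FlashAttention pass with an online softmax in plain NumPy. It is a minimal illustration of the algorithm the abstract refers to, not the paper's hardware design: the function name, tile size, and variable names are our own, and the "fused" step is simply the exponentiation and the multiplication by V kept adjacent in software, standing in for the ExpMul operator the paper implements in RTL.

```python
import numpy as np

def flash_attention(Q, K, V, tile=32):
    """Tiled attention with an online softmax (illustrative sketch).

    The exp-and-multiply step is kept together to mirror where a
    fused ExpMul hardware operator would act; this is NOT the
    paper's implementation.
    """
    n, d = Q.shape
    O = np.zeros((n, d))        # running (unnormalized) output
    m = np.full(n, -np.inf)     # running row maxima
    l = np.zeros(n)             # running softmax denominators
    for j in range(0, n, tile):
        Kj, Vj = K[j:j + tile], V[j:j + tile]
        S = Q @ Kj.T                          # tile of attention scores
        m_new = np.maximum(m, S.max(axis=1))  # updated row maxima
        # The two lines below are the e^x ... and ... * V steps that a
        # fused ExpMul operator would compute in one hardware pass.
        P = np.exp(S - m_new[:, None])        # e^x on shifted scores
        O = np.exp(m - m_new)[:, None] * O + P @ Vj   # rescale, then * V
        l = np.exp(m - m_new) * l + P.sum(axis=1)     # update denominators
        m = m_new
    return O / l[:, None]                     # final softmax normalization
```

The online-softmax rescaling (the exp(m - m_new) factors) is what lets each tile be processed once without storing the full n-by-n score matrix, which is the memory-traffic advantage of FlashAttention that the abstract mentions.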


Authors (3)

Kosmas Alexandridis
Vasileios Titopoulos
Giorgos Dimitrakopoulos

Citation Format

Alexandridis, K., Titopoulos, V., & Dimitrakopoulos, G. (2025). Low-Cost FlashAttention with Fused Exponential and Multiplication Hardware Operators. arXiv:2505.14314. https://arxiv.org/abs/2505.14314

Journal Information
Year Published: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓