
Reducing Compute Waste in LLMs through Kernel-Level DVFS

Jeffrey Spaan Kuan-Hsun Chen Ana-Lucia Varbanescu

Abstract

The rapid growth of AI has fueled the expansion of accelerator- or GPU-based data centers. However, the rising operational energy consumption has emerged as a critical bottleneck and a major sustainability concern. Dynamic Voltage and Frequency Scaling (DVFS) is a well-known technique for reducing energy consumption, and thus improving energy efficiency, since it requires little effort and works with existing hardware. Reducing the energy consumption of training and inference of Large Language Models (LLMs) through DVFS or power capping is feasible: related work has shown that energy savings can be significant, but at the cost of significant slowdowns. In this work, we focus on reducing waste in LLM operations, i.e., reducing energy consumption without losing performance. We propose a fine-grained, kernel-level DVFS approach that explores new frequency configurations, and show that these save more energy than previous pass- or iteration-level solutions. For example, for a GPT-3 training run, a pass-level approach can reduce energy consumption by 2% (without losing performance), while our kernel-level approach saves as much as 14.6% (with a 0.6% slowdown). We further investigate the effect of data and tensor parallelism, and show that our discovered clock frequencies translate well to both. We conclude that kernel-level DVFS is a suitable technique to reduce waste in LLM operations, providing significant energy savings with negligible slowdown.
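The kernel-level idea can be illustrated with a small sketch: lock the GPU core clock via NVML before a given class of kernels runs and restore the default afterwards. The sketch below is hypothetical and not the authors' implementation; the kernel names and the frequencies in KERNEL_FREQ_MHZ are made-up placeholders, and locking clocks through NVML typically requires elevated privileges.

    # Illustrative sketch only: per-kernel GPU clock locking with NVML.
    # Not the paper's implementation; frequencies below are placeholders.
    import pynvml
    import torch

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    # Hypothetical per-kernel-class frequency table (MHz), e.g. obtained by
    # profiling each kernel class offline at several locked clocks.
    KERNEL_FREQ_MHZ = {
        "gemm": 1410,       # compute-bound: keep the clock high
        "layernorm": 960,   # memory-bound: a lower clock costs little time
        "softmax": 960,
    }

    def run_at(freq_mhz, fn, *args, **kwargs):
        """Run fn with the GPU core clock locked to freq_mhz, then reset."""
        pynvml.nvmlDeviceSetGpuLockedClocks(handle, freq_mhz, freq_mhz)
        try:
            out = fn(*args, **kwargs)
            torch.cuda.synchronize()  # ensure the kernel ran under this clock
            return out
        finally:
            pynvml.nvmlDeviceResetGpuLockedClocks(handle)

    # Usage: run a large matrix multiply at the "gemm" frequency.
    a = torch.randn(4096, 4096, device="cuda")
    b = torch.randn(4096, 4096, device="cuda")
    c = run_at(KERNEL_FREQ_MHZ["gemm"], torch.matmul, a, b)

In practice, switching clocks between kernels has a non-negligible latency, so the per-kernel frequency assignment has to balance the energy saved against the cost of frequent transitions.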


Authors (3)

Jeffrey Spaan
Kuan-Hsun Chen
Ana-Lucia Varbanescu

Citation

Spaan, J., Chen, K.-H., & Varbanescu, A.-L. (2026). Reducing Compute Waste in LLMs through Kernel-Level DVFS. arXiv. https://arxiv.org/abs/2601.08539

Publication Information

Publication Year: 2026
Language: English
Source Database: arXiv
Access: Open Access ✓