arXiv Open Access 2025

Pinpoint resource allocation for GPU batch applications

Tim Voigtländer Manuel Giffels Günter Quast Matthias Schnepf Roger Wolf

Abstract

With the increasing use of Machine Learning (ML) in High Energy Physics (HEP), a variety of new analyses has emerged with a large spread in compute resource requirements, especially for GPU resources. For institutes like the Karlsruhe Institute of Technology (KIT) that provide GPU compute resources to HEP via their batch systems or the Grid, high throughput as well as energy-efficient usage of their systems is essential. Low-intensity GPU analyses in particular create inefficiencies under standard scheduling, as resources are over-assigned to such workflows. An approach flexible enough to cover the entire spectrum, from multiple processes per GPU to multiple GPUs per process, is necessary. As a follow-up to the techniques presented at ACAT 2022, we study NVIDIA's Multi-Process Service (MPS), its ability to securely distribute device memory, and its interplay with the KIT HTCondor batch system. A number of ML applications were benchmarked with this approach to illustrate the performance implications in terms of throughput and energy efficiency.
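The memory-distribution mechanism the abstract refers to can be sketched as a shell configuration. The `nvidia-cuda-mps-control` daemon and the `CUDA_MPS_PINNED_DEVICE_MEM_LIMIT` variable are part of NVIDIA's documented MPS interface (the per-client memory limit requires a Volta-or-newer GPU and a recent CUDA release); the directory paths and the 8 GB limit below are illustrative assumptions, not values from the paper.

```shell
# Illustrative sketch: start an MPS control daemon and cap the device
# memory each MPS client may allocate, so several batch jobs can share
# one GPU without one job exhausting its memory.
export CUDA_VISIBLE_DEVICES=0
export CUDA_MPS_PIPE_DIRECTORY=/tmp/mps-pipe   # example path
export CUDA_MPS_LOG_DIRECTORY=/tmp/mps-log    # example path
mkdir -p "$CUDA_MPS_PIPE_DIRECTORY" "$CUDA_MPS_LOG_DIRECTORY"

# Limit each client on device 0 to 8 GB of device memory (example value).
export CUDA_MPS_PINNED_DEVICE_MEM_LIMIT="0=8G"

nvidia-cuda-mps-control -d   # start the MPS control daemon

# ... launch GPU batch jobs here; they attach to MPS via the pipe directory ...

echo quit | nvidia-cuda-mps-control   # shut the daemon down
```

In a batch-system setting, a wrapper like this would typically run per GPU slot, with the limit chosen from the job's requested device memory rather than hard-coded.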


Authors (5)

Tim Voigtländer
Manuel Giffels
Günter Quast
Matthias Schnepf
Roger Wolf

Citation Format

Voigtländer, T., Giffels, M., Quast, G., Schnepf, M., Wolf, R. (2025). Pinpoint resource allocation for GPU batch applications. https://arxiv.org/abs/2505.08562

Journal Information
Year Published: 2025
Language: en
Source Database: arXiv
Access: Open Access ✓