DOAJ Open Access 2025

MQ-GNN: A Multi-Queue Pipelined Architecture for Scalable and Efficient GNN Training

Irfan Ullah and Young-Koo Lee

Abstract

Graph Neural Networks (GNNs) are powerful tools for learning graph-structured data, but their scalability is hindered by inefficient mini-batch generation, data transfer bottlenecks, and costly inter-GPU synchronization. Existing training frameworks fail to overlap these stages, leading to suboptimal resource utilization. This paper proposes MQ-GNN, a multi-queue pipelined framework that maximizes training efficiency by interleaving GNN training stages and optimizing resource utilization. MQ-GNN introduces the Ready-to-Update Asynchronous Consistent Model (RaCoM), which enables asynchronous gradient sharing and model updates while ensuring global consistency through adaptive periodic synchronization. Additionally, it employs global neighbor sampling with caching to reduce data transfer overhead and an adaptive queue-sizing strategy to balance computation and memory efficiency. Experiments on four large-scale datasets and ten baseline models demonstrate that MQ-GNN achieves up to 4.6× faster training time and 30% improved GPU utilization while maintaining competitive accuracy. These results establish MQ-GNN as a scalable and efficient solution for multi-GPU GNN training. The code is available at MQ-GNN.
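The core idea behind RaCoM, as the abstract describes it, is that workers update their model replicas asynchronously and are only reconciled through periodic synchronization. The following is a minimal single-process sketch of that general pattern; the function names, the toy loss, and the fixed synchronization period are illustrative assumptions, not the authors' actual algorithm.

```python
# Toy illustration of asynchronous local updates with periodic
# parameter averaging (the general pattern behind schemes like RaCoM).
# Everything here is a simplifying assumption: one scalar parameter
# per worker, a quadratic loss, and a fixed sync period.

def local_grad(param, target):
    """Gradient of the toy loss 0.5 * (param - target) ** 2."""
    return param - target

def train(num_workers=4, steps=100, lr=0.1, sync_period=10):
    # Each worker starts from a different point, mimicking replicas
    # that drift apart between synchronizations.
    params = [float(i) for i in range(num_workers)]
    targets = [2.0] * num_workers  # shared optimum for the toy problem

    for step in range(1, steps + 1):
        # Asynchronous phase: every worker updates its own replica
        # without waiting on the others.
        for w in range(num_workers):
            params[w] -= lr * local_grad(params[w], targets[w])

        # Periodic synchronization: average the replicas so they
        # remain globally consistent.
        if step % sync_period == 0:
            avg = sum(params) / num_workers
            params = [avg] * num_workers

    return params
```

In the real system this averaging would happen across GPUs (e.g. via an all-reduce), and the paper's synchronization period is adaptive rather than fixed; the sketch only shows why periodic averaging keeps asynchronously updated replicas consistent.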

Authors (2)

Irfan Ullah

Young-Koo Lee

Citation Format

Ullah, I., & Lee, Y.-K. (2025). MQ-GNN: A Multi-Queue Pipelined Architecture for Scalable and Efficient GNN Training. IEEE Access. https://doi.org/10.1109/ACCESS.2025.3539976

Journal Information
Year of Publication
2025
Source Database
DOAJ
DOI
10.1109/ACCESS.2025.3539976
Access
Open Access ✓