DOAJ Open Access 2025

CUBIC-Learn: A Reinforcement Learning Approach to CUBIC Congestion Control

Ehsan Abedini Mohsen Nickray

Abstract

Managing congestion effectively enables reliable and fast data transfer over networks. CUBIC delivers reliable results under normal circumstances but cannot adapt effectively to changing network scenarios. We introduce CUBIC-Learn, an RL approach for improving congestion control in CUBIC. The central idea is to use a Q-learning algorithm to adjust congestion window thresholds based on real-time measurements of packet loss, throughput, and latency. Simulations demonstrate more efficient and reliable congestion control when using CUBIC-Learn compared to standard CUBIC. CUBIC-Learn achieves a 47% reduction in packet loss, over a 59% increase in bandwidth utilization, approximately a 28% decrease in retransmissions, and 47% lower latency. In addition, CUBIC-Learn shows significant improvements in congestion window (cwnd) growth behavior, fairness among competing flows, and stability under heterogeneous traffic and network scenarios, including gigabit-scale bandwidth conditions. Statistical analysis further confirms the robustness of these gains, while the method introduces no additional computational overhead. Overall, CUBIC-Learn performs better than PCC, Reno, Tahoe, NewReno, and BBRv3 in most metrics. These findings suggest that RL can markedly improve congestion control in high-speed networks. [JJCIT 2025; 11(4): 466-483]
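The central idea described above — a tabular Q-learning agent that nudges the congestion window threshold from observed loss, throughput, and latency — can be sketched as follows. This is a minimal illustrative sketch only: the abstract does not specify the paper's actual state discretization, action set, reward function, or hyperparameters, so every name and value below is an assumption.

```python
import random

class CubicLearnAgent:
    """Tabular Q-learning agent that nudges a CUBIC-style cwnd threshold.

    Illustrative only: state buckets, actions, and parameters are assumed,
    not taken from the CUBIC-Learn paper.
    """

    ACTIONS = (-1, 0, +1)  # decrease / keep / increase the threshold step

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {}  # maps (state, action) -> estimated value

    @staticmethod
    def discretize(loss_rate, throughput_frac, latency_ms):
        # Bucket continuous link measurements into a small discrete state.
        return (
            min(int(loss_rate * 100), 9),       # loss buckets, 0-9
            min(int(throughput_frac * 10), 9),  # utilization buckets, 0-9
            min(int(latency_ms // 10), 9),      # latency buckets, 0-9
        )

    def choose(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice(self.ACTIONS)
        return max(self.ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update rule.
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

def reward(loss_rate, throughput_frac, latency_ms):
    # Assumed reward shape: favor throughput, penalize loss and delay.
    return throughput_frac - 10 * loss_rate - 0.01 * latency_ms
```

In use, the agent would observe the link once per interval, pick an action with `choose`, apply the cwnd-threshold adjustment, then call `update` with the resulting reward — a plausible loop given the abstract, not the authors' published design.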

Authors (2)

Ehsan Abedini

Mohsen Nickray

Citation Format

Abedini, E., & Nickray, M. (2025). CUBIC-Learn: A Reinforcement Learning Approach to CUBIC Congestion Control. https://doi.org/10.5455/jjcit.71-1748057293

Journal Information
Publication Year
2025
Source Database
DOAJ
DOI
10.5455/jjcit.71-1748057293
Access
Open Access ✓