A Reinforcement Learning-Based Intelligent Duty Cycle MAC Protocol for Internet of Things
Abstract
Wireless Sensor Network (WSN)-enabled Internet of Things (IoT) applications face an energy efficiency challenge due to the limited battery capacity of sensor nodes. Hence, the network's performance often involves a tradeoff with network lifetime. Traditional medium access control (MAC) protocols are less adaptable to dynamic network conditions. While existing reinforcement learning (RL)-based MAC protocols are more adaptable, they still encounter challenges such as complexity and dimensionality. Therefore, this work develops an RL-based intelligent Duty cycle MAC (RiD-MAC) protocol that incorporates suitable network information to balance complexity and performance effectively. The proposed RiD-MAC protocol is based on the Q-learning algorithm, designed with remaining energy as the state space and duty cycle as the action space; the reward is formulated from energy consumption and throughput. It is implemented in the OMNeT++-based Castalia simulator, and its performance is compared with three state-of-the-art protocols (AQSen-MAC, rlDC-MAC and QX-MAC) under three simulation scenarios: stationary nodes with periodic traffic, hybrid traffic, and node mobility. The simulation results demonstrate that the RiD-MAC protocol significantly improves energy efficiency, reducing receiver energy consumption by up to 21% and receiver energy consumption per bit by up to 26% compared to the state-of-the-art protocols.
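The abstract's Q-learning formulation (state = remaining energy, action = duty cycle, reward combining throughput and energy consumption) can be sketched as follows. This is a minimal illustrative sketch only: the energy discretization, candidate duty cycles, learning hyperparameters, and reward weights below are assumptions not specified in the abstract, not the paper's actual parameters.

```python
import random

# Assumed discretization (illustrative, not from the paper):
ENERGY_STATES = 5                          # remaining-energy levels (state space)
DUTY_CYCLES = [0.05, 0.1, 0.2, 0.4, 0.8]   # candidate duty cycles (action space)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1      # assumed learning-rate / discount / exploration
W_TPUT, W_ENERGY = 1.0, 1.0                # assumed reward weights

# Q-table: rows = energy states, columns = duty-cycle actions.
Q = [[0.0] * len(DUTY_CYCLES) for _ in range(ENERGY_STATES)]

def choose_action(state):
    """Epsilon-greedy selection of a duty cycle for the given energy state."""
    if random.random() < EPSILON:
        return random.randrange(len(DUTY_CYCLES))
    row = Q[state]
    return row.index(max(row))

def reward(throughput, energy_consumed):
    """Reward rises with throughput and falls with energy consumption."""
    return W_TPUT * throughput - W_ENERGY * energy_consumed

def update(state, action, r, next_state):
    """Standard Q-learning update toward the observed reward."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (r + GAMMA * best_next - Q[state][action])
```

In use, a node would observe its remaining-energy level each decision epoch, pick a duty cycle via `choose_action`, then call `update` with the reward measured over that epoch.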
Topics & Keywords
Authors (6)
Shah Abdul Latif
Micheal Drieberg
Sohail Sarang
Azrina Abd Aziz
Rizwan Ahmad
Goran M. Stojanovic
- Year Published: 2025
- Source Database: DOAJ
- DOI: 10.1109/ACCESS.2025.3606053
- Access: Open Access ✓