IEEE ACCESS, vol.14, pp.23598-23623, 2026 (SCI-Expanded, Scopus)
Many wireless Internet of Things (IoT) networks consist of low-power nodes and have a limited coverage range. New protocol suites are being developed to ensure stability and reliability in such low-power and lossy networks. Towards this goal, the RPL (IPv6 Routing Protocol for Low-Power and Lossy Networks) protocol was developed. Due to communication failures in the network, nodes in an RPL network may have to switch their preferred parents regularly to guarantee a certain level of quality of service. However, these parent switches incur extra signaling overhead because the RPL network must be reconfigured. This behavior poses a greater challenge in networks running IETF 6TiSCH (An Architecture for IPv6 over the Time-Slotted Channel Hopping Mode of IEEE 802.15.4), since the nodes need to re-allocate communication resources to their parents during each parent switch. In this study, a new RPL objective function called RL-OF is proposed for the IETF 6TiSCH protocol stack. RL-OF employs a lightweight reinforcement learning-inspired, reward-penalty-based mechanism to adaptively regulate parent switching decisions. The main motivation of the proposed objective function is to reduce the number of unnecessary parent switches and thereby lower the overheads in 6TiSCH networks caused by the reallocation of resources during the parent switching process. Our tests show that RL-OF significantly improves key performance metrics of 6TiSCH networks, namely Packet Delivery Ratio and MAC-layer packet drops, compared to networks utilizing state-of-the-art objective functions. Real-world testing on a 141-node 6TiSCH network further validates these findings.
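To illustrate the kind of reward-penalty mechanism the abstract describes, the following is a minimal sketch of a parent selector that rewards successful transmissions, penalizes failures, and only switches parents when a candidate clearly outperforms the current one. All names, constants, and update rules here are illustrative assumptions; the abstract does not specify RL-OF's actual scoring or thresholds.

```python
# Hypothetical sketch of a reward-penalty parent-switch regulator.
# The constants and the hysteresis rule below are illustrative assumptions,
# not the paper's actual RL-OF design.

REWARD = 1.0         # score increase after a successful transmission
PENALTY = 2.0        # score decrease after a failed transmission
SWITCH_MARGIN = 3.0  # hysteresis: a candidate must beat the current
                     # parent's score by this margin to trigger a switch

class ParentSelector:
    def __init__(self):
        self.scores = {}     # candidate parent id -> running score
        self.current = None  # currently preferred parent

    def record(self, parent, success):
        """Update a candidate parent's score after a transmission attempt."""
        delta = REWARD if success else -PENALTY
        self.scores[parent] = self.scores.get(parent, 0.0) + delta

    def preferred_parent(self):
        """Return the preferred parent, switching only when clearly worthwhile."""
        if not self.scores:
            return self.current
        best = max(self.scores, key=self.scores.get)
        if self.current is None:
            self.current = best
        elif self.scores[best] > self.scores.get(self.current, 0.0) + SWITCH_MARGIN:
            # Candidate outperforms current parent by the margin: switch.
            # Otherwise keep the current parent to avoid resource reallocation.
            self.current = best
        return self.current
```

The hysteresis margin is the key idea: small, transient score differences between parents do not trigger a switch, which in a 6TiSCH setting would avoid repeated cell reallocation for marginal gains.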