Adaptive Routing in Wireless Mesh Networks Using Hybrid Reinforcement Learning Algorithm

Abstract

Wireless mesh networks are popular due to their adaptability, ease of setup, flexibility, low cost, and reduced transmission time. The routing algorithm plays a vital role in transferring data between nodes, and the network's performance is significantly affected by the route the algorithm selects. A router decides which neighbouring router should receive a packet according to the policy of its routing algorithm, so even when that decision does not lead to the best path, the router still follows its policy. This can be avoided by equipping routers with the intelligence to make routing decisions on the fly. This paper presents the QL-Feed Forward Routing algorithm (QFFR), a new generation of routing algorithm that combines reinforcement learning based on Q-learning with a feed-forward neural network. QFFR learns from the network environment and makes routing decisions based on what it has learned. The working of the proposed QFFR algorithm demonstrates the AI agent's ability to select the fastest path, which improves the efficiency of the routing operation. The paper also evaluates the performance of traditional algorithms, namely Ad hoc On-Demand Distance Vector, Optimized Link State Routing, Destination-Sequenced Distance-Vector, and Dynamic Source Routing. The evaluation parameters are throughput, packet delivery ratio, and delay, all derived from the time information takes to travel from source to destination. The analysis highlights the improvement in a router's routing-decision ability. According to the analysis, the Ad hoc On-Demand Distance Vector algorithm performs best among the traditional protocols, with a throughput of 723.13 Kbps and a delay of 343.73 ns. The Q-learning agent identifies the route and reaches the destination in an average of 3.7 s in the non-grid architecture, in 0.49 s on a ten-by-ten grid, and in 0.53 s on a three-by-four grid. The proposed QFFR achieves a score-over-time of 7.62 s with stable, consistent performance.
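
To make the learning mechanism behind this kind of routing concrete, the sketch below shows a plain tabular Q-learning agent choosing next hops on a small, hypothetical mesh topology. It illustrates only the Q-learning half of the approach described in the abstract; the topology, reward values, and hyperparameters are illustrative assumptions rather than details from the paper, and the feed-forward network component of QFFR is not reproduced here.

```python
import random

# Hypothetical mesh topology: node -> list of neighbouring nodes.
# Topology, rewards, and hyperparameters are illustrative assumptions,
# not values taken from the paper.
TOPOLOGY = {
    0: [1, 2],
    1: [0, 2, 3],
    2: [0, 1, 4],
    3: [1, 4, 5],
    4: [2, 3, 5],
    5: [3, 4],            # destination node
}
DEST = 5
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2    # learning rate, discount, exploration

# Q[node][next_hop] estimates the long-term value of forwarding to next_hop.
Q = {n: {nb: 0.0 for nb in nbrs} for n, nbrs in TOPOLOGY.items()}

def choose_next_hop(node):
    """Epsilon-greedy next-hop selection at a router."""
    if random.random() < EPSILON:
        return random.choice(TOPOLOGY[node])
    return max(Q[node], key=Q[node].get)

def train(episodes=500):
    """Let the agent repeatedly route a packet from node 0 to DEST."""
    for _ in range(episodes):
        node = 0                              # source router
        while node != DEST:
            nxt = choose_next_hop(node)
            # Reward: small per-hop cost, bonus on reaching the destination.
            reward = 10.0 if nxt == DEST else -1.0
            best_next = 0.0 if nxt == DEST else max(Q[nxt].values())
            # Standard Q-learning update rule.
            Q[node][nxt] += ALPHA * (reward + GAMMA * best_next - Q[node][nxt])
            node = nxt

def greedy_route(src, max_hops=10):
    """Follow the learned policy from src to DEST (bounded for safety)."""
    path, node = [src], src
    while node != DEST and len(path) <= max_hops:
        node = max(Q[node], key=Q[node].get)
        path.append(node)
    return path

if __name__ == "__main__":
    train()
    print("Learned route:", greedy_route(0))
```

In a hybrid scheme such as the one the paper describes, the explicit Q-table above would be complemented or approximated by a feed-forward neural network, allowing the router to generalise across network states instead of storing a value for every node and neighbour pair.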

Citation (APA)

Mahajan, S., Harikrishnan, R., & Kotecha, K. (2022). Adaptive Routing in Wireless Mesh Networks Using Hybrid Reinforcement Learning Algorithm. IEEE Access, 10, 107961–107979. https://doi.org/10.1109/ACCESS.2022.3210993
