A UAV is a small hovering machine that can be remotely guided or flown autonomously through software-controlled flight plans in its embedded systems, operating in combination with onboard sensors and GPS. UAVs demonstrated their versatility during the COVID-19 pandemic, when medicines and personal protective equipment were airlifted to remote locations. In the future, transporting drugs by UAV promises to be cost-effective and efficient. However, relying on human operators for these UAVs demands considerable time and investment in training. Most UAVs use GPS to travel from a start point to a destination, and the growing number of UAVs in the airspace creates a need for a drone traffic management system to mitigate collision risk; such a system in turn demands human experts and heavy expenditure. To overcome this challenge, we propose to model autonomous UAV navigation using infrastructure already present along highways, such as bike lanes and walking lanes. This research suggests a framework that combines reinforcement learning with GPS waypoints to allow a UAV to fly from an origin to a destination by following the bike lanes on the roads.
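As a toy illustration of the idea (not the authors' implementation), the sketch below trains a tabular Q-learning agent to reach a goal cell in a small grid, where the goal cell stands in for the next GPS waypoint along a bike lane. The grid size, reward values, and hyperparameters are all illustrative assumptions.

```python
import random

GRID = 5                                       # toy 5x5 grid world
WAYPOINT = (4, 4)                              # stand-in for the next GPS waypoint
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # four compass moves

def step(state, action):
    """Apply a move (clamped to the grid); +10 at the waypoint, -1 per step."""
    x = min(max(state[0] + action[0], 0), GRID - 1)
    y = min(max(state[1] + action[1], 0), GRID - 1)
    nxt = (x, y)
    done = nxt == WAYPOINT
    return nxt, (10.0 if done else -1.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with an epsilon-greedy behaviour policy."""
    random.seed(seed)
    q = {}  # (state, action_index) -> estimated return
    for _ in range(episodes):
        s, done = (0, 0), False
        while not done:
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
            nxt, r, done = step(s, ACTIONS[a])
            best_next = max(q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = nxt
    return q

def greedy_path(q, start=(0, 0), max_steps=50):
    """Follow the learned policy greedily from start toward the waypoint."""
    s, path = start, [start]
    for _ in range(max_steps):
        if s == WAYPOINT:
            break
        a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
        s, _, _ = step(s, ACTIONS[a])
        path.append(s)
    return path
```

In a real deployment the discrete goal cell would be replaced by successive GPS waypoints sampled along the bike lane, and the tabular table by a function approximator over sensor inputs; this sketch only shows the core learning loop.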
CITATION STYLE
Jacob, B., Kaushik, A., & Velavan, P. (2022). Autonomous Navigation of Drones Using Reinforcement Learning. In Studies in Computational Intelligence (Vol. 998, pp. 159–176). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-16-7220-0_10