Deep Learning-Based Visual Navigation Algorithms for Mobile Robots: A Comprehensive Study

Abstract

This research addresses the challenge of enabling mobile robots to navigate complex environments efficiently. A novel deep learning approach, the Neo model, is proposed. The method combines Split Attention with the ResNeSt50 backbone network to improve the recognition accuracy of key features in observed images, and refines the loss calculation to improve navigation accuracy across different scenarios. Evaluations on the AI2-THOR and Active Vision datasets show that the improved model achieves a higher average navigation accuracy (92.3%) in scene 4 than competing methods. The navigation success rate reached 36.8%, accompanied by a 50% reduction in trajectory length. Compared with HAUSR and LSTM-Nav, the method also reduced the collision rate to 0.01 and cut time consumption by more than 8 seconds. By addressing the accuracy, speed, and generalization of navigation models, the research makes significant advances toward intelligent autonomous robots.

ACM CCS (2012) Classification: Computing methodologies → Machine learning → Machine learning algorithms; Artificial intelligence → Computer vision → Vision for robotics.
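The Split Attention mechanism mentioned above originates in the ResNeSt architecture: features from several parallel convolution branches (the "radix" splits) are recombined using softmax attention weights computed from globally pooled statistics. The following is a minimal NumPy sketch of that core idea only; the learned fully connected layers that produce the attention logits in the real ResNeSt block are replaced here by a hypothetical placeholder, so this illustrates the weighting scheme, not the paper's actual implementation.

```python
import numpy as np

def split_attention(x, eps=1e-12):
    """Simplified radix-softmax split attention.

    x: array of shape (radix, C, H, W) — feature maps from `radix`
       parallel convolution branches over the same C channels.
    Returns a single (C, H, W) map: a per-channel convex combination
    of the splits.
    """
    radix = x.shape[0]
    # Global pooling of the summed splits gives per-channel statistics.
    gap = x.sum(axis=0).mean(axis=(1, 2))  # shape (C,)
    # Placeholder for the learned dense layers: derive one logit vector
    # per split from the pooled features (hypothetical, for illustration).
    logits = np.stack([gap * (i + 1) for i in range(radix)])  # (radix, C)
    # Radix-softmax: attention weights over the splits, per channel.
    exp = np.exp(logits - logits.max(axis=0, keepdims=True))
    att = exp / (exp.sum(axis=0, keepdims=True) + eps)  # (radix, C)
    # Weighted sum of the splits, broadcast over spatial dimensions.
    return (att[:, :, None, None] * x).sum(axis=0)  # (C, H, W)
```

Because the attention weights form a convex combination over the splits, each output value lies between the per-position minimum and maximum of the input branches.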

Citation
Yu, W., & Tian, X. (2022). Deep Learning-Based Visual Navigation Algorithms for Mobile Robots: A Comprehensive Study. Journal of Computing and Information Technology, 30(4), 257–273. https://doi.org/10.20532/cit.2022.1005689
