This paper presents a hardware and software architecture for indoor navigation of unmanned ground vehicles. It covers the complete process, from capturing camera input to steering the vehicle in the desired direction. Images from a single front-facing camera serve as input. We prepared our own dataset of the indoor environment to generate training data for the network. For training, each image is labeled with a steering direction: left, right, forward, or reverse. The trained convolutional neural network (CNN) model then predicts the direction in which to steer. The model passes this output direction to the microprocessor, which in turn controls the motors to traverse in that direction. With a minimal amount of training data and training time, highly accurate results were obtained, both in simulation and in hardware testing. After training, the model learned on its own to stay within the corridor boundaries and to identify any immediate obstruction that might appear. The system operates at 2 fps. A MacBook Air was used both for training and for making real-time predictions.
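The camera-to-motor pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the four-class argmax decision, and the wheel-speed command table are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of the pipeline: CNN scores -> steering label -> motor command.
# The actual paper runs a trained CNN on camera frames at ~2 fps; here the CNN's
# per-class scores are taken as given.

DIRECTIONS = ["left", "right", "forward", "reverse"]

def predict_direction(class_scores):
    """Map the CNN's four output scores to a steering label via argmax."""
    best = max(range(len(class_scores)), key=lambda i: class_scores[i])
    return DIRECTIONS[best]

def motor_command(direction):
    """Translate a steering label into (left wheel, right wheel) speed signs,
    as a stand-in for the commands the microprocessor would send."""
    commands = {
        "left":    (-1,  1),  # pivot left: left wheel back, right wheel forward
        "right":   ( 1, -1),  # pivot right
        "forward": ( 1,  1),
        "reverse": (-1, -1),
    }
    return commands[direction]
```

In use, each frame would be passed through the CNN, its scores fed to `predict_direction`, and the resulting label turned into wheel commands, e.g. `motor_command(predict_direction([0.1, 0.1, 0.7, 0.1]))` yields `(1, 1)` for "forward".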
Jain*, A., Singh*, A., … Tripathi*, Prof. M. M. (2020). Indoor Navigation of Unmanned Grounded Vehicle using CNN. International Journal of Recent Technology and Engineering (IJRTE), 8(6), 1766–1771. https://doi.org/10.35940/ijrte.f7972.038620