Context Aware Self Learning Voice Assistant for Smart Navigation with Contextual LSTM


Abstract

Vision is a valuable gift, yet roughly 37 million people worldwide are visually impaired, of whom about 15 million live in India. They face numerous challenges in their daily lives and often depend on others to travel from place to place. Context-awareness plays a key role in assisting the visually impaired. Many mobile applications aim to ease their travel, but reliance on many additional resources makes them cumbersome. To address this challenge, the proposed mobile-cloud context-aware application acts as a voice chat-bot that provides context-aware travel assistance to visually challenged people and is implemented in specific public environments. The application is interactive and offers a help desk through which users can request the information they need via a speech interface. It relies on location-based services, including providers and geo-coordinates, to manipulate the latitude and longitude of places. The user's present location is tracked using location services; the distance from the user's exact location to the destination is pre-determined, and the application guides the user along the route with audible directions. By answering the user's queries, it supports the entire journey and helps the user travel independently. The application flow first captures the voice instruction and converts it into a text instruction. A contextual LSTM (Long Short-Term Memory) network handles the conversational strategy, analyzes each query, and supplies an answer to whatever question is posed. It also guides the visually impaired user to the destination by identifying obstacles and detecting objects in the way.
The application draws computational resources from cloud servers, such as location-specific resources, and in turn pushes all data back to the cloud server for reference and future use.
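The route-assistance step above computes the distance from the user's tracked coordinates to the destination. A minimal sketch of that computation, assuming the great-circle (haversine) formula over the latitude/longitude pairs returned by the location provider (the abstract does not specify which distance formula the application uses):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two geo-coordinates, in kilometres.
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))
```

Given the user's current fix and the destination's stored coordinates, the returned distance can be spoken back to the user along with the audible directions.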
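The conversational core described above is a contextual LSTM. As a rough illustration of what a single LSTM step does with an encoded query token, here is a one-unit LSTM cell in plain Python; the weights `w` are toy placeholder values, not the paper's trained parameters, and a real deployment would use a deep-learning framework with vectorized layers:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    # Single-unit LSTM step: scalar input x, scalar hidden/cell state.
    # w maps each gate name to an (input, recurrent, bias) weight triple.
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])    # input gate
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])    # forget gate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])    # output gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate
    c = f * c_prev + i * g        # new cell state: keep old memory, add new
    h = o * math.tanh(c)          # new hidden state, bounded in (-1, 1)
    return h, c
```

Feeding the encoded tokens of a spoken query through such steps one at a time lets the hidden state carry conversational context forward, which is what allows the chat-bot to answer follow-up questions.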

Citation (APA)
Silviya Nancy, J., Udhayakumar, S., Pavithra, J., Preethi, R., & Revathy, G. (2019). Context Aware Self Learning Voice Assistant for Smart Navigation with Contextual LSTM. In Communications in Computer and Information Science (Vol. 1075, pp. 441–450). Springer. https://doi.org/10.1007/978-981-15-0108-1_41
