Localization, together with scene and human activity sensing, provides primitive and essential information for upper-layer mobile applications. In this paper, we present a novel indoor localization system. We not only perform coarse localization using Wi-Fi/GSM signals but also use the smartphone's microphone to sense the environment in depth. By analyzing ambient sound and speech after voice activity detection, we can determine whether the user is at a given place and performing a given activity at the scheduled time. Ambient noise is used to identify the surrounding scene, and the user's current activities are inferred from speech sensing. Finally, accurate localization and user status information are obtained by fusing the above acoustic sensing results. According to the schedule prestored on the backend server, the location and sensing information are returned to the monitor who wants to know where the user is and what he or she is doing. We prototype the system on Android phones and evaluate it comprehensively with data collected from 61 indoor sites by 100 volunteers over two months of experiments using different phone models. We believe this is a novel approach to indoor localization that holds promise for real-world deployment. © 2013 Junzhao Du et al.
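The core acoustic step described above is matching an ambient sound signature of the current place against signatures recorded at known indoor sites. The following is a minimal sketch of that idea, not the authors' implementation: it summarizes a clip as a normalized spectral-band energy vector and picks the known site with the highest cosine similarity. The site names, band count, and synthetic clips are illustrative assumptions.

import numpy as np

def sound_signature(samples, n_bands=32):
    # Summarize a clip as L2-normalized log energies in n_bands frequency bands.
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    bands = np.array_split(spectrum, n_bands)
    energy = np.log1p(np.array([band.mean() for band in bands]))
    return energy / (np.linalg.norm(energy) + 1e-12)

def match_site(clip_signature, site_signatures):
    # Return the known site whose stored signature is closest in cosine similarity.
    return max(site_signatures, key=lambda site: float(clip_signature @ site_signatures[site]))

if __name__ == "__main__":
    sr = 16000                      # 1-second clips at 16 kHz
    t = np.arange(sr) / sr
    rng = np.random.default_rng(0)
    # Stand-ins for clips recorded at two hypothetical indoor sites:
    # broadband babble for a cafeteria, a narrowband 120 Hz hum for a machine room.
    cafeteria = rng.normal(size=sr)
    machine_room = np.sin(2 * np.pi * 120 * t) + 0.05 * rng.normal(size=sr)
    database = {
        "cafeteria": sound_signature(cafeteria),
        "machine_room": sound_signature(machine_room),
    }
    # A fresh clip from the cafeteria (same scene, slightly different noise).
    query = cafeteria + 0.05 * rng.normal(size=sr)
    print(match_site(sound_signature(query), database))  # prints "cafeteria"

In a deployment following the paper's pipeline, such a scene match would be combined with coarse Wi-Fi/GSM localization and speech-based activity cues before being compared against the schedule on the backend server.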
CITATION STYLE
Du, J., Chen, W., Liu, Y., Gu, Y., & Liu, H. (2013). Catch you as I can: Indoor localization via ambient sound signature and human behavior. International Journal of Distributed Sensor Networks, 2013. https://doi.org/10.1155/2013/434301