Recognizing physical contexts of mobile video learners via smartphone sensors

Current studies can effectively recognize several human activities within a single semantic context, but they fail to recognize the semantics of a single activity across different contexts. The main challenges are conflicting phone-usage patterns and strict energy-consumption requirements. This paper examines a classic pervasive-learning scenario, mobile video viewing, and validates the proposed recognition method by comprehensively considering recognition accuracy, effectiveness, and energy consumption. Readings from four carefully selected sensors are collected, and a wide range of machine learning algorithms are investigated. The results show that the combination of accelerometer, light, and sound sensors outperforms that of accelerometer, light, and gyroscope sensors; that energy-spectral features do not improve recognition accuracy; and that the system becomes robust within a few minutes. The proposed method is simple, effective, and practical for real applications of pervasive learning.
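The abstract's core idea, recognizing a learner's physical context from fused accelerometer, light, and sound readings, can be illustrated with a minimal sketch. The sketch below is not the paper's actual pipeline (the paper evaluates a range of machine learning algorithms); it is a simple nearest-centroid classifier over hand-crafted features, with all feature values and context labels invented for demonstration.

```python
import math

# Hypothetical training samples: (accelerometer variance, light in lux, sound in dB).
# Labels and values are illustrative, not from the paper.
TRAIN = {
    "walking outdoors": [(2.1, 9000.0, 68.0), (1.8, 11000.0, 72.0)],
    "sitting indoors":  [(0.05, 300.0, 45.0), (0.08, 250.0, 40.0)],
}

# Per-feature ranges, used to rescale so no single sensor dominates the distance.
_all = [s for samples in TRAIN.values() for s in samples]
SCALE = tuple((max(s[i] for s in _all) - min(s[i] for s in _all)) or 1.0
              for i in range(3))

def _centroid(samples):
    """Mean of each feature over the samples for one context."""
    return tuple(sum(s[i] for s in samples) / len(samples) for i in range(3))

CENTROIDS = {label: _centroid(samples) for label, samples in TRAIN.items()}

def classify(reading):
    """Return the context whose centroid is nearest in scaled feature space."""
    def dist(c):
        return math.sqrt(sum(((reading[i] - c[i]) / SCALE[i]) ** 2
                             for i in range(3)))
    return min(CENTROIDS, key=lambda label: dist(CENTROIDS[label]))
```

For example, a reading with high motion variance, bright light, and loud ambient sound would be assigned to "walking outdoors", while a still, dim, quiet reading maps to "sitting indoors". A real deployment would, as the paper notes, also need to balance recognition accuracy against the energy cost of continuous sensing.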




Xie, T., Zheng, Q., & Zhang, W. (2017). Recognizing physical contexts of mobile video learners via smartphone sensors. Knowledge-Based Systems, 136, 75–84.
