Activity recognition has become a popular research branch in the field of pervasive computing in recent years. A large number of experiments show that sensor-based activity data in activity recognition is characterized by variety, volume, and velocity. Deep learning, together with its various models, is one of the most effective ways of working with such activity data. Nevertheless, there is no clear understanding of why it performs so well or how to make it more effective. To address this problem, we first applied a convolutional neural network to the Human Activity Recognition Using Smartphones Data Set. Second, we visualized the sensor-based activity features extracted by the neural network. We then analyzed these feature visualizations in depth, explored the relationship between activities and features, and examined how the neural network identifies activities based on these features. Finally, we extracted the features most relevant to the activities and fed them into a DNN-based fusion model, which improved the classification rate to 96.1%. To our knowledge, this is the first work to visualize abstract sensor-based activity data features. Based on the results, the method proposed in the paper promises accurate classification for sensor-based activity recognition.
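The sketch below illustrates the kind of setup the abstract describes: a small 1D convolutional network over windows of inertial sensor signals whose intermediate feature maps can be pulled out for visualization. It is a minimal, hedged example using PyTorch on synthetic tensors shaped like the UCI Human Activity Recognition Using Smartphones data (9 inertial channels × 128 time steps, 6 activity classes); the layer sizes and the returned feature maps are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class HARConvNet(nn.Module):
    """Toy 1D CNN for windowed inertial signals; returns logits and feature maps."""
    def __init__(self, in_channels=9, num_classes=6):
        super().__init__()
        # Two temporal convolution blocks extract activity-related features.
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),                      # 128 -> 64 time steps
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),                      # 64 -> 32 time steps
        )
        self.classifier = nn.Linear(64 * 32, num_classes)

    def forward(self, x):
        feats = self.features(x)                  # (batch, 64, 32) feature maps
        logits = self.classifier(feats.flatten(1))
        return logits, feats                      # feats can be plotted/inspected

model = HARConvNet()
x = torch.randn(8, 9, 128)                        # synthetic batch of 8 windows
logits, feature_maps = model(x)
print(logits.shape, feature_maps.shape)           # [8, 6] and [8, 64, 32]
```

The returned `feature_maps` tensor is what one would render (e.g., as heat maps per channel) to study how learned features differ across activities, in the spirit of the visualization analysis described above.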
Xue, L., Xiandong, S., Lanshun, N., Jiazhen, L., Renjie, D., Dechen, Z., & Dianhui, C. (2018). Understanding and improving deep neural network for activity recognition. In International Conference on Mobile Multimedia Communications (MobiMedia) (Vol. 2018-June). ICST. https://doi.org/10.4108/eai.21-6-2018.2276632