Deep convolutional bidirectional LSTM based transportation mode recognition

32 citations · 37 Mendeley readers

Abstract

Traditional machine learning approaches for recognizing modes of transportation rely heavily on hand-crafted features, whose extraction requires domain knowledge. We therefore propose a hybrid deep learning model, the Deep Convolutional Bidirectional-LSTM (DCBL), which combines convolutional and bidirectional LSTM layers and is trained directly on raw sensor data to predict transportation modes. We compare our model against traditional machine learning approaches that train Support Vector Machine and Multilayer Perceptron models on extracted features. In our experiments, DCBL outperforms these feature-based methods in accuracy while simplifying the data-processing pipeline. The models are trained on the Sussex-Huawei Locomotion-Transportation (SHL) dataset. The submission of our team, Vahan, to the SHL recognition challenge uses an ensemble of DCBL models trained on raw data with different combinations of sensors and window sizes, and it achieves an F1-score of 0.96 on our test data.
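To make the described architecture concrete, the sketch below shows one plausible way to stack convolutional layers and a bidirectional LSTM over raw sensor windows, as the abstract outlines. It is not the authors' exact configuration: the window length, channel count, filter sizes, and layer widths are assumptions for illustration, and only the number of output classes (8 SHL transportation modes) comes from the challenge setup.

```python
# Minimal sketch of a Conv + BiLSTM classifier over raw sensor windows.
# Hypothetical hyperparameters; not the published DCBL configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW_LEN = 500   # assumed samples per window
N_CHANNELS = 6     # assumed raw channels (e.g. 3-axis accelerometer + 3-axis gyroscope)
N_CLASSES = 8      # SHL transportation modes

def build_dcbl(window_len=WINDOW_LEN, n_channels=N_CHANNELS, n_classes=N_CLASSES):
    model = models.Sequential([
        layers.Input(shape=(window_len, n_channels)),
        # Convolutional layers learn local motion features directly from raw signals.
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        # A bidirectional LSTM summarizes temporal context in both directions.
        layers.Bidirectional(layers.LSTM(128)),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

An ensemble in the spirit of the abstract could be formed by training several such models on different sensor subsets and window sizes and averaging their softmax outputs before taking the argmax; the exact combination scheme used by the authors is not specified here.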

Citation (APA)

Jeyakumar, J. V., Sandha, S. S., Lee, E. S., Tausik, N., Xia, Z., & Srivastava, M. (2018). Deep convolutional bidirectional LSTM based transportation mode recognition. In UbiComp/ISWC 2018 - Adjunct Proceedings of the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2018 ACM International Symposium on Wearable Computers (pp. 1606–1615). Association for Computing Machinery, Inc. https://doi.org/10.1145/3267305.3267529
