Classification of human activities from wearable sensor data is challenged by inter-subject variability and by the limits of resource-constrained platforms. We address these issues with SincEMG, a deep neural network that exploits digital signal processing concepts and transfer learning to reduce model size for activity recognition on raw sensor data. The model's first layer decomposes signals into frequency bands using finite impulse response filters optimized directly from the data. Subsequent convolutional layers downsample across time and aggregate the first layer's band outputs, while batch normalization and dropout regularize the intermediate representations. This approach reduces compute requirements by decreasing the number of learned parameters and eliminating the need for significant data pre-processing. In addition, the bandpass filters learned by the first layer provide insight into which regions of the source spectrum are predictive. We evaluate SincEMG on two publicly available surface electromyography datasets. Our model uses far fewer parameters and achieves state-of-the-art results: 98.53% accuracy on the 7-class task and 68.45% on the 18-class task.
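The abstract describes the architecture only at a high level: a first layer of learnable FIR bandpass filters shared across sensor channels, followed by strided convolutions that downsample in time and aggregate the bands, regularized with batch normalization and dropout. The PyTorch sketch below is not the authors' implementation; the filter count, kernel length, sampling rate, dropout rate, and the two-stage convolutional body are all assumed values chosen for illustration of the general idea.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SincBandpassConv(nn.Module):
    """Learnable bandpass FIR filter bank (SincNet-style parameterization).

    Each filter is defined only by its low cutoff and bandwidth, so the layer
    learns 2 * n_filters values instead of n_filters * kernel_size weights as
    a standard Conv1d would. The same filter bank is shared across channels.
    """

    def __init__(self, n_filters=16, kernel_size=65, sample_rate=1000.0):
        super().__init__()
        assert kernel_size % 2 == 1, "odd length keeps the filter symmetric"
        self.kernel_size = kernel_size
        self.sample_rate = sample_rate

        # Hypothetical initialization: cutoffs spread over the usable band.
        self.low_hz = nn.Parameter(
            torch.linspace(5.0, sample_rate / 2 - 60.0, n_filters))
        self.band_hz = nn.Parameter(torch.full((n_filters,), 50.0))

        # Fixed parts of every kernel: time axis (seconds) and Hamming window.
        n = torch.arange(-(kernel_size // 2), kernel_size // 2 + 1).float()
        self.register_buffer("t", n / sample_rate)
        self.register_buffer("window", torch.hamming_window(kernel_size))

    def forward(self, x):
        # x: (batch, channels, samples)
        b, c, _ = x.shape

        # Constrain cutoffs to valid, ordered frequencies below Nyquist.
        low = torch.abs(self.low_hz)
        high = torch.clamp(low + torch.abs(self.band_hz),
                           max=self.sample_rate / 2)

        # Ideal bandpass impulse response = difference of two sinc low-passes,
        # tapered by a Hamming window (a windowed-sinc FIR design).
        t = self.t.unsqueeze(0)                                  # (1, K)

        def lp(fc):  # windowed-sinc low-pass kernel at cutoff fc (Hz)
            return 2 * fc.unsqueeze(1) * torch.sinc(2 * fc.unsqueeze(1) * t)

        kernels = (lp(high) - lp(low)) * self.window             # (F, K)
        kernels = kernels / (kernels.abs().sum(dim=1, keepdim=True) + 1e-8)

        # Share the filter bank across channels by folding channels into batch.
        y = F.conv1d(x.reshape(b * c, 1, -1), kernels.unsqueeze(1),
                     padding=self.kernel_size // 2)
        return y.reshape(b, c * kernels.shape[0], -1)            # (B, C*F, T)


class SincHARSketch(nn.Module):
    """Sinc filter bank -> strided convolutions (temporal downsampling and
    band aggregation) with batch norm and dropout -> linear classifier."""

    def __init__(self, n_channels=8, n_classes=7, n_filters=16,
                 sample_rate=1000.0):
        super().__init__()
        self.sinc = SincBandpassConv(n_filters, sample_rate=sample_rate)
        self.body = nn.Sequential(
            nn.Conv1d(n_channels * n_filters, 32, kernel_size=5, stride=4),
            nn.BatchNorm1d(32), nn.ReLU(), nn.Dropout(0.3),
            nn.Conv1d(32, 32, kernel_size=5, stride=4),
            nn.BatchNorm1d(32), nn.ReLU(), nn.Dropout(0.3),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                      # x: (batch, channels, samples)
        return self.head(self.body(self.sinc(x)).squeeze(-1))


# Example: a batch of 4 windows, 8 sEMG channels, 400 raw samples each.
logits = SincHARSketch()(torch.randn(4, 8, 400))
print(logits.shape)   # torch.Size([4, 7])
```

Because each bandpass filter contributes only two learned values (a low cutoff and a bandwidth), the first layer's parameter count is independent of the kernel length, which is where a sinc-parameterized front end saves parameters relative to a standard learned convolution.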
Stuart, M., & Manic, M. (2021). Deep Learning Shared Bandpass Filters for Resource-Constrained Human Activity Recognition. IEEE Access, 9, 39089–39097. https://doi.org/10.1109/ACCESS.2021.3064031