In recent years, smart home technology has become prevalent and important for a wide range of applications. A typical smart home system consists of sensing nodes that send raw data to a cloud server, which performs inference using a Machine Learning (ML) model trained offline. This approach suffers from high energy and communication costs and raises privacy concerns. To address these issues, researchers have proposed hierarchy-aware models, which distribute the inference computation across the sensor network, with each node processing part of the inference. While hierarchical models significantly reduce these overheads, they are computationally intensive to run on the resource-constrained devices typical of smart home deployments. In this work, we present a novel approach that combines Hierarchy-aware Neural Networks (HNN) with the variational dropout technique to generate sparse models with low computational overhead, allowing them to run on edge devices with limited resources. We evaluate our approach on an extensive real-world smart home deployment consisting of several edge devices. Measurements across different devices show that, without significant loss of accuracy, energy consumption can be reduced by up to 35% over the state of the art.
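The abstract does not describe the authors' implementation, so the following is only a minimal sketch of the sparsification idea it names: a fully connected layer regularized with variational dropout (in the spirit of Molchanov et al., 2017), where weights with a high inferred dropout rate are pruned at inference time. The class name VDLinear, the KL approximation constants, and the log_alpha_threshold value of 3.0 are illustrative assumptions, not the paper's code.

```python
# Sketch of a linear layer sparsified by variational dropout (assumed details,
# not the paper's implementation).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class VDLinear(nn.Module):
    """Fully connected layer whose weights are pruned via variational dropout."""

    def __init__(self, in_features, out_features, log_alpha_threshold=3.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Per-weight log-variance; log_alpha = log(sigma^2) - log(w^2).
        self.log_sigma2 = nn.Parameter(torch.full((out_features, in_features), -10.0))
        self.log_alpha_threshold = log_alpha_threshold
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))

    def log_alpha(self):
        return torch.clamp(self.log_sigma2 - torch.log(self.weight ** 2 + 1e-8),
                           -10.0, 10.0)

    def forward(self, x):
        if self.training:
            # Local reparameterization: sample the pre-activations directly.
            mean = F.linear(x, self.weight, self.bias)
            var = F.linear(x ** 2, torch.exp(self.log_sigma2)) + 1e-8
            return mean + torch.sqrt(var) * torch.randn_like(mean)
        # At inference, zero out weights with a high dropout rate (high log_alpha).
        mask = (self.log_alpha() < self.log_alpha_threshold).float()
        return F.linear(x, self.weight * mask, self.bias)

    def kl(self):
        # Approximate KL penalty (Molchanov et al., 2017) added to the task loss.
        k1, k2, k3 = 0.63576, 1.87320, 1.48695
        la = self.log_alpha()
        neg_kl = k1 * torch.sigmoid(k2 + k3 * la) - 0.5 * F.softplus(-la) - k1
        return -neg_kl.sum()
```

In such a setup, training would minimize the task loss plus a weighted sum of the per-layer kl() terms; the fraction of weights whose log-alpha exceeds the threshold gives the achievable sparsity, which is what reduces compute and energy on resource-constrained edge devices.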
CITATION
Chandrasekaran, R., Guo, Y., Thomas, A., Menarini, M., Ostertag, M. H., Kim, Y., & Rosing, T. (2019). Efficient sparse processing in smart home applications. In SenSys-ML 2019 - Proceedings of the 1st Workshop on Machine Learning on Edge in Sensor Systems, Part of SenSys 2019 (pp. 19–24). Association for Computing Machinery, Inc. https://doi.org/10.1145/3362743.3362963