Abstract
AI-driven Internet of Things (IoT) deployments use AI inference to characterize data harvested from IoT sensors. Together, AI inference and IoT support smart buildings, smart cities, and autonomous vehicles. However, AI inference consumes precious energy, drains batteries, and shortens IoT lifetimes. Deep sleep modes on IoT processors can save energy during long, uninterrupted idle periods. When AI software is updated frequently, scheduling policies must choose between interrupting deep sleep and degrading AI inference by delaying updates. Scheduling is challenging because update arrivals and processing needs are stochastic and because updates cannot be delayed indefinitely. This paper studies scheduling policies when (1) updates (tasks) arrive frequently, (2) updates must be processed within staleness limits, and (3) energy footprint is the metric of merit. We define a scheduling policy as a sequence of choices that decide when updates are applied. We use random walks to explore the space of scheduling policies and 2^k r design of experiments to quantify primary effects and interactions between factors. We conducted six 2^k r tests with 5× replication each. Each test executes 1,000,000 random walks and computes their energy footprint. We simulated multiple IoT deployments, e.g., varying the number of AI inference components from 5 to 500. The best random-walk policy uses much less energy than the 95th- and 99th-percentile policies. First-come-first-serve and shortest-job-first policies use 7× more energy than the best policy.
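The random-walk exploration described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: all constants (power draws, wake cost, staleness limit, processing time) and the Poisson-like arrival model are assumptions chosen for the example. Each walk is one scheduling policy: at every update arrival it either interrupts deep sleep and batch-applies all pending updates, or defers, with a wake forced whenever the oldest pending update would exceed the staleness limit.

```python
import random

# Illustrative parameters (assumptions, not values from the paper):
# power draws in mW (= mJ/s), wake transition cost in mJ, times in seconds.
P_SLEEP = 0.01            # deep-sleep power draw
P_ACTIVE = 120.0          # active power draw while applying updates
WAKE_COST = 5.0           # energy spent exiting deep sleep
STALENESS_LIMIT = 60.0    # updates must be applied within this bound
PROCESS_TIME = 1.0        # time to apply one queued update

def walk_energy(arrivals, rng, wake_prob=0.5):
    """One random walk over wake/defer choices for sorted update arrival
    times; returns that policy's total energy footprint in mJ."""
    energy, clock, pending = 0.0, 0.0, []
    for t in arrivals:
        pending.append(t)
        forced = t - pending[0] >= STALENESS_LIMIT  # oldest update going stale
        if forced or rng.random() < wake_prob:
            energy += P_SLEEP * max(0.0, t - clock)            # sleep interval
            energy += WAKE_COST                                # deep-sleep exit
            energy += P_ACTIVE * PROCESS_TIME * len(pending)   # batch-apply
            clock = t + PROCESS_TIME * len(pending)
            pending.clear()
    if pending:  # final forced flush at the staleness deadline
        t = pending[0] + STALENESS_LIMIT
        energy += P_SLEEP * max(0.0, t - clock) + WAKE_COST
        energy += P_ACTIVE * PROCESS_TIME * len(pending)
    return energy

def best_of_walks(arrivals, walks=1000, seed=0):
    """Explore the policy space with many random walks; keep the cheapest."""
    rng = random.Random(seed)
    return min(walk_energy(arrivals, rng) for _ in range(walks))

# Poisson-like arrivals; compare FCFS (wake on every arrival, wake_prob=1)
# against the best policy found by random walks.
rng = random.Random(1)
t, arrivals = 0.0, []
for _ in range(50):
    t += rng.expovariate(1 / 10.0)  # mean inter-arrival time: 10 s
    arrivals.append(t)
fcfs = walk_energy(arrivals, random.Random(0), wake_prob=1.0)
best = best_of_walks(arrivals)
print(fcfs, best)  # batching under the staleness limit avoids wake costs
```

Because every update is eventually processed, the active-processing energy is identical across policies; the savings come from paying the deep-sleep exit cost fewer times, which is why batched policies beat wake-on-every-arrival FCFS here.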
Citation
Babu, N. T. R., & Stewart, C. (2019). Energy, latency and staleness tradeoffs in AI-driven IoT. In Proceedings of the 4th ACM/IEEE Symposium on Edge Computing, SEC 2019 (pp. 425–430). Association for Computing Machinery, Inc. https://doi.org/10.1145/3318216.3363381