Abstract
While large deep neural networks excel at general video analytics tasks, their significant demand on computing capacity makes them infeasible for real-time inference on resource-constrained end cameras. In this paper, we propose an edge-assisted framework that continuously updates the lightweight model deployed on end cameras to achieve accurate predictions in adverse environments. The framework consists of three modules: a key frame extractor, a trigger controller, and a retraining manager. The low-cost key frame extractor obtains frames that best represent the current environment. These frames are then transmitted to the edge server and buffered as retraining data for the model update. Once the trigger controller detects a significant accuracy drop on the selected frames, the retraining manager outputs the optimal retraining configuration, balancing accuracy against time cost. We prototype our system on two end devices of different computing capacities paired with one edge server. The results demonstrate that our approach significantly improves accuracy across all tested adverse environment scenarios (by up to 24%) and cuts retraining time by more than 50% compared to existing benchmarks.
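The trigger-and-retrain workflow the abstract describes can be sketched as follows. This is a minimal illustrative sketch only: all class names, thresholds, and configuration fields below are hypothetical assumptions, not taken from the paper's implementation.

```python
# Hypothetical sketch of the edge-assisted model-update loop: a trigger
# controller watches accuracy on buffered key frames, and a retraining
# manager picks a configuration trading accuracy gain against retraining
# time. All names and numbers here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class RetrainConfig:
    epochs: int   # retraining epochs at the edge server (assumed knob)
    frames: int   # number of buffered key frames to train on (assumed knob)


class TriggerController:
    """Fires when accuracy on the selected key frames drops below a threshold."""

    def __init__(self, threshold: float):
        self.threshold = threshold

    def should_retrain(self, accuracy: float) -> bool:
        return accuracy < self.threshold


def select_config(accuracy_drop: float) -> RetrainConfig:
    # Larger drops warrant more retraining effort; the actual system solves
    # an optimization over configurations, which this toy rule stands in for.
    if accuracy_drop > 0.2:
        return RetrainConfig(epochs=10, frames=512)
    return RetrainConfig(epochs=3, frames=128)


# Example: accuracy on the buffered key frames has fallen to 0.61.
trigger = TriggerController(threshold=0.75)
if trigger.should_retrain(0.61):
    cfg = select_config(accuracy_drop=0.75 - 0.61)
    print(cfg)  # RetrainConfig(epochs=3, frames=128)
```

The sketch shows only the control flow; in the real system the key frame extractor, network transfer, and edge-side retraining would fill in the stubbed pieces.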
Citation
Kong, Y., Yang, P., & Cheng, Y. (2023). Edge-Assisted On-Device Model Update for Video Analytics in Adverse Environments. In MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia (pp. 9051–9060). Association for Computing Machinery, Inc. https://doi.org/10.1145/3581783.3612585