Abstract
Deep neural networks (DNNs) have been widely used in many applications. Because DNN-based models are over-parameterized, mobile devices face computation and energy constraints when deploying such models for machine learning tasks. Thus, many works focus on compressing a large-scale model into a small-scale one. In addition, training a model on a mobile device with the assistance of an edge server is an emerging solution in the edge computing environment. However, recent research has found that DNNs are vulnerable to adversarial examples: carefully crafted inputs that can fool a DNN-based model into making incorrect predictions. In particular, a DNN-based model can pose risks when used in safety-critical settings. To address this problem, we design a framework for generating a robust deep-convolutional-neural-network-based compressed model in the edge computing environment. The model is partitioned and trained jointly by the mobile device and the edge server. The robust compressed model is constructed mainly via model compression and model robustness. For model robustness, a defensive mechanism is proposed to enhance the compressed model's robustness against adversarial examples. Furthermore, the defense method takes the weight distribution of the compressed model into account to improve the model's accuracy. The small-scale compressed model is effective and robust in collaborative device-server inference, providing recognition tasks for nearby devices. Moreover, it is practical to deploy on mobile devices owing to its small size. Experimental results show that the generated compressed model is strongly robust against adversarial examples while maintaining high accuracy.
Yan, Y., & Pei, Q. (2019). A Robust Deep-Neural-Network-Based Compressed Model for Mobile Device Assisted by Edge Server. IEEE Access, 7, 179104–179117. https://doi.org/10.1109/ACCESS.2019.2958406