On the adversarial robustness of full integer quantized TinyML models at the edge

Abstract

The recent surge in deploying machine learning (ML) models at the edge has transformed various industries by enabling real-time decision-making on resource-constrained devices, such as TinyML models on microcontrollers. However, this trend brings with it a critical concern: the vulnerability of these models to adversarial examples. ML at the edge offers tremendous potential but demands heightened vigilance with respect to cybersecurity. Our research shows that any adversarial robustness attained in standard TensorFlow models through adversarial training can be completely nullified during post-training full integer quantization, which is applied to meet the resource constraints of edge devices. This finding raises crucial questions about the adversarial robustness of TinyML models on microcontrollers limited to integer-only operations. As edge computing continues to proliferate, addressing these vulnerabilities and developing lightweight defenses tailored to resource-constrained environments becomes imperative for ensuring the security and trustworthiness of edge-deployed ML models.
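
The quantization step discussed in the abstract corresponds to TensorFlow Lite's post-training full integer quantization workflow. Below is a minimal sketch of that conversion, assuming a trained (e.g. adversarially trained) tf.keras model and a representative calibration-data generator; the function and variable names are illustrative, not taken from the paper's artifacts.

```python
# Minimal sketch: post-training full integer quantization with TensorFlow Lite.
# Assumes `model` is a trained tf.keras model and `representative_data_gen`
# is a generator yielding batches of calibration inputs.
import tensorflow as tf

def quantize_full_integer(model, representative_data_gen):
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data_gen
    # Restrict to integer-only kernels, as required by microcontrollers
    # limited to integer-only operations.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()  # TFLite flatbuffer (bytes) for deployment
```

Evaluating adversarial examples against the resulting int8 model, rather than against the original float model, is what exposes the robustness gap the paper describes.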

Citation (APA)
Preuveneers, D., Verheyen, W., Joos, S., & Joosen, W. (2023). On the adversarial robustness of full integer quantized TinyML models at the edge. In MiddleWEdge 2023 - Proceedings of the 2nd International Workshop on Middleware for the Edge, Part of: ACM/IFIP Middleware 2023 (pp. 7–12). Association for Computing Machinery, Inc. https://doi.org/10.1145/3630180.3631201
