On-Device Training of Machine Learning Models on Microcontrollers with Federated Learning

Abstract

Recent progress in machine learning frameworks has made it possible to perform inference with models on cheap, tiny microcontrollers. Training of machine learning models for these tiny devices, however, is typically done separately on powerful computers, where the training process has abundant CPU and memory resources to process large stored datasets. In this work, we explore a different approach: training the machine learning model directly on the microcontroller and extending the training process with federated learning. We implement this approach for a keyword spotting task and conduct experiments with real devices to characterize the learning behavior and resource consumption under different hyperparameters and federated learning configurations. We observe that when each device trains locally on less data, more frequent federated learning rounds reduce the training loss faster, but at the cost of higher bandwidth usage and longer training time. Our results indicate that, depending on the specific application, the trade-off between the system's requirements and its resource usage must be determined.
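The training scheme the abstract describes — each microcontroller performing a few local training steps on its own data, with a server periodically averaging the resulting models — follows the general federated-averaging (FedAvg) pattern. The sketch below simulates that loop in Python with a toy linear model; the function names, the model, and all hyperparameters are illustrative assumptions, not the authors' implementation. It also illustrates the trade-off noted in the abstract: more frequent rounds mean more model exchanges (bandwidth) for faster convergence.

```python
# Minimal sketch of a federated-averaging loop (illustrative, not the
# paper's implementation): each simulated device runs a few local SGD
# epochs, then the server averages the resulting weights.
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, X, y, lr=0.1, epochs=5):
    """A few full-batch gradient steps of linear regression on one device's data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three simulated devices, each holding a small local dataset
# drawn from the same underlying linear model.
w_true = np.array([1.0, -2.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    y = X @ w_true + 0.01 * rng.normal(size=20)
    devices.append((X, y))

w_global = np.zeros(2)
for _ in range(10):  # federated rounds: each round costs one model exchange per device
    local_ws = [local_train(w_global, X, y) for X, y in devices]
    w_global = np.mean(local_ws, axis=0)  # server-side averaging step
```

With fewer local epochs per round, more rounds (and hence more communication) are needed to reach the same loss — the bandwidth-versus-convergence trade-off the experiments characterize.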

Citation (APA)

Giménez, N. L., Grau, M. M., Centelles, R. P., & Freitag, F. (2022). On-Device Training of Machine Learning Models on Microcontrollers with Federated Learning. Electronics (Switzerland), 11(4). https://doi.org/10.3390/electronics11040573
