Low-Overhead Reinforcement Learning-Based Power Management Using 2QoSM


Abstract

As the computational systems of even embedded devices grow ever more powerful, there is a need for more effective and proactive methods of dynamic power management. The work presented in this paper demonstrates the effectiveness of a reinforcement-learning-based dynamic power manager placed in a software framework. The combination of Q-learning for policy determination with the framework's software abstractions provides many of the benefits of co-design, namely good performance, responsiveness, and application guidance, with the flexibility of easily changing policies or platforms. The Q-learning-based Quality of Service Manager (2QoSM) is implemented on an autonomous robot built on a complex, powerful embedded single-board computer (SBC) running a high-resolution path-planning algorithm. We find that the 2QoSM reduces power consumption by up to 42% compared to the Linux ondemand governor and by 10.2% compared to a state-of-the-art situation-aware governor. Moreover, performance as measured by path error improves by up to 6.1%, all while saving power.
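To illustrate the kind of policy learning the abstract describes, the sketch below shows a minimal tabular Q-learning loop that picks a CPU frequency level from an observed load state. The state bins, action set, reward shape, and toy load model are all assumptions made for demonstration; this is not the 2QoSM implementation from the paper.

```python
import random

N_STATES = 4      # discretized CPU-load bins (assumed)
N_ACTIONS = 3     # frequency levels: 0 = low, 1 = mid, 2 = high (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: expected return for each (load state, frequency) pair
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def reward(state, action):
    """Toy reward: penalize power (higher frequency) and QoS misses
    (serving a high-load state at a low frequency)."""
    power_cost = action
    qos_penalty = 2.0 * max(0, state - action)
    return -(power_cost + qos_penalty)

def step(state, action):
    """Toy load model: the load state drifts randomly by one bin."""
    return min(N_STATES - 1, max(0, state + random.choice((-1, 0, 1))))

def choose(state):
    if random.random() < EPSILON:                 # explore
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])  # exploit

random.seed(0)
s = 0
for _ in range(5000):
    a = choose(s)
    r = reward(s, a)
    s_next = step(s, a)
    # Standard one-step Q-learning update
    Q[s][a] += ALPHA * (r + GAMMA * max(Q[s_next]) - Q[s][a])
    s = s_next

# Greedy policy after training: chosen frequency per load state
policy = [max(range(N_ACTIONS), key=lambda a: Q[st][a]) for st in range(N_STATES)]
print(policy)
```

With this reward shape the learned greedy policy scales frequency up with load, which is the qualitative behavior a learning-based governor aims for; the paper's actual state, action, and reward definitions differ.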

Citation (APA)

Giardino, M., Schwyn, D., Ferri, B., & Ferri, A. (2022). Low-Overhead Reinforcement Learning-Based Power Management Using 2QoSM. Journal of Low Power Electronics and Applications, 12(2). https://doi.org/10.3390/jlpea12020029
