In this paper, a reinforcement learning-based throughput-on-demand (ToD) provisioning dynamic power management method (RLTDPM) is proposed for sustaining perpetual operation and satisfying the ToD requirements of today's energy harvesting wireless sensor nodes (EHWSNs). The RLTDPM monitors the environmental state of the EHWSN and adjusts its operational duty cycle under the criterion of energy neutrality to meet the demanded throughput. The outcome of each observation-adjustment interaction is then evaluated by a feedback reward that represents how well the ToD request is met, and this observation-adjustment-evaluation process, known as reinforcement learning, continues. After the learning process, the RLTDPM can autonomously adjust the duty cycle to satisfy the ToD requirement and, in doing so, sustain the perpetual operation of the EHWSN. Simulations of the proposed RLTDPM were performed on a wireless sensor node powered by a battery and a solar cell for image sensing tasks. Experimental results demonstrate that the achieved demanded throughput is improved by 10.7% for the most stringent ToD requirement, while the residual battery energy is improved by 7.4%, compared with an existing DPM algorithm for EHWSNs performing image sensing.
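The observation-adjustment-evaluation loop summarized above can be illustrated with a small tabular Q-learning sketch, where the state is a discretized battery level, the action is a duty cycle, and the reward blends ToD satisfaction with energy neutrality. The state discretization, reward weights, and toy harvesting/consumption models below are assumptions for illustration only, not the paper's actual RLTDPM formulation:

```python
import random

# Candidate actions: operational duty cycles the node can select.
DUTY_CYCLES = [0.2, 0.4, 0.6, 0.8, 1.0]
# Discretized battery states (illustrative granularity).
BATTERY_LEVELS = 5

def reward(throughput, demand, battery, capacity):
    """Reward rises when demanded throughput is met without draining
    the battery (assumed 70/30 weighting, not from the paper)."""
    tod_term = min(throughput / demand, 1.0)   # ToD satisfaction
    energy_term = battery / capacity           # energy-neutrality incentive
    return 0.7 * tod_term + 0.3 * energy_term

def train(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning over (battery state, duty cycle) pairs."""
    rng = random.Random(seed)
    q = [[0.0] * len(DUTY_CYCLES) for _ in range(BATTERY_LEVELS)]
    capacity = 100.0
    for _ in range(episodes):
        battery = rng.uniform(20.0, capacity)
        for _ in range(24):  # one simulated day, hourly decisions
            state = min(int(battery / capacity * BATTERY_LEVELS),
                        BATTERY_LEVELS - 1)
            if rng.random() < epsilon:               # explore
                a = rng.randrange(len(DUTY_CYCLES))
            else:                                    # exploit
                a = max(range(len(DUTY_CYCLES)), key=lambda i: q[state][i])
            duty = DUTY_CYCLES[a]
            harvested = rng.uniform(0.0, 5.0)        # toy solar model
            consumed = 4.0 * duty                    # consumption scales with duty
            battery = min(max(battery + harvested - consumed, 0.0), capacity)
            throughput = 10.0 * duty                 # toy throughput model
            r = reward(throughput, demand=8.0,
                       battery=battery, capacity=capacity)
            next_state = min(int(battery / capacity * BATTERY_LEVELS),
                             BATTERY_LEVELS - 1)
            # Standard Q-learning update.
            q[state][a] += alpha * (r + gamma * max(q[next_state]) - q[state][a])
    return q
```

After training, the greedy action per battery state approximates the learned duty-cycle policy; the paper's method additionally conditions on harvested-energy observations and a ToD request signal.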
Hsu, R. C., Liu, C. T., & Wang, H. L. (2014). A reinforcement learning-based ToD provisioning dynamic power management for sustainable operation of energy harvesting wireless sensor node. IEEE Transactions on Emerging Topics in Computing, 2(2), 181–191. https://doi.org/10.1109/TETC.2014.2316518