Transmission policies for energy harvesting sensors based on Markov chain energy supply

Abstract

Owing to low energy harvesting rates and the stochastic nature of energy harvesting processes, energy management of energy harvesting sensors remains crucial for body area networks. Transmission policies for energy harvesting sensors with Markov chain energy supply over time-varying channels are formulated as an infinite-horizon discounted-reward Markov decision problem, under the assumption that sensor lifetimes are geometrically distributed. In this paper, we first propose a low-storage transmission policy for body area networks based on the probability of successful transmission. We then narrow the feasible region of the policy parameters from the real domain to a finite discrete set, which makes it tractable to obtain the optimal parameters by combining the optimality equations with an enumeration algorithm. Finally, numerical results show that the proposed transmission policies closely approximate the performance of the optimal policies derived by the policy iteration algorithm, while requiring far less storage.
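To illustrate the policy iteration algorithm the abstract uses as its optimal baseline, here is a minimal sketch of a discounted-reward MDP for an energy-harvesting sensor. All parameters (battery capacity `B`, harvest probability `p_harvest`, success probability `p_success`, discount factor `gamma`) are hypothetical and not taken from the paper, and a Bernoulli arrival process stands in for the paper's general Markov chain energy supply and time-varying channel:

```python
import numpy as np

# Toy discounted MDP for an energy-harvesting sensor.
# All parameters below are illustrative assumptions, not values from the paper.
B = 5            # battery capacity (energy units)
p_harvest = 0.4  # probability that one energy unit arrives per slot
p_success = 0.8  # probability that a transmission succeeds
gamma = 0.9      # discount factor

states = range(B + 1)   # battery level
actions = [0, 1]        # 0 = idle, 1 = transmit (consumes one energy unit)

def step_distribution(s, a):
    """Return ({next_state: prob}, expected immediate reward) for (s, a)."""
    use = a if s >= 1 else 0        # cannot transmit with an empty battery
    reward = p_success * use        # expected reward of a transmission attempt
    s_after = s - use
    nxt = {min(s_after + 1, B): p_harvest}
    nxt[s_after] = nxt.get(s_after, 0.0) + (1 - p_harvest)
    return nxt, reward

def policy_iteration():
    policy = np.zeros(B + 1, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
        P = np.zeros((B + 1, B + 1))
        R = np.zeros(B + 1)
        for s in states:
            nxt, R[s] = step_distribution(s, policy[s])
            for s2, pr in nxt.items():
                P[s, s2] = pr
        V = np.linalg.solve(np.eye(B + 1) - gamma * P, R)
        # Policy improvement: greedy one-step lookahead on V.
        new_policy = policy.copy()
        for s in states:
            qs = [r + gamma * sum(pr * V[s2] for s2, pr in nxt.items())
                  for a in actions
                  for nxt, r in [step_distribution(s, a)]]
            new_policy[s] = int(np.argmax(qs))
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy

policy, V = policy_iteration()
print(policy)  # optimal action for each battery level 0..B
```

Because the success probability here is constant, the optimal policy is simply to transmit whenever the battery is non-empty; the threshold structure studied in the paper becomes non-trivial once the channel state varies over time.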

APA

Zhu, W., Xu, P., Zheng, M., Wu, G., & Wang, H. (2016). Transmission policies for energy harvesting sensors based on Markov chain energy supply. EAI Endorsed Transactions on Energy Web, 16(8), 1–5. https://doi.org/10.4108/eai.28-9-2015.2261406
