Overhead-controlled routing in WSNs with reinforcement learning

Abstract

The use of wireless sensor networks in industry has increased over the past few years, bringing multiple benefits compared to wired systems, such as network flexibility and manageability. Such networks consist of a possibly large number of small, autonomous sensor and actuator devices with wireless communication capabilities. The data collected by the sensors are sent, either directly or through intermediary nodes along the network, to a base station called the sink node. Data routing in this environment is an essential matter, since it is tightly bound to energy efficiency and thus to network lifetime. This work investigates the application of a routing technique based on the Q-learning algorithm from reinforcement learning to a wireless sensor network, using an NS-2 simulated environment. Several metrics, such as routing overhead, data packet delivery rate, and delay, are used to validate the proposal by comparing it with other solutions existing in the literature. © 2012 Springer-Verlag.
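To illustrate the general idea of Q-learning-based next-hop selection described in the abstract, the following is a minimal sketch. The topology, reward shaping, and parameter values are illustrative assumptions, not the scheme or the NS-2 setup evaluated in the paper.

```python
# Hypothetical sketch: Q-learning for next-hop selection in a tiny WSN topology.
# All node names, rewards, and parameters are assumptions for illustration only.
import random

alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount factor, exploration rate

# Static topology: each sensor node's neighbor list; "sink" is the base station.
neighbors = {
    "n1": ["n2", "n3"],
    "n2": ["n1", "n3", "sink"],
    "n3": ["n1", "n2", "sink"],
}

# Q[node][next_hop]: estimated value of forwarding a packet via that neighbor.
Q = {node: {hop: 0.0 for hop in hops} for node, hops in neighbors.items()}

def choose_next_hop(node):
    """Epsilon-greedy choice of the neighbor to forward the packet to."""
    if random.random() < epsilon:
        return random.choice(neighbors[node])
    return max(Q[node], key=Q[node].get)

def update(node, next_hop, reward):
    """Standard Q-learning update after forwarding one packet."""
    best_next = 0.0 if next_hop == "sink" else max(Q[next_hop].values())
    Q[node][next_hop] += alpha * (reward + gamma * best_next - Q[node][next_hop])

# Simulate forwarding packets from n1 toward the sink.
for _ in range(500):
    node = "n1"
    while node != "sink":
        hop = choose_next_hop(node)
        # Illustrative reward: small per-hop cost, bonus for reaching the sink.
        reward = 1.0 if hop == "sink" else -0.1
        update(node, hop, reward)
        node = hop

print(Q["n1"])  # learned preference among n1's neighbors
```

In a real deployment the reward would typically also reflect residual node energy and link quality, so that routes balance delivery delay against network lifetime, which is the trade-off the paper's metrics (routing overhead, delivery rate, delay) are meant to capture.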

Cite

APA

Campos, L. R. S., Oliveira, R. D., Melo, J. D., & Neto, A. D. D. (2012). Overhead-controlled routing in WSNs with reinforcement learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7435 LNCS, pp. 622–629). https://doi.org/10.1007/978-3-642-32639-4_75
