ADOPEL: Adaptive Data Collection Protocol Using Reinforcement Learning for VANETs

Abstract

Efficient propagation of information over vehicular wireless networks has long been a focus of the research community. However, few contributions have addressed vehicular data collection, and fewer still have applied learning techniques to such a highly dynamic networking environment. These learning approaches make the collection operation more reactive to node mobility and topology changes than traditional techniques, which simply adapt solutions originally designed for MANETs. To exploit the efficiency opportunities offered by learning techniques, an Adaptive Data collection Protocol using reinforcement Learning (ADOPEL) is proposed for VANETs. The proposal is based on a distributed learning algorithm with a reward function that takes into account the delay and the number of aggregatable packets. The Q-learning technique allows vehicles to optimize their interactions with the highly dynamic environment through their experience in the network. Compared to non-learning schemes, the proposal demonstrates its efficiency and achieves a good tradeoff between delay and collection ratio.
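As a rough illustration of the idea summarized in the abstract, the sketch below shows a plain Q-learning update in which the reward combines an aggregation term with a delay penalty, applied to the choice of the next collector among a vehicle's neighbours. The weights, the epsilon-greedy selection, and all names (`reward`, `q_update`, `choose_next_hop`, `W_AGG`, `W_DELAY`) are assumptions made for illustration; they are not taken from the paper's actual formulation.

```python
import random
from collections import defaultdict

# Hypothetical constants; the paper's actual weights and value ranges are not given here.
ALPHA = 0.5      # learning rate
GAMMA = 0.8      # discount factor
W_AGG = 1.0      # weight of the aggregation term in the reward
W_DELAY = 1.0    # weight of the delay penalty in the reward

# Q[state][action]: state = current collector vehicle, action = candidate next hop.
Q = defaultdict(lambda: defaultdict(float))

def reward(n_aggregatable, delay):
    """Illustrative reward: favour neighbours holding many aggregatable
    packets and penalise expected forwarding delay (both assumed normalised)."""
    return W_AGG * n_aggregatable - W_DELAY * delay

def q_update(state, action, r, next_state, next_actions):
    """Standard Q-learning update applied to the relay-selection decision."""
    best_next = max((Q[next_state][a] for a in next_actions), default=0.0)
    Q[state][action] += ALPHA * (r + GAMMA * best_next - Q[state][action])

def choose_next_hop(state, neighbours, epsilon=0.1):
    """Epsilon-greedy selection of the next collector among current neighbours."""
    if not neighbours:
        return None
    if random.random() < epsilon:
        return random.choice(neighbours)
    return max(neighbours, key=lambda n: Q[state][n])
```

Each vehicle would maintain its own Q-table and update it from local observations (delay experienced, packets aggregated), which keeps the scheme distributed, in line with the abstract's description.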

Citation (APA)

Soua, A., & Afifi, H. (2014). ADOPEL: Adaptive data collection protocol using reinforcement learning for VANETs. Journal of Computer Science, 10(11), 2182–2193. https://doi.org/10.3844/jcssp.2014.2182.2193
