Multiagent Q-Learning for Aloha-Like Spectrum Access in Cognitive Radio Systems

Abstract

An Aloha-like spectrum access scheme without negotiation is considered for multiuser and multichannel cognitive radio systems. To avoid collisions incurred by the lack of coordination, each secondary user learns how to select channels according to its experience. Multiagent reinforcement learning (MARL) is applied for the secondary users to learn good strategies of channel selection. Specifically, the framework of Q-learning is extended from the single-user case to the multiagent case by considering other secondary users as part of the environment. The dynamics of the Q-learning are illustrated using a Metrick-Polak plot, which shows the traces of Q-values in the two-user case. For both the complete and partial observation cases, rigorous proofs of the convergence of multiagent Q-learning without communications, under certain conditions, are provided using the Robbins-Monro algorithm and contraction mapping, respectively. The learning performance (speed and gain in utility) is evaluated by numerical simulations. Copyright © 2010 Husheng Li.
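
The channel-selection idea the abstract describes can be illustrated with a small simulation. The following is a minimal sketch, assuming a stateless Q-value per channel, two secondary users, two idle channels, epsilon-greedy selection, and a unit reward for a collision-free transmission; the parameter values and the simplified reward model are illustrative assumptions, not the paper's exact formulation.

# Minimal sketch: two secondary users independently run stateless Q-learning
# over two channels. Reward is 1 for a collision-free transmission, 0 otherwise.
# Parameter values (ALPHA, EPSILON, ITERATIONS) are illustrative assumptions.
import random

NUM_USERS = 2
NUM_CHANNELS = 2
ALPHA = 0.1      # learning rate (assumed)
EPSILON = 0.1    # exploration probability (assumed)
ITERATIONS = 5000

# Q[i][c]: user i's estimated value of transmitting on channel c.
Q = [[0.0] * NUM_CHANNELS for _ in range(NUM_USERS)]

def choose_channel(q_row):
    """Epsilon-greedy channel selection from one user's Q-values."""
    if random.random() < EPSILON:
        return random.randrange(NUM_CHANNELS)
    best = max(q_row)
    return random.choice([c for c, v in enumerate(q_row) if v == best])

for t in range(ITERATIONS):
    # Each user picks a channel without any negotiation or message exchange.
    choices = [choose_channel(Q[i]) for i in range(NUM_USERS)]
    for i, c in enumerate(choices):
        # Aloha-like outcome: success only if no other user picked the same channel.
        reward = 1.0 if choices.count(c) == 1 else 0.0
        Q[i][c] += ALPHA * (reward - Q[i][c])

print("Learned Q-values per user:", Q)

In this toy setup the two users typically settle on different channels, which is the kind of orthogonal assignment whose Q-value traces the Metrick-Polak plot in the paper depicts; the paper's convergence proofs cover the general multiuser, multichannel case.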

Citation (APA)

Li, H. (2010). Multiagent Q-learning for Aloha-like spectrum access in cognitive radio systems. EURASIP Journal on Wireless Communications and Networking, 2010. https://doi.org/10.1155/2010/876216
