Phe-Q: A pheromone based Q-Learning

Abstract

Biological systems have often provided inspiration for the design of artificial systems. One such example of a natural system that has inspired researchers is the ant colony. In this paper an algorithm for multi-agent reinforcement learning, a modified Q-learning, is proposed. The algorithm is inspired by the natural behaviour of ants, which deposit pheromones in the environment to communicate. Beyond simulating the behaviour of an ant colony, the aim is to design complex multi-agent systems in which complex behaviour emerges from relatively simple interacting agents. The proposed Q-learning update equation includes a belief factor, which reflects the confidence an agent has in the pheromone it detects in its environment. Agents communicate implicitly, co-operating to learn the solution of a path-planning problem. The results indicate that combining synthetic pheromone with standard Q-learning speeds up the learning process. It is also shown that agents can be biased towards a preferred solution by adjusting the pheromone deposit and evaporation rates.
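
The abstract does not reproduce the update equation itself; the sketch below is only a rough Python illustration of the idea, assuming a small deterministic grid world, a per-state pheromone table, and a belief factor taken as the normalised pheromone concentration over successor states. All names and constants here (phe_q_update, belief, successor_of, XI, DEPOSIT, RHO) are hypothetical and not taken from the paper.

    import numpy as np

    # Illustrative constants; names and values are assumptions, not the
    # parameters reported in the paper.
    ALPHA, GAMMA = 0.1, 0.9    # Q-learning rate and discount factor
    XI = 0.5                   # weight of the pheromone belief term
    DEPOSIT, RHO = 1.0, 0.05   # pheromone deposit per visit, evaporation rate

    N_STATES, N_ACTIONS = 25, 4          # e.g. a 5x5 grid world
    Q = np.zeros((N_STATES, N_ACTIONS))  # action-value table
    phi = np.zeros(N_STATES)             # synthetic pheromone per state

    def belief(s, successor_of):
        """Belief factor per action: pheromone in each successor state of s,
        normalised by the largest such value (zero before any deposit)."""
        p = np.array([phi[successor_of(s, a)] for a in range(N_ACTIONS)])
        m = p.max()
        return p / m if m > 0 else p

    def phe_q_update(s, a, r, s_next, successor_of):
        """One backup: standard Q-learning with the belief factor added
        inside the max over next-state actions."""
        target = r + GAMMA * np.max(Q[s_next] + XI * belief(s_next, successor_of))
        Q[s, a] += ALPHA * (target - Q[s, a])
        phi[s] += DEPOSIT            # the agent marks the state it just left
        phi[:] = phi * (1.0 - RHO)   # pheromone evaporates everywhere

    # Purely illustrative usage on a 1-D corridor where action 0 moves left,
    # action 1 moves right, and other actions stay put.
    def successor_of(s, a):
        if a == 0:
            return max(0, s - 1)
        if a == 1:
            return min(N_STATES - 1, s + 1)
        return s

    phe_q_update(s=0, a=1, r=0.0, s_next=1, successor_of=successor_of)

Consistent with the abstract's final claim, raising DEPOSIT or lowering RHO in a sketch like this strengthens the pheromone signal, biasing agents more strongly towards previously travelled paths.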

Citation (APA)
Monekosso, N., & Remagnino, P. (2001). Phe-Q: A pheromone based Q-Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2256, pp. 345–355). Springer Verlag. https://doi.org/10.1007/3-540-45656-2_30
