Action inhibition

Abstract

An explicit exploration strategy is necessary in reinforcement learning (RL) to balance the need to reduce the uncertainty associated with the expected outcome of an action against the need to converge to a solution. This tension is more acute in on-policy reinforcement learning, where exploration directly guides the search for an optimal solution. The need for self-regulating exploration is also manifest in knowledge transfer, where past solutions must be readapted. Tabu search (TS) is an adaptive memory-based exploration method that has been successful in combinatorial optimization problems, systematically exploring the search space and avoiding cycles through action inhibition. Tabu search has also been used successfully in genetic algorithms to maintain diversity and protect against premature convergence. This paper presents an approach to tabu search exploration in reinforcement learning. Experimental results are presented for the discounted, tabular cases of the grid and packet routing problems.
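The core idea in the abstract, inhibiting recently taken actions via a finite tabu memory so the agent cannot cycle, can be sketched in a tabular setting. The sketch below is an illustrative assumption, not the paper's algorithm: the corridor task, the tabu tenure of 3, and the greedy-fallback aspiration rule are all choices made for this example.

```python
import random
from collections import defaultdict, deque

def tabu_select(Q, state, actions, tabu):
    """Greedy action selection with tabu inhibition: skip (state, action)
    pairs held in the tabu memory; if every action is inhibited, fall back
    to plain greedy selection (a simple aspiration criterion)."""
    allowed = [a for a in actions if (state, a) not in tabu]
    if not allowed:
        allowed = list(actions)
    # Random tie-breaking so equal-valued actions are explored evenly.
    best = max(allowed, key=lambda a: (Q[(state, a)], random.random()))
    tabu.append((state, best))  # inhibit this move for the next few steps
    return best

# Toy discounted, tabular task: a 5-state corridor (states 0..4) with a
# reward of 1 on reaching state 4; actions move left (-1) or right (+1).
random.seed(0)
Q = defaultdict(float)
tabu = deque(maxlen=3)  # finite adaptive memory: tabu tenure of 3 moves
alpha, gamma, actions = 0.5, 0.9, (-1, +1)
for episode in range(200):
    s = 0
    tabu.clear()
    for _ in range(20):
        a = tabu_select(Q, s, actions, tabu)
        s2 = min(max(s + a, 0), 4)
        r = 1.0 if s2 == 4 else 0.0
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2
        if s == 4:
            break
```

Because the tabu memory bars the agent from immediately undoing its last few moves, the early (all-zero-Q) phase cannot degenerate into a two-state oscillation, which is exactly the cycle-avoidance role tabu search plays in combinatorial optimization.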

Citation (APA)

Abramson, M. (2004). Action inhibition. In Proceedings of the International Conference on Artificial Intelligence, IC-AI’04 (Vol. 2, pp. 925–931). https://doi.org/10.1007/978-3-540-68706-1_4026
