Reactive Search Optimization advocates the adoption of learning mechanisms as an integral part of a heuristic optimization scheme. This work studies reinforcement learning methods for the online tuning of parameters in stochastic local search algorithms. In particular, the reactive tuning is obtained by learning a (near-)optimal policy in a Markov decision process whose states summarize relevant information about the recent history of the search. The learning process is performed by the Least Squares Policy Iteration (LSPI) method. The proposed framework is applied to tuning the prohibition value in Reactive Tabu Search, the noise parameter in Adaptive Walksat, and the smoothing probability in the Reactive Scaling and Probabilistic Smoothing (RSAPS) algorithm. The novel approach is experimentally compared with the original ad hoc reactive schemes.
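The abstract's core idea can be illustrated with a minimal sketch of LSPI selecting a parameter value per state. The state discretization, one-hot features, candidate parameter values, and synthetic experience tuples below are illustrative assumptions, not the authors' actual setup; the LSTD-Q evaluation step (solving A w = b) and the greedy policy-improvement loop follow the standard LSPI scheme.

```python
import numpy as np

# Hedged sketch: LSPI for online parameter tuning. All constants below
# (number of states, candidate actions, reward model) are hypothetical.

rng = np.random.default_rng(0)

N_STATES = 4                 # coarse bins summarizing recent search history (assumed)
ACTIONS = [0.1, 0.3, 0.5]    # candidate values of a tunable parameter, e.g. Walksat noise
N_ACTIONS = len(ACTIONS)
GAMMA = 0.9                  # discount factor

def phi(s, a):
    """One-hot state-action feature vector."""
    f = np.zeros(N_STATES * N_ACTIONS)
    f[s * N_ACTIONS + a] = 1.0
    return f

# Synthetic experience tuples (s, a, r, s'); here the middle action is best.
samples = []
for _ in range(500):
    s = rng.integers(N_STATES)
    a = rng.integers(N_ACTIONS)
    r = 1.0 if a == 1 else 0.0
    s2 = rng.integers(N_STATES)
    samples.append((s, a, r, s2))

def lstdq(samples, policy):
    """LSTD-Q policy evaluation: solve A w = b for the Q-function weights."""
    k = N_STATES * N_ACTIONS
    A = 0.01 * np.eye(k)     # small ridge term keeps A invertible
    b = np.zeros(k)
    for s, a, r, s2 in samples:
        f = phi(s, a)
        f2 = phi(s2, policy[s2])
        A += np.outer(f, f - GAMMA * f2)
        b += r * f
    return np.linalg.solve(A, b)

# Policy iteration: alternate LSTD-Q evaluation and greedy improvement.
policy = np.zeros(N_STATES, dtype=int)
for _ in range(10):
    w = lstdq(samples, policy)
    q = w.reshape(N_STATES, N_ACTIONS)
    new_policy = q.argmax(axis=1)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

# Learned parameter value to use in each search-history state.
print([ACTIONS[a] for a in policy])
```

The same loop applies unchanged to any of the three tuned parameters mentioned above: only the state summary, the candidate action set, and the reward signal change.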
CITATION STYLE
Battiti, R., & Campigotto, P. (2013). An investigation of reinforcement learning for reactive search optimization. In Autonomous Search (Vol. 9783642214349, pp. 131–160). Springer-Verlag Berlin Heidelberg. https://doi.org/10.1007/978-3-642-21434-9_6