We present a heuristic function evaluation framework that makes it possible to quickly compare a heuristic function's output to benchmark values precomputed for a subset of the game's state space. Our framework drastically reduces the time needed to evaluate a heuristic function while also providing insight into where the heuristic performs well or below par. We analyze the feasibility of using Monte-Carlo Tree Search (MCTS) to compute benchmark values instead of relying on game-theoretic values, which are hard to obtain in many cases. We also propose several metrics for comparing heuristic evaluations to benchmark values and discuss the feasibility of using MCTS benchmarks with those metrics.
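The comparison of a heuristic's outputs against precomputed benchmark values could, for instance, use both an error metric and a rank-based metric, since a heuristic may be numerically biased yet still order states correctly. The sketch below illustrates two such metrics (mean absolute error and Spearman rank correlation) over a sampled set of states; these particular metric choices, function names, and example values are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not the paper's actual metrics): comparing a
# heuristic's outputs to benchmark values on a sampled subset of states.

def mean_absolute_error(heuristic_values, benchmark_values):
    """Average absolute deviation of the heuristic from the benchmark."""
    n = len(benchmark_values)
    return sum(abs(h - b)
               for h, b in zip(heuristic_values, benchmark_values)) / n

def spearman_rho(xs, ys):
    """Spearman rank correlation (no tie correction, for illustration)."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical data: the heuristic ranks states exactly like the
# benchmark but is biased upward by a constant offset.
benchmark = [0.1, 0.4, 0.5, 0.9]
heuristic = [0.2, 0.5, 0.6, 1.0]
print(mean_absolute_error(heuristic, benchmark))  # approximately 0.1
print(spearman_rho(heuristic, benchmark))         # 1.0 (identical ranking)
```

A rank metric like Spearman's rho is attractive here because a heuristic used only to order moves in search can tolerate systematic bias, which an absolute-error metric would penalize.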
CITATION STYLE
Nešić, N., & Schiffel, S. (2016). Heuristic function evaluation framework. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10068 LNCS, pp. 71–80). Springer Verlag. https://doi.org/10.1007/978-3-319-50935-8_7