Consider a mobile agent (such as a robot) surrounded by objects that may or may not meet its needs. An important goal of such an agent is to learn the probabilities that different types of objects meet its needs, based on objects it has previously explored. This requires a rational strategy for determining which objects to explore next, based on the distances to objects, the prevalence of similar objects, and the amount of information the agent expects to gain. We define information gain in terms of how additional examples increase the certainty of the probabilities (represented as beta distributions), and in terms of how that certainty reduces future travel time by preventing the agent from moving to objects that do not actually meet its needs. This is used to create a smart terrain-based influence map in which objects send signals proportional to their information gain (with inverse falloff over distance), enabling simple agent navigation to those objects. © 2011 Springer-Verlag.
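The core quantities described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it uses the expected reduction in variance of the beta posterior as a stand-in for the paper's travel-time-based information gain, and all function names and the `1e-9` distance floor are assumptions for the sketch.

```python
def beta_variance(alpha, beta):
    # Variance of Beta(alpha, beta): the agent's uncertainty about
    # the probability that this object type meets a need.
    return (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))

def expected_posterior_variance(alpha, beta):
    # Expected variance after exploring one more object of this type,
    # averaging over the two outcomes (meets need / does not).
    p_success = alpha / (alpha + beta)
    return (p_success * beta_variance(alpha + 1, beta)
            + (1 - p_success) * beta_variance(alpha, beta + 1))

def information_gain(alpha, beta):
    # Expected increase in certainty from one more example; by the law
    # of total variance this is always non-negative.
    return beta_variance(alpha, beta) - expected_posterior_variance(alpha, beta)

def signal_strength(alpha, beta, distance):
    # Smart-terrain influence signal: proportional to information gain,
    # with inverse falloff over distance (floored to avoid division by zero).
    return information_gain(alpha, beta) / max(distance, 1e-9)
```

With a uniform prior (alpha = beta = 1), an unexplored object type yields a positive gain, and its signal weakens with distance, so the agent is drawn toward nearby, informative objects.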
CITATION STYLE
Sullins, J. (2011). Exploration strategies for learned probabilities in smart terrain. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6871 LNAI, pp. 224–238). https://doi.org/10.1007/978-3-642-23199-5_17