Exploration strategies for learned probabilities in smart terrain


Abstract

Consider a mobile agent (such as a robot) surrounded by objects that may or may not meet its needs. An important goal of such an agent is to learn the probabilities that different types of objects meet its needs, based on objects it has previously explored. This requires a rational strategy for determining which objects to explore next, based on distances to objects, the prevalence of similar objects, and the amount of information the agent expects to gain. We define information gain in terms of how additional examples increase the certainty of the probabilities (represented as beta distributions), and in terms of how that certainty reduces future travel time by preventing the agent from moving to objects that do not actually meet its needs. This is used to create a smart terrain-based influence map in which objects send signals proportional to their information gain (with inverse falloff over distance), enabling simple agent navigation to those objects. © 2011 Springer-Verlag.
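To make the abstract's quantities concrete, the sketch below is a minimal illustration, not the paper's implementation. It tracks each object type's probability of meeting a need as a Beta(α, β) distribution, approximates information gain as the expected reduction in that distribution's variance from one more explored example (the paper instead grounds gain in expected travel-time savings), and has each unexplored object broadcast a signal proportional to its type's gain with inverse falloff over distance. All names, the Beta(1, 1) prior, and the variance-based gain measure are illustrative assumptions.

```python
class ObjectType:
    """Tracks a Beta distribution over the probability that objects of
    this type meet the agent's need, updated from explored examples."""

    def __init__(self) -> None:
        # Beta(1, 1) prior: uniform uncertainty before any exploration.
        self.alpha = 1.0
        self.beta = 1.0

    def observe(self, met_need: bool) -> None:
        """Fold one explored example into the distribution."""
        if met_need:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def variance(self) -> float:
        """Variance of Beta(alpha, beta), used here as the uncertainty measure."""
        a, b = self.alpha, self.beta
        return (a * b) / ((a + b) ** 2 * (a + b + 1.0))

    def expected_information_gain(self) -> float:
        """Expected drop in variance from exploring one more object of this
        type, averaging the two outcomes weighted by the current mean
        probability (a variance-based stand-in for the paper's
        travel-time-based definition of gain)."""
        a, b = self.alpha, self.beta
        p = a / (a + b)  # current estimate that an object meets the need
        var_if_success = (a + 1.0) * b / ((a + b + 1.0) ** 2 * (a + b + 2.0))
        var_if_failure = a * (b + 1.0) / ((a + b + 1.0) ** 2 * (a + b + 2.0))
        expected_posterior_var = p * var_if_success + (1.0 - p) * var_if_failure
        return self.variance() - expected_posterior_var


def signal_strength(obj_type: ObjectType, distance: float) -> float:
    """Smart-terrain signal an unexplored object broadcasts: proportional to
    its type's expected information gain, with inverse falloff over distance,
    so the agent can navigate by simply following the strongest signal."""
    return obj_type.expected_information_gain() / max(distance, 1e-9)


# Example: after one positive and one negative example of a type, the
# agent prefers the nearer of two unexplored objects of that type.
if __name__ == "__main__":
    doors = ObjectType()
    doors.observe(True)   # an explored door met the need
    doors.observe(False)  # another explored door did not
    near, far = signal_strength(doors, 4.0), signal_strength(doors, 9.0)
    assert near > far
    print(f"signal at d=4: {near:.5f}, at d=9: {far:.5f}")
```

Note that because distance only attenuates the signal rather than gating it, a distant object of a poorly understood type can still outcompete a nearby object of a well-understood type, which is the influence-map behavior the abstract describes.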


Cite (APA)

Sullins, J. (2011). Exploration strategies for learned probabilities in smart terrain. In Lecture Notes in Computer Science: Vol. 6871 LNAI (pp. 224–238). Springer. https://doi.org/10.1007/978-3-642-23199-5_17
