Self-reconfigurable (SR) robots have been developed with the promise of enabling a wide range of abilities within a single hardware system. Particularly in the context of locomotion, SR robots may be able to traverse a wider range of environments than traditional legged or wheeled robots. Additionally, with a large number of modules, the system may be able to divide to explore areas in parallel and recombine to form larger, more capable groups. However, the question of how to divide and merge these "reconfigurable teams" to most effectively solve tasks such as exploration of unknown terrain remains open. This exploration problem can be seen as a superset of traditional multi-robot exploration: robots must not only choose places to visit but may also coordinate to dynamically adjust the number of entities in the team. In this paper, we present a state-based distributed control algorithm for entities within a reconfigurable team that uses local sensory information and a team model to choose actions within the environment. We present empirical results from simulation demonstrating that the best choice of strategy may depend on the nature of the environment.
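To make the abstract's description concrete, the following is a minimal, hypothetical sketch of what such a state-based decision rule might look like. The names (`Action`, `TeamModel`, `choose_action`), the frontier-count sensor input, and the specific thresholds are all illustrative assumptions, not the authors' actual algorithm:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    """Possible actions for one entity in a reconfigurable team (assumed set)."""
    EXPLORE = auto()
    SPLIT = auto()
    MERGE = auto()

@dataclass
class TeamModel:
    """A toy stand-in for the paper's team model (fields are assumptions)."""
    size: int       # modules currently in this entity
    min_size: int   # smallest viable entity after a split

def choose_action(open_frontiers: int, team: TeamModel) -> Action:
    """Pick the next action from local sensing and the team model.

    open_frontiers: number of unexplored branches sensed locally
                    (an assumed proxy for 'local sensory information').
    """
    if open_frontiers > 1 and team.size >= 2 * team.min_size:
        return Action.SPLIT   # enough modules to cover branches in parallel
    if open_frontiers == 0:
        return Action.MERGE   # dead end: rejoin a larger, more capable group
    return Action.EXPLORE     # a single path ahead: keep moving
```

The point of the sketch is only the structure: each entity decides locally, with no global coordinator, whether to divide, merge, or continue, which is the reconfigurable-team behavior the abstract describes.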
Citation:
Butler, Z., & Fabricant, E. (2009). Reconfigurable teams: Cooperative goal seeking with self-reconfigurable robots. In Distributed Autonomous Robotic Systems 8 (pp. 417–428). Springer Publishing Company. https://doi.org/10.1007/978-3-642-00644-9_37