Hidden Markov modeling of multi-agent systems and its learning method

Abstract

Two frameworks of hidden Markov modeling for multi-agent systems, together with their learning procedures, are proposed. Although several variations of HMMs have been proposed to model agents and their interactions, these models do not handle changes in the environment, which makes it difficult to simulate the behavior of agents acting in dynamic environments such as soccer. The proposed frameworks enable HMMs to represent the environment directly inside the state transitions. I first propose a model that handles the dynamics of the environment within the same state transition as the agent itself. In this model, the derived learning procedure can segment the environment according to the tasks and behaviors the agent is performing. I also investigate a more structured model in which the dynamics of the environment and of the agents are treated as separate state transitions that are coupled with each other. For this model, in order to reduce the number of parameters, I introduce "symmetricity" among agents. Furthermore, I discuss the relation between reducing dependencies in transitions and the assumption of cooperative behavior among multiple agents. © Springer-Verlag Berlin Heidelberg 2003.
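The second framework described in the abstract keeps the environment and the agents as separate but coupled Markov chains, with "symmetricity" meaning that all agents reuse the same transition parameters. The following is a minimal illustrative sketch of that idea only, not the paper's model or learning procedure; the state-space sizes, the dependency structure, and the sampling scheme are all assumptions made for the example.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation):
# environment and agent dynamics are separate, coupled Markov chains, and
# "symmetricity" is expressed by sharing one transition table across agents.
import numpy as np

rng = np.random.default_rng(0)

N_ENV, N_AGENT_STATES, N_AGENTS, T = 3, 4, 2, 10


def random_stochastic(shape):
    """Random conditional distribution over the last axis."""
    m = rng.random(shape)
    return m / m.sum(axis=-1, keepdims=True)


# Environment transition conditioned only on its own previous state
# (the real dependency structure could be richer; this keeps the sketch small).
P_env = random_stochastic((N_ENV, N_ENV))

# One shared agent transition table: next agent state given the agent's own
# previous state and the current environment state. Reusing these parameters
# for every agent is the parameter reduction gained from symmetricity.
P_agent = random_stochastic((N_AGENT_STATES, N_ENV, N_AGENT_STATES))

env = rng.integers(N_ENV)
agents = rng.integers(N_AGENT_STATES, size=N_AGENTS)

for t in range(T):
    env = rng.choice(N_ENV, p=P_env[env])
    agents = np.array(
        [rng.choice(N_AGENT_STATES, p=P_agent[a, env]) for a in agents]
    )
    print(f"t={t}: env={env}, agents={agents.tolist()}")
```

In this sketch the coupling is one-directional (agents depend on the environment but not vice versa); reducing or adding such dependencies is exactly the kind of modeling choice the abstract relates to assumptions about cooperative behavior.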

Cite

APA

Noda, I. (2003). Hidden Markov modeling of multi-agent systems and its learning method. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 2752, pp. 94–110). Springer Verlag. https://doi.org/10.1007/978-3-540-45135-8_8
