Issues on aligning the meaning of symbols in multiagent systems

Abstract

The autonomy of a multiagent system with respect to its external environment can be greatly extended through the incorporation of a language emergence mechanism. In such a system the population of agents autonomously learns, adapts and optimizes its semantics to the available perception mechanisms and the external environment, i.e. it dynamically adapts the language in use to suit the shape of the external world, the assumed perception mechanism and intra-population interactions. For instance, the symbols in use should denote only directly observable states of the external world, as otherwise the symbols have no meaning to the agents. Further, the representation of a language sign denoting a certain meaning can be adapted to suit the demands of communication, e.g. by lowering energy utilization: shorter signs should denote more frequent symbols. Additionally, the proposed approach to language emergence is applied in the area of tagging systems, where it helps to solve and automate several problems. © 2009 Springer-Verlag Berlin Heidelberg.
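The frequency-based economy described above (shorter signs for more frequent symbols) can be sketched in a few lines. This is a hedged illustration, not the authors' algorithm: the `assign_signs` function, the two-letter alphabet and the example frequencies are all assumptions made for the sketch.

```python
from itertools import count, product

def assign_signs(symbol_freqs, alphabet="ab"):
    """Map each symbol to a sign, giving shorter signs to more
    frequent symbols (illustrative sketch, not the paper's method)."""
    def signs():
        # Enumerate candidate signs in order of increasing length:
        # a, b, aa, ab, ba, bb, aaa, ...
        for length in count(1):
            for combo in product(alphabet, repeat=length):
                yield "".join(combo)
    # Most frequent symbol first, so it receives the shortest sign.
    ordered = sorted(symbol_freqs, key=symbol_freqs.get, reverse=True)
    gen = signs()
    return {symbol: next(gen) for symbol in ordered}

# Hypothetical symbol frequencies for three world states.
lexicon = assign_signs({"food": 50, "danger": 30, "shelter": 5})
```

Here the rarely communicated symbol `shelter` receives a longer sign than the frequent `food`, so the expected sign length per utterance (communication cost) is minimized, in the spirit of the energy argument above.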

Citation (APA)

Lorkiewicz, W., & Katarzyniak, R. P. (2009). Issues on aligning the meaning of symbols in multiagent systems. In Studies in Computational Intelligence (Vol. 244, pp. 217–229). https://doi.org/10.1007/978-3-642-03958-4_19
