Evolving artificial language through evolutionary reinforcement learning


Abstract

Computational simulation of language evolution provides valuable insights into the origin of language. Simulating the evolution of language among agents in an artificial world also presents an interesting challenge in evolutionary computation and machine learning. In this paper, a “jungle world” is constructed where agents must accomplish different tasks such as hunting and mating by evolving their own language to coordinate their actions. In addition, all agents must acquire the language during their lifetime through interaction with other agents. This paper proposes Evolutionary Reinforcement Learning with Potentiation and Memory (ERL-POM) as a computational approach to this goal. Experimental results show that ERL-POM is effective in situated simulation of language evolution, demonstrating that languages can be evolved in the artificial environment when communication is necessary for some or all of the tasks the agents perform.
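The abstract names the approach but does not detail ERL-POM's mechanics. As a rough illustration of the general idea it describes (innate communication weights evolved across generations, further shaped within each agent's lifetime by rewarded interaction), a minimal sketch follows. Everything in it is assumed for illustration, including the toy referential task, parameter names such as N_MEANINGS and LEARN_RATE, and the update rules; it is not the paper's algorithm or environment.

# Minimal illustrative sketch only: not the authors' ERL-POM or jungle world.
# All task details, names, and parameters below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_MEANINGS = 5      # distinct situations agents must coordinate on (assumed)
N_SIGNALS = 5       # signal vocabulary available to the agents (assumed)
POP_SIZE = 40
GENERATIONS = 60
EPISODES = 200      # interactions per lifetime evaluation
LEARN_RATE = 0.2    # strength of within-lifetime reinforcement of used associations

def new_genome():
    """Innate association weights: speaker (meaning -> signal), listener (signal -> action)."""
    return {
        "speak": rng.normal(0, 0.1, (N_MEANINGS, N_SIGNALS)),
        "listen": rng.normal(0, 0.1, (N_SIGNALS, N_MEANINGS)),
    }

def softmax_sample(weights):
    """Sample an index with probability proportional to exp(weight)."""
    p = np.exp(weights - weights.max())
    p /= p.sum()
    return rng.choice(len(p), p=p)

def evaluate(speaker, listener):
    """One lifetime of interactions; reward reinforces the associations that were used."""
    sp, li = speaker["speak"].copy(), listener["listen"].copy()
    reward_total = 0.0
    for _ in range(EPISODES):
        meaning = rng.integers(N_MEANINGS)
        signal = softmax_sample(sp[meaning])
        action = softmax_sample(li[signal])
        reward = 1.0 if action == meaning else 0.0
        reward_total += reward
        # lifetime learning: strengthen or weaken the meaning-signal-action links just used
        sp[meaning, signal] += LEARN_RATE * (reward - 0.5)
        li[signal, action] += LEARN_RATE * (reward - 0.5)
    return reward_total / EPISODES

def mutate(genome, sigma=0.05):
    """Gaussian mutation of the innate weights."""
    return {k: v + rng.normal(0, sigma, v.shape) for k, v in genome.items()}

population = [new_genome() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    # pair agents at random; fitness is coordination success, which requires a shared code
    order = rng.permutation(POP_SIZE)
    fitness = np.zeros(POP_SIZE)
    for i in range(0, POP_SIZE, 2):
        a, b = order[i], order[i + 1]
        f = evaluate(population[a], population[b])
        fitness[a] += f
        fitness[b] += f
    # truncation selection on fitness, then mutation to form the next generation
    ranked = np.argsort(fitness)[::-1]
    parents = [population[i] for i in ranked[: POP_SIZE // 2]]
    population = parents + [mutate(p) for p in parents]
    if gen % 10 == 0:
        print(f"generation {gen}: best coordination rate {fitness[ranked[0]]:.2f}")

Under these assumptions, coordination rates typically rise toward 1.0 as a shared meaning-signal mapping emerges; the paper's setting is far richer, with multiple tasks and agents that must acquire the evolved language during their lifetimes.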

Citation (APA)

Li, X., & Miikkulainen, R. (2016). Evolving artificial language through evolutionary reinforcement learning. In Proceedings of the Artificial Life Conference 2016 (ALIFE 2016). MIT Press Journals. https://doi.org/10.7551/978-0-262-33936-0-ch079
