Flexibility of Emulation Learning from Pioneers in Nonstationary Environments

Abstract

This article is an extended version of a paper selected from JSAI2019. Social learning is crucial to the acquisition of intelligent behaviors in humans and many kinds of animals, as it makes behavior learning far more efficient than pure trial-and-error. In imitation learning, a representative form of social learning, the agent observes specific action-state pair sequences produced by another agent (the expert) and reflects them in its own actions. One implementation of this idea in reinforcement learning is inverse reinforcement learning. We propose another form of social learning, emulation learning, which requires much less information from the other agent (the pioneer). In emulation learning, the agent is given only a certain level of achievement by the other agent, i.e., a record. In this study, we implement emulation learning in the reinforcement learning setting by applying a model of satisficing action policy. We show that the emulation learning algorithm works well in both stationary and non-stationary reinforcement learning tasks, breaking the often-observed trade-off-like relationship between efficiency and flexibility.
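
Since the abstract only sketches the mechanism, the minimal bandit example below illustrates one reading of a satisficing policy driven by a pioneer's record: the record serves as an aspiration level, and the agent exploits an option only once its estimated value reaches that level. The class name, the aspiration value, and the exploration rule are illustrative assumptions, not the paper's actual algorithm.

import numpy as np

# Hypothetical illustration only: a satisficing bandit agent whose
# aspiration level is set from a pioneer's "record" (the achievement
# level described in the abstract). Not the paper's exact model.

class SatisficingBanditAgent:
    def __init__(self, n_arms, aspiration):
        self.q = np.zeros(n_arms)      # running reward estimate per arm
        self.n = np.zeros(n_arms)      # pull count per arm
        self.aspiration = aspiration   # pioneer's record, assumed given

    def act(self):
        # One simple satisficing rule: once some arm's estimate reaches
        # the aspiration level, exploit it; until then, keep sampling
        # the least-tried arm to gather information.
        if self.q.max() >= self.aspiration:
            return int(self.q.argmax())
        return int(self.n.argmin())

    def update(self, arm, reward):
        # Incremental sample-average update of the chosen arm's estimate.
        self.n[arm] += 1
        self.q[arm] += (reward - self.q[arm]) / self.n[arm]


if __name__ == "__main__":
    true_means = np.array([0.2, 0.5, 0.8])   # hypothetical Bernoulli arms
    rng = np.random.default_rng(0)
    agent = SatisficingBanditAgent(3, aspiration=0.7)  # record = 0.7
    for _ in range(2000):
        arm = agent.act()
        reward = float(rng.random() < true_means[arm])
        agent.update(arm, reward)
    print("estimated values:", agent.q.round(2))
    print("pull counts:", agent.n.astype(int))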

Citation (APA)

Shinriki, M., Wakabayashi, H., Kono, Y., & Takahashi, T. (2020). Flexibility of Emulation Learning from Pioneers in Nonstationary Environments. In Advances in Intelligent Systems and Computing (Vol. 1128 AISC, pp. 90–101). Springer. https://doi.org/10.1007/978-3-030-39878-1_9
