Game environment exploration using curiosity-driven learning

ISSN: 2277-3878

Abstract

Reinforcement learning (RL) has emerged as a preferred methodology for training agents to perform complex tasks. In many real-world settings, rewards extrinsic to the agent are sparse or absent altogether. In such cases, curiosity can act as an intrinsic reward signal that drives the agent to explore its surroundings and learn new skills that may be useful later in its life. The idea of curiosity-driven learning is to construct a reward function that is intrinsic to the agent (generated by the agent itself), so that the agent serves as both the student and the source of its own feedback. An agent learns quickly when each of its actions yields a reward, since it receives immediate feedback. Concretely, curiosity is an intrinsic reward equal to the agent's error in predicting the consequence of its own actions given its current state (i.e., predicting the next state given the current state and the action taken). We demonstrate our results in a 3D simulated virtual environment.
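As a rough illustration of the reward described in the abstract (a sketch in the spirit of curiosity-driven exploration, not the authors' actual implementation), the following PyTorch snippet computes a curiosity bonus as the prediction error of a learned forward-dynamics model. The names `ForwardModel` and `curiosity_reward`, the network sizes, and the choice to predict raw states rather than learned feature embeddings are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Learned forward dynamics: predict the next state from (state, action)."""

    def __init__(self, state_dim: int, action_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, state_dim),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))

def curiosity_reward(model: ForwardModel,
                     state: torch.Tensor,
                     action: torch.Tensor,
                     next_state: torch.Tensor) -> torch.Tensor:
    """Intrinsic reward: the forward model's prediction error on a transition.

    Transitions the model predicts poorly are 'surprising', so they earn a
    larger curiosity bonus, pushing the agent toward unexplored dynamics.
    """
    with torch.no_grad():
        predicted_next = model(state, action)
    return 0.5 * (predicted_next - next_state).pow(2).mean(dim=-1)

# Toy usage: a batch of 4 transitions with 8-dim states and 2 discrete actions.
model = ForwardModel(state_dim=8, action_dim=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

state = torch.randn(4, 8)
action = torch.eye(2)[torch.randint(0, 2, (4,))]  # one-hot encoded actions
next_state = torch.randn(4, 8)

r_intrinsic = curiosity_reward(model, state, action, next_state)

# Train the forward model on the same transitions, so familiar parts of the
# environment gradually stop being rewarding.
loss = 0.5 * (model(state, action) - next_state).pow(2).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the forward model is trained alongside the policy, states the agent visits often become predictable and lose their bonus, while poorly modelled regions keep paying out, which is what sustains exploration when extrinsic rewards are sparse.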

Cite

APA

Harihara Sudhan, N., Shriram, S., Anand, M., & Sujeetha, R. (2019). Game environment exploration using curiosity-driven learning. International Journal of Recent Technology and Engineering, 7(6), 715–718.
