Deep Feedback Learning

Abstract

An agent acting in an environment aims to minimise uncertainty, so that attacks can be predicted and rewards are not found merely by chance. These events define an error signal which can be used to improve performance. In this paper we present a new algorithm in which an error signal from a reflex trains a novel deep network: the error is propagated forwards through the network, from its input to its output, in order to generate pro-active actions. We demonstrate the algorithm in two scenarios, a first-person shooter game and a car-driving task, and in both cases the network develops strategies to become pro-active.
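To make the forward error propagation described above more concrete, the sketch below shows one way a scalar reflex error could be injected at the input side of a layered network and carried forwards, layer by layer, alongside the activations. This is a minimal illustrative sketch only: the layer sizes, tanh non-linearity, learning rate, Hebbian-style outer-product update and toy reflex signal are all assumptions made for illustration, not the exact rule used by Porr & Miller (2018).

```python
import numpy as np

# Minimal sketch (assumptions throughout): a small layered network in which a
# scalar "reflex" error enters at the input and is propagated forwards through
# the same weights as the activations. Each layer then applies a local,
# Hebbian-style weight update correlating the forward-propagated error with
# its input activity.

rng = np.random.default_rng(0)
layer_sizes = [4, 6, 6, 2]          # sensors -> hidden -> hidden -> actions (assumed)
weights = [rng.normal(scale=0.1, size=(n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
eta = 0.01                          # learning rate (assumed)

def step(sensor_input, reflex_error):
    """One closed-loop step: activations and error travel in the same direction."""
    activation = sensor_input
    error = np.full(layer_sizes[0], reflex_error)   # inject reflex error at the input
    for i, W in enumerate(weights):
        new_activation = np.tanh(W @ activation)
        new_error = W @ error                       # error propagated *forwards*
        # Local update: correlate the forward-propagated error with the
        # layer's input activity (Hebbian-style, assumed for illustration).
        weights[i] += eta * np.outer(new_error, activation)
        activation, error = new_activation, new_error
    return activation                               # pro-active motor output

# Toy closed-loop run: the reflex error here is a stand-in signal; in the
# scenarios of the paper it would come from the agent's reflex (e.g. being
# hit or leaving the road).
for t in range(100):
    sensors = rng.normal(size=layer_sizes[0])
    reflex = float(np.sin(0.1 * t))
    action = step(sensors, reflex)
```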

Citation (APA)
Porr, B., & Miller, P. (2018). Deep Feedback Learning. In Lecture Notes in Computer Science (Vol. 10994 LNAI, pp. 189–200). Springer. https://doi.org/10.1007/978-3-319-97628-0_16
