Yerkes-Dodson Law in agents' training

Abstract

The well-known Yerkes-Dodson Law (YDL) states that stimulation of medium intensity produces the fastest learning. Experimenters have mostly explained the YDL by the sequential action of two different processes. We show that the YDL can be elucidated even with a model as simple as a nonlinear single-layer perceptron trained by gradient descent, in which the difference between the desired output values is associated with stimulation strength. The nonlinear nature of the curves "number of iterations as a function of stimulation" is caused by the smoothly bounded nonlinearity of the perceptron's activation function and by the difference in desired outputs. © Springer-Verlag Berlin Heidelberg 2003.
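The saturation side of this mechanism can be illustrated with a minimal sketch (not the authors' exact experiment): a single sigmoid unit trained by batch gradient descent on two prototype inputs, where the gap between the two desired outputs stands in for stimulation strength. The learning rate, stopping tolerance, and prototype values below are arbitrary assumptions made for the illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def iterations_to_learn(stim, lr=0.5, eps=1e-4, max_iter=100_000):
    """Train a single sigmoid unit by gradient descent on two prototypes
    x = -1 and x = +1 whose desired outputs are 0.5 -/+ stim, so that
    2*stim is the gap between desired outputs ("stimulation strength").
    Returns the number of iterations until the mean squared error
    between outputs and desired outputs falls below eps."""
    X = np.array([-1.0, 1.0])
    T = np.array([0.5 - stim, 0.5 + stim])
    w = 0.0                                         # single weight, no bias
    for it in range(1, max_iter + 1):
        y = sigmoid(w * X)
        err = y - T
        if np.mean(err ** 2) < eps:
            return it
        # delta rule: error times the sigmoid derivative y * (1 - y)
        w -= lr * np.dot(err * y * (1.0 - y), X)
    return max_iter

for stim in (0.05, 0.15, 0.25, 0.35, 0.45):
    print(f"output gap {2 * stim:.1f}: {iterations_to_learn(stim)} iterations")
```

As the demanded outputs approach the saturation limits of the sigmoid (gap near 1), the sigmoid derivative in the weight update shrinks and the iteration count grows steeply, which is the bounded-nonlinearity effect the abstract names. In the paper's fuller setting the weak-stimulation side slows down as well, yielding the inverted-U; this sketch isolates only the saturation arm.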

Citation (APA)

Raudys, Š., & Justickis, V. (2003). Yerkes-Dodson Law in agents’ training. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2902, 54–58. https://doi.org/10.1007/978-3-540-24580-3_13
