The well-known Yerkes-Dodson Law (YDL) states that stimulation of medium intensity produces the fastest learning. Most experimenters have explained the YDL by the sequential action of two different processes. We show that the YDL can be elucidated even with a model as simple as a nonlinear single-layer perceptron trained by gradient descent, where the difference between the desired output values is associated with the stimulation strength. The nonlinear shape of the curves "number of iterations as a function of stimulation" is caused by the smoothly bounded nonlinearity of the perceptron's activation function and by the difference in desired outputs. © Springer-Verlag Berlin Heidelberg 2003.
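The setup described in the abstract can be sketched in a few lines. The following is a hedged toy illustration, not the authors' code: the learning rate, tolerance, initialization, and two-stimulus data are assumptions. A single sigmoid unit is trained by gradient descent toward desired outputs 0.5 ± gap/2, with the target gap playing the role of stimulation strength; the sketch mainly exhibits the saturation-driven slowdown when targets sit near the bounded ends of the activation function, one of the two effects behind the nonlinear training-time curve.

```python
import numpy as np

def iterations_to_learn(gap, lr=0.5, tol=1e-4, max_iter=20000):
    """Gradient-descent training of a single sigmoid unit on two stimuli.

    Desired outputs are 0.5 -/+ gap/2; `gap` stands in for the
    stimulation strength. Returns the number of iterations until the
    mean squared error drops below `tol` (capped at `max_iter`).
    """
    x = np.array([-1.0, 1.0])                     # two input stimuli
    t = np.array([0.5 - gap / 2, 0.5 + gap / 2])  # desired outputs
    w, b = 0.1, 0.0                               # small deterministic init
    for it in range(1, max_iter + 1):
        y = 1.0 / (1.0 + np.exp(-(w * x + b)))    # sigmoid activations
        err = y - t
        if np.mean(err ** 2) < tol:
            return it
        grad = 2 * err * y * (1.0 - y)            # dLoss/d(net input)
        w -= lr * np.dot(grad, x)
        b -= lr * np.sum(grad)
    return max_iter

# Iteration counts grow sharply as the targets approach the saturated
# ends of the sigmoid (strong "stimulation"):
for gap in (0.2, 0.6, 1.0):
    print(gap, iterations_to_learn(gap))
```

With targets at the extremes (gap = 1.0, i.e. desired outputs 0 and 1) the gradient factor y(1 − y) vanishes near the goal, so convergence to the same tolerance takes orders of magnitude longer than for intermediate targets. The other arm of the curve, slow learning under weak stimulation, depends on the stopping criterion used in the paper and is not reproduced by this sketch.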
Raudys, Š., & Justickis, V. (2003). Yerkes-Dodson Law in agents’ training. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2902, 54–58. https://doi.org/10.1007/978-3-540-24580-3_13