The computer simulation/mathematical model called DMOD, which can simulate over 35 different phenomena in appetitive discrete-trial and simple free-operant situations, has been extended to include aversive discrete-trial situations. Learning (V) is calculated using the three-parameter equation ΔV = αβ(λ − V) (see Daly & Daly, 1982; Rescorla & Wagner, 1972). The equation is applied to three possible goal events in the appetitive (e.g., food) case and to three in the aversive (e.g., shock) case. The original goal event can be present, absent, or reintroduced; in the appetitive situation, these events condition approach (Vap), avoidance (Vav), and courage (Vcc), respectively. In the aversive situation, the events condition avoidance (Vav*), approach (Vap*), and cowardice (Vcc*), respectively. The model was developed in simple learning situations and subsequently was applied to complex situations. It can account for such diverse phenomena as contrast effects after reward shifts, greater persistence following partial than following continuous reinforcement, and a preference for predictable appetitive and predictable aversive events. Application of the aversive version of the model to "reward" shifts is described. © 1987 Psychonomic Society, Inc.
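The learning rule cited above can be sketched in a few lines. This is a minimal illustration, not the DMOD program itself: it assumes the standard Rescorla–Wagner form ΔV = αβ(λ − V), where α and β are learning-rate parameters and λ is the asymptote set by the goal event; the parameter values and function names are illustrative.

```python
def update_v(v, alpha, beta, lam):
    """One trial's change in associative strength V: delta_V = alpha*beta*(lam - V)."""
    return v + alpha * beta * (lam - v)

def simulate(trials, alpha=0.5, beta=0.5, lam=1.0, v0=0.0):
    """Iterate the update over repeated trials; V approaches the asymptote lam."""
    v = v0
    history = [v]
    for _ in range(trials):
        v = update_v(v, alpha, beta, lam)
        history.append(v)
    return history

vs = simulate(20)
```

Because the increment is proportional to (λ − V), the curve is negatively accelerated: early trials produce large gains and later trials progressively smaller ones, which is the familiar shape of an acquisition curve.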
Daly, H. B., & Daly, J. T. (1987). A computer simulation/mathematical model of learning: Extension of DMOD from appetitive to aversive situations. Behavior Research Methods, Instruments, & Computers, 19(2), 108–112. https://doi.org/10.3758/BF03203767