Testing for Equivalence: A Methodology for Computational Cognitive Modelling

  • Stewart, T.
  • West, R.
Abstract

The equivalence test (Stewart and West, 2007; Stewart, 2007) is a statistical measure for evaluating the similarity between a model and the system being modelled. It is designed to avoid over-fitting and to generate an easily interpretable summary of the quality of a model. We apply the equivalence test to two tasks: Repeated Binary Choice (Erev et al., 2010) and Dynamic Stocks and Flows (Gonzalez and Dutt, 2007). In the first case, we find a broad range of statistically equivalent models (and win a prediction competition) while identifying particular aspects of the task that are not yet adequately captured. In the second case, we re-evaluate results from the Dynamic Stocks and Flows challenge, demonstrating how our method emphasizes the breadth of coverage of a model and how it can be used for comparing different models. We argue that the explanatory power of models hinges on numerical similarity to empirical data over a broad set of measures.
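
The abstract does not spell out the mechanics of the equivalence test itself. One standard way to operationalize "statistical equivalence" between model output and human data is the two one-sided tests (TOST) procedure, sketched below in Python. This is an illustration under that assumption, not the authors' actual method: the function name `tost_equivalence`, the equivalence margin `delta`, the pooled degrees-of-freedom shortcut, and the example data are all hypothetical, and the paper's procedure (which emphasizes coverage across many measures) may differ in detail.

```python
import numpy as np
from scipy import stats

def tost_equivalence(model, human, delta, alpha=0.05):
    """Two one-sided tests (TOST) for mean equivalence (illustrative sketch).

    Declares the model output equivalent to the human data if the
    difference in means lies significantly inside (-delta, +delta).
    `delta` is the equivalence margin: the largest difference that
    counts as negligible on the measure in question.
    """
    m = np.asarray(model, dtype=float)
    h = np.asarray(human, dtype=float)
    diff = m.mean() - h.mean()
    se = np.sqrt(m.var(ddof=1) / m.size + h.var(ddof=1) / h.size)
    df = m.size + h.size - 2  # simple approximation; Welch df is stricter

    # Test 1: reject H0 that diff <= -delta (difference too far below)
    p_lower = stats.t.sf((diff + delta) / se, df)
    # Test 2: reject H0 that diff >= +delta (difference too far above)
    p_upper = stats.t.cdf((diff - delta) / se, df)

    p = max(p_lower, p_upper)  # both one-sided tests must reject
    return p, p < alpha

# Hypothetical example: compare simulated model runs against participant data
model_runs = np.random.default_rng(0).normal(0.62, 0.05, size=40)
human_runs = np.random.default_rng(1).normal(0.60, 0.06, size=40)
p_value, equivalent = tost_equivalence(model_runs, human_runs, delta=0.05)
print(f"p = {p_value:.3f}, equivalent: {equivalent}")
```

Note that, unlike a conventional null-hypothesis test, a small p-value here is evidence *for* similarity within the margin, which is why this style of test rewards a model for matching the data across a broad set of measures rather than merely failing to differ on one.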

Citation (APA)

Stewart, T., & West, R. (2011). Testing for Equivalence: A Methodology for Computational Cognitive Modelling. Journal of Artificial General Intelligence, 2(2), 69–87. https://doi.org/10.2478/v10229-011-0010-8
