Catharsis Theory

  • Scheff T

Abstract

Synonyms: Absolute accuracy; Confidence in retrieval; Prospective judgment; Retrospective judgment; Test postdiction; Test prediction

Definition: Calibration is the degree to which a person's perception of performance corresponds with his or her actual performance (Keren 1991). The degree of correspondence is determined by comparing a person's judgment of his or her performance against an objectively determined measure of that performance (Hacker et al. 2008). Because that judgment involves self-evaluation, calibration is a metacognitive monitoring process.

To illustrate, consider the following example. Before taking an exam, a student might estimate how well he or she will perform on the exam, and then estimate after taking the exam how well he or she did perform. If this student predicted that she would score an 85 but actually scored a 90, she is fairly accurate but a bit underconfident. Alternatively, if a student predicts that he will score a 95 and actually scores a 60, he is grossly inaccurate and overconfident. In the former case, the student's perception of performance corresponds well with actual performance, and she is therefore well calibrated. In the latter case, the student's perception of performance corresponds poorly with actual performance, and he is therefore poorly calibrated.

Although there are various methods of measuring calibration, all provide a quantitative assessment of the degree of discrepancy between perceived performance and actual performance (Hacker et al. 2008). The methods can be grouped into two categories: difference scores and calibration curves. Difference scores are calculated as the difference between a person's judged performance and his or her actual performance. Judgments can be made on a percentage-of-likelihood scale or a confidence scale; at a global level (a single judgment over multiple items) or at the item level and averaged over multiple items; and before performance (predictions, or prospective judgments) or after performance (postdictions, or retrospective judgments). Often, the absolute value of the difference between judgment and performance is taken, in which case values closer to zero indicate greater calibration accuracy, with perfect calibration at zero. If the signed difference is calculated, a bias score is produced: negative values are interpreted as underconfidence and positive values as overconfidence. In our example, the first student predicted an 85 and scored a 90, giving a difference score of −5 and indicating slight underconfidence; the second student predicted a 95 and scored a 60, giving a difference of +35 and indicating large overconfidence.
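The difference-score and bias-score measures described above reduce to simple arithmetic. The sketch below is a minimal illustration under that reading of the abstract; the function names and the judged/actual parameters are my own labels, not terms from the entry.

```python
def bias_score(judged: float, actual: float) -> float:
    """Signed difference: negative values suggest underconfidence,
    positive values suggest overconfidence."""
    return judged - actual

def absolute_accuracy(judged: float, actual: float) -> float:
    """Unsigned difference: values closer to zero indicate better calibration."""
    return abs(judged - actual)

# Student 1: predicted 85, scored 90 -> bias of -5 (slightly underconfident)
print(bias_score(85, 90))          # -5
print(absolute_accuracy(85, 90))   # 5

# Student 2: predicted 95, scored 60 -> bias of +35 (grossly overconfident)
print(bias_score(95, 60))          # 35
print(absolute_accuracy(95, 60))   # 35
```

For item-level judgments, the same scores would simply be computed per item and averaged, consistent with the averaging described in the abstract.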

Cite (APA)

Scheff, T. J. (2012). Catharsis Theory. In Encyclopedia of the Sciences of Learning (pp. 518–520). Springer US. https://doi.org/10.1007/978-1-4419-1428-6_573
