Abstract
We develop a game-theoretic model of a classroom scenario in which n students collaborate on a common task. We assume that there exists an objective truth known to the students but not to the course instructor. Each student estimates the contributions of all team members and reports her estimates to the instructor. Thus, a matrix A of peer evaluations arises, and the instructor's task is to grade students individually based on these peer evaluations. The method of deriving individual grades from the matrix A is supposed to be psychometrically valid and reliable. We argue that, mathematically, this means that (1) collective truth-telling is a strict Nash equilibrium and (2) the individual grade of student i does not depend on the true contribution of student j for j ≠ i. Existing methods of peer evaluation commonly used in educational practice fail to satisfy at least one of these properties. We construct a new method of peer evaluation satisfying both desired properties for n ≥ 5. We share a large dataset (1201 students, 220 teams, 6619 evaluations) of peer evaluations collected in undergraduate courses taught by the author, outline some practical challenges, and show how these challenges can be addressed.
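The abstract leaves the grading rule unspecified. For concreteness, the sketch below is a hypothetical illustration of the setup only, not the method proposed in the paper: it computes each student's grade as the column mean of the row-normalised evaluation matrix A, where A[j, i] is student j's reported share of credit for student i. Simple aggregation rules of this kind are representative of the existing methods that, the paper argues, fail at least one of the two desired properties.

```python
import numpy as np

# Hypothetical baseline, NOT the method proposed in the paper.
# A[j, i] is student j's reported estimate of student i's contribution.

def naive_peer_grades(A: np.ndarray) -> np.ndarray:
    """Return one grade per student from the peer-evaluation matrix A."""
    shares = A / A.sum(axis=1, keepdims=True)  # normalise each row to sum to 1
    return shares.mean(axis=0)                 # average the columns

# Example with n = 3 students whose reports happen to agree.
A = np.array([[2.0, 1.0, 1.0],
              [2.0, 1.0, 1.0],
              [2.0, 1.0, 1.0]])
print(naive_peer_grades(A))  # [0.5  0.25 0.25]
```

Under such a rule, a single student can shift everyone's grades by misreporting, which is exactly the kind of dependence the two properties in the abstract are meant to rule out.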
Citation
Duzhin, F. (2023). Learning in Teams: Peer Evaluation for Fair Assessment of Individual Contributions. In Frontiers in Artificial Intelligence and Applications (Vol. 372, pp. 606–612). IOS Press BV. https://doi.org/10.3233/FAIA230322