Re-evaluating evaluation


Abstract

“What we observe is not nature itself, but nature exposed to our method of questioning.” - Werner Heisenberg

Progress in machine learning is measured by careful evaluation on problems of outstanding common interest. However, the proliferation of benchmark suites and environments, adversarial attacks, and other complications has diluted the basic evaluation model by overwhelming researchers with choices. Deliberate or accidental cherry picking is increasingly likely, and designing well-balanced evaluation suites requires increasing effort. In this paper we take a step back and propose Nash averaging. The approach builds on a detailed analysis of the algebraic structure of evaluation in two basic scenarios: agent-vs-agent and agent-vs-task. The key strength of Nash averaging is that it automatically adapts to redundancies in evaluation data, so that results are not biased by the incorporation of easy tasks or weak agents. Nash averaging thus encourages maximally inclusive evaluation - since there is no harm (computational cost aside) from including all available tasks and agents.
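For the agent-vs-agent scenario mentioned in the abstract, the idea can be sketched as follows: given an antisymmetric payoff matrix A (with A[i, j] > 0 when agent i tends to beat agent j), solve the associated two-player zero-sum game for a Nash mixture over agents, then score each agent by its payoff against that mixture. The sketch below is illustrative only: the payoff matrix, the generic linear-programming solver, and the toy example are assumptions on my part, and the paper itself selects the maximum-entropy Nash equilibrium to make the mixture unique, which a plain LP does not guarantee.

```python
# Minimal sketch of Nash averaging (agent-vs-agent case), under the
# assumptions stated above. Not the authors' implementation.
import numpy as np
from scipy.optimize import linprog


def nash_averaging(A):
    """Return (nash_distribution, nash_averaged_skills) for an antisymmetric A."""
    n = A.shape[0]
    # Variables z = [p_1, ..., p_n, v]; maximize the game value v,
    # so minimize -v (linprog minimizes by default).
    c = np.zeros(n + 1)
    c[-1] = -1.0
    # Maximin constraints: v - (A^T p)_j <= 0 for every opponent column j.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # p must be a probability distribution over agents.
    A_eq = np.append(np.ones(n), 0.0).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]  # p >= 0, v unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    assert res.success
    p = res.x[:n]
    # Nash-averaged skill of each agent: its payoff against the Nash mixture.
    skills = A @ p
    return p, skills


if __name__ == "__main__":
    # Toy example: agent 0 beats agents 1 and 2; agents 1 and 2 are
    # redundant copies of each other. Adding the duplicate weak agent
    # does not change anyone's Nash-averaged skill.
    A = np.array([[0.0, 1.0, 1.0],
                  [-1.0, 0.0, 0.0],
                  [-1.0, 0.0, 0.0]])
    p, skills = nash_averaging(A)
    print("Nash distribution:", np.round(p, 3))
    print("Nash-averaged skills:", np.round(skills, 3))
```

In this toy case the Nash mixture puts all mass on the dominant agent, whose Nash-averaged skill is 0, while both redundant weak agents score -1; this is the redundancy-invariance property the abstract highlights.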

Citation (APA)

Balduzzi, D., Tuyls, K., Perolat, J., & Graepel, T. (2018). Re-evaluating evaluation. In Advances in Neural Information Processing Systems (Vol. 2018-December, pp. 3268–3279). Neural Information Processing Systems Foundation.
