Representation in AI Evaluations


Abstract

Calls for representation in artificial intelligence (AI) and machine learning (ML) are widespread, with "representation" or "representativeness" generally understood to be both an instrumentally and intrinsically beneficial quality of an AI system, and central to fairness concerns. But what does it mean for an AI system to be "representative"? Each element of the AI lifecycle is geared towards its own goals and effects on the system, and therefore requires its own analysis of what kind of representation is best. In this work we untangle the benefits of representation in AI evaluations to develop a framework that guides an AI practitioner or auditor towards the creation of representative ML evaluations. Representation, however, is not a panacea. We further lay out the limitations and tensions of instrumentally representative datasets, such as the necessity of data existence and access, surveillance vs. expectations of privacy, implications for foundation models, and power. This work sets the stage for a research agenda on representation in AI, one that extends beyond instrumentally valuable representation in evaluations towards refocusing on, and empowering, impacted communities.

Citation (APA)

Bergman, A. S., Hendricks, L. A., Rauh, M., Wu, B., Agnew, W., Kunesch, M., … Isaac, W. (2023). Representation in AI Evaluations. In ACM International Conference Proceeding Series (pp. 519–533). Association for Computing Machinery. https://doi.org/10.1145/3593013.3594019
