Bound by the Bounty: Collaboratively Shaping Evaluation Processes for Queer AI Harms

Abstract

Bias evaluation benchmarks and dataset and model documentation have emerged as central processes for assessing the biases and harms of artificial intelligence (AI) systems. However, these auditing processes have been criticized for their failure to integrate the knowledge of marginalized communities and consider the power dynamics between auditors and the communities. Consequently, modes of bias evaluation have been proposed that engage impacted communities in identifying and assessing the harms of AI systems (e.g., bias bounties). Even so, asking what marginalized communities want from such auditing processes has been neglected. In this paper, we ask queer communities for their positions on, and desires from, auditing processes. To this end, we organized a participatory workshop to critique and redesign bias bounties from queer perspectives. We found that when given space, the scope of feedback from workshop participants goes far beyond what bias bounties afford, with participants questioning the ownership, incentives, and efficacy of bounties. We conclude by advocating for community ownership of bounties and complementing bounties with participatory processes (e.g., co-creation).

Citation (APA)

Dennler, N., Ovalle, A., Singh, A., Soldaini, L., Subramonian, A., Tu, H., … Pinhal, J. D. J. D. P. (2023). Bound by the Bounty: Collaboratively Shaping Evaluation Processes for Queer AI Harms. In AIES 2023 - Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (pp. 375–386). Association for Computing Machinery, Inc. https://doi.org/10.1145/3600211.3604682
