Why do we need to be bots? What prevents society from detecting biases in recommendation systems


Abstract

Concerns about social networks manipulating public opinion have become a recurring theme in recent years. So far, however, whether such an impact actually exists could only be tested to a very limited extent. Yet to guarantee the accountability of recommendation and information filtering systems, society needs to be able to determine whether they comply with ethical and legal requirements. This paper focuses on black box analyses as methods that are designed to systematically assess the performance of such systems while remaining minimally intrusive. We describe the conditions that must be met to allow black box analyses of recommendation systems, based on a case study of Facebook’s News Feed. While black box analyses have proven useful in the past, several barriers can easily get in the way, such as limited options for automated account control as well as bot detection and bot inhibition by the platform. Drawing on the insights from our case study and the state of the art in research on algorithmic accountability, we formulate several policy demands that must be met in order to allow monitoring of algorithmic decision-making (ADM) systems for their compliance with social values.
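To make the measurement idea behind such a black box analysis concrete, the following is a minimal, self-contained sketch: two synthetic "personas" repeatedly collect items from a recommendation feed, and the audit then quantifies how differently the system treats them. Everything here is illustrative and not from the paper: fetch_feed() is a hypothetical stand-in for an audit's (authorised) data-collection mechanism, the feed itself is simulated, and Jensen-Shannon divergence is just one plausible choice of comparison metric.

```python
import math
import random
from collections import Counter

# Hypothetical item catalogue standing in for the content a platform recommends.
CATALOGUE = [f"item_{i}" for i in range(50)]

def fetch_feed(persona_bias: float, size: int = 10) -> list[str]:
    """Simulated feed: personas with different bias values are steered
    toward different halves of the catalogue. In a real black box study,
    this call would be an automated account reading its own feed."""
    weights = [
        (1.0 + persona_bias) if i < len(CATALOGUE) // 2 else (1.0 - persona_bias)
        for i in range(len(CATALOGUE))
    ]
    return random.choices(CATALOGUE, weights=weights, k=size)

def distribution(items: list[str]) -> dict[str, float]:
    """Empirical distribution over the items a persona was shown."""
    counts = Counter(items)
    total = sum(counts.values())
    return {item: n / total for item, n in counts.items()}

def jensen_shannon(p: dict[str, float], q: dict[str, float]) -> float:
    """Symmetric divergence between two item distributions (0 = identical)."""
    support = set(p) | set(q)
    m = {x: 0.5 * (p.get(x, 0.0) + q.get(x, 0.0)) for x in support}
    def kl(a: dict[str, float]) -> float:
        return sum(a[x] * math.log2(a[x] / m[x]) for x in support if a.get(x, 0.0) > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

# Collect many feed snapshots per persona, then compare what each was shown --
# the core measurement step of a black box analysis.
feed_a = [item for _ in range(200) for item in fetch_feed(persona_bias=0.6)]
feed_b = [item for _ in range(200) for item in fetch_feed(persona_bias=-0.6)]
print(f"JS divergence: {jensen_shannon(distribution(feed_a), distribution(feed_b)):.3f}")
```

The barriers the paper discusses hit exactly the fetch_feed() step: if automated accounts are detected and inhibited, the audit cannot collect enough snapshots for the comparison to be meaningful.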

Citation (APA)

Krafft, T. D., Hauer, M. P., & Zweig, K. A. (2020). Why do we need to be bots? What prevents society from detecting biases in recommendation systems. In Communications in Computer and Information Science (Vol. 1245 CCIS, pp. 27–34). Springer. https://doi.org/10.1007/978-3-030-52485-2_3
