Algorithmic Transparency via Quantitative Input Influence


Abstract

Algorithmic systems that employ machine learning are often opaque—it is difficult to explain why a certain decision was made. We present a formal foundation to improve the transparency of such decision-making systems. Specifically, we introduce a family of Quantitative Input Influence (QII) measures that capture the degree of input influence on system outputs. These measures provide a foundation for the design of transparency reports that accompany system decisions (e.g., explaining a specific credit decision) and for testing tools useful for internal and external oversight (e.g., to detect algorithmic discrimination). Distinctively, our causal QII measures carefully account for correlated inputs while measuring influence. They support a general class of transparency queries and can, in particular, explain decisions about individuals and groups. Finally, since single inputs may not always have high influence, the QII measures also quantify the joint influence of a set of inputs (e.g., age and income) on outcomes (e.g., loan decisions) and the average marginal influence of individual inputs within such a set (e.g., income) using principled aggregation measures, such as the Shapley value, previously applied to measure influence in voting.
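The Shapley-value aggregation mentioned above can be illustrated with a minimal sketch. The snippet below computes exact Shapley values by averaging each input's marginal contribution over all orderings of the input set. The `toy_influence` function is a hypothetical stand-in for a joint-influence measure, not the paper's QII definition; the feature names (`age`, `income`, `zip_code`) and values are invented for illustration.

```python
from itertools import permutations

def shapley_values(features, influence):
    """Exact Shapley values: average each feature's marginal
    contribution over all orderings of the feature set."""
    values = {f: 0.0 for f in features}
    orderings = list(permutations(features))
    for order in orderings:
        seen = set()
        for f in order:
            before = influence(frozenset(seen))
            seen.add(f)
            after = influence(frozenset(seen))
            values[f] += after - before
    return {f: v / len(orderings) for f, v in values.items()}

# Hypothetical joint-influence function: age and income together
# determine the outcome; zip_code contributes nothing.
def toy_influence(subset):
    if {"age", "income"} <= subset:
        return 1.0
    if "income" in subset:
        return 0.6
    if "age" in subset:
        return 0.3
    return 0.0

result = shapley_values(["age", "income", "zip_code"], toy_influence)
print(result)  # income gets the largest share; zip_code gets 0
```

By construction, the Shapley values sum to the influence of the full input set (here 1.0), which is why the paper cites it as a principled way to split a set's joint influence among its members.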

Cite

APA

Datta, A., Sen, S., & Zick, Y. (2017). Algorithmic Transparency via Quantitative Input Influence. In Studies in Big Data (Vol. 32, pp. 71–94). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-319-54024-5_4
