Algorithmic discrimination and responsibility: Selected examples from the United States of America and South America

Abstract

This paper discusses examples and activities that promote consumer protection through the adoption of non-discriminatory algorithms. The casual observer of data technologies, from smartphones to artificial intelligence, tends toward technological determinism: to such an observer, data reveal real trends, and the decision-makers behind them are neutral and unprejudiced. However, machine-learning technologies are created by people, so creator biases can appear in decisions based on algorithms used for surveillance, social profiling, and business intelligence. This paper adapts Lawrence Lessig’s framework (laws, markets, code, and social norms) and highlights cases in the USA and South America where algorithms discriminated and where statutes tried to mitigate the negative consequences. Global companies such as Facebook and Amazon are among those discussed in the case studies. In the case of Ecuador, neither algorithms nor the treatment of citizens’ personal data arising on social networks used by public and private institutions is regulated or protected. Consequently, individual rights are not strictly shielded by national and international law or by regulations of telecommunications and digital networks. In the USA, a proposed bill, the “Algorithmic Accountability Act”, would require large companies to audit their machine-learning-powered automated systems, such as facial recognition or ad-targeting algorithms, for bias. The Federal Trade Commission (FTC) would create rules for evaluating automated systems, while companies would evaluate the algorithms powering these tools for bias or discrimination, including threats to consumer privacy or security.

Citation (APA)

Kapatamoyo, M., Ramos-Gil, Y. T., & Márquez Domínguez, C. (2019). Algorithmic discrimination and responsibility: Selected examples from the United States of America and South America. In Communications in Computer and Information Science (Vol. 1051 CCIS, pp. 147–157). Springer. https://doi.org/10.1007/978-3-030-32475-9_11
