Introduction to Responsible AI

Abstract

In the first part of this tutorial we define responsible AI and discuss the problems embedded in terms like ethical or trustworthy AI. In the second part, to set the stage, we cover irresponsible AI: discrimination (e.g., the impact of human biases); pseudo-science (e.g., biometric-based behavioral predictions); human limitations (e.g., human incompetence, cognitive biases); technical limitations (e.g., data as a proxy of reality, wrong evaluation); social impact (e.g., unfair digital markets or the mental health and disinformation issues created by large language models); and environmental impact (e.g., indiscriminate use of computing resources). These examples do reflect a personal bias, but they set the context for the third part, where we cover the current challenges: ethical principles, governance, and regulation. We finish by discussing our responsible AI initiatives, many recommendations, and some philosophical issues.

Cite

Baeza-Yates, R. (2024). Introduction to Responsible AI. In WSDM 2024 - Proceedings of the 17th ACM International Conference on Web Search and Data Mining (pp. 1114–1117). Association for Computing Machinery, Inc. https://doi.org/10.1145/3616855.3636455
