Explainable AI (XAI) and its Applications in Building Trust and Understanding in AI Decision Making

  • Tiwari, R.

Abstract

In recent years, there has been a growing need for Explainable AI (XAI) to build trust and understanding in AI decision making. XAI is a field of AI research that focuses on developing algorithms and models whose behavior can be readily understood and interpreted by humans. The goal of XAI is to make the inner workings of AI systems transparent and explainable, helping people understand the reasoning behind AI decisions and make better-informed choices. In this paper, we explore applications of XAI in domains such as healthcare, finance, autonomous vehicles, and legal and government decision making. We also discuss techniques used in XAI, such as feature importance analysis, model interpretability, and natural language explanations. Finally, we examine the challenges and future directions of XAI research. This paper aims to provide an overview of the current state of XAI research and its potential impact on building trust and understanding in AI decision making.
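As an illustration of the feature importance analysis mentioned above, the sketch below implements permutation importance, a common model-agnostic technique: shuffle one feature's values to break its relationship with the label, then measure how much the model's accuracy drops. The toy model, feature names, and data are hypothetical stand-ins, not taken from the paper.

```python
import random

def model_predict(row):
    # Toy "approval" model: relies heavily on feature 0 (income),
    # weakly on feature 1 (age), and ignores feature 2 (noise).
    score = 2.0 * row[0] + 0.3 * row[1]
    return 1 if score > 1.0 else 0

def accuracy(X, y):
    return sum(model_predict(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, seed=0):
    # Shuffle one feature column and report the resulting accuracy drop.
    rng = random.Random(seed)
    shuffled = [row[feature] for row in X]
    rng.shuffle(shuffled)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, shuffled)]
    return accuracy(X, y) - accuracy(X_perm, y)

# Synthetic data whose labels follow the model's own decisions,
# so the baseline accuracy is 1.0 and any drop is due to shuffling.
rng = random.Random(42)
X = [[rng.random(), rng.random(), rng.random()] for _ in range(500)]
y = [model_predict(row) for row in X]

for f, name in enumerate(["income", "age", "noise"]):
    print(f"{name}: importance = {permutation_importance(X, y, f):.3f}")
```

Shuffling the dominant feature causes a large accuracy drop, the weak feature a small one, and the ignored feature none at all, which is exactly the ranking an explanation built on this technique would present to a user.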

Citation (APA)

Tiwari, R. (2023). Explainable AI (XAI) and its Applications in Building Trust and Understanding in AI Decision Making. International Journal of Scientific Research in Engineering and Management, 07(01). https://doi.org/10.55041/ijsrem17592
