Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence

307 Citations (of this article)
1.1k Readers (Mendeley users who have this article in their library)

This article is free to access.

Abstract

Recent years have seen tremendous growth in Artificial Intelligence (AI)-based methodological development across a broad range of domains. In this rapidly evolving field, a large number of methods are being reported that use machine learning (ML) and deep learning (DL) models. The majority of these models are inherently complex and lack explanations of their decision-making process, causing them to be termed 'black-box' models. One of the major bottlenecks to adopting such models in mission-critical application domains, such as banking, e-commerce, healthcare, and public services and safety, is the difficulty of interpreting them. Due to the rapid proliferation of these AI models, explaining their learning and decision-making processes is becoming harder, and doing so requires transparency and easy predictability. Moreover, finding flaws in these black-box models, in order to reduce their false negative and false positive outcomes, remains difficult and inefficient. Aiming to collate the current state of the art in interpreting black-box models, this study provides a comprehensive analysis of explainable AI (XAI) models. The development of XAI is reviewed meticulously through careful selection and analysis of current XAI research. The paper also provides a comprehensive and in-depth evaluation of XAI frameworks and their efficacy, so as to serve as a starting point for applied and theoretical researchers. Towards the end, it highlights emerging and critical issues pertaining to XAI research, showcasing major, model-specific trends for better explanation, enhanced transparency, and improved prediction accuracy.
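For readers new to the field, the following sketch (not taken from the paper itself) illustrates the kind of post-hoc, model-agnostic explanation technique such reviews survey: permutation feature importance applied to a "black-box" random forest, using scikit-learn. Widely used XAI tools such as LIME and SHAP follow the same spirit with more sophisticated attributions; the dataset and parameter choices below are purely illustrative assumptions.

# A minimal, hypothetical sketch of post-hoc explanation: rank the features a
# "black-box" model relies on via permutation importance (scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model on a standard tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in accuracy;
# a large drop means the model depends heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.4f}")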


Citation (APA)

Hassija, V., Chamola, V., Mahapatra, A., Singal, A., Goel, D., Huang, K., … Hussain, A. (2024, January 1). Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence. Cognitive Computation. Springer. https://doi.org/10.1007/s12559-023-10179-8

Readers' Seniority

PhD / Post grad / Masters / Doc: 167 (61%)
Researcher: 40 (15%)
Lecturer / Post doc: 35 (13%)
Professor / Associate Prof.: 30 (11%)

Readers' Discipline

Computer Science: 81 (46%)
Engineering: 56 (32%)
Business, Management and Accounting: 25 (14%)
Social Sciences: 14 (8%)

Article Metrics

News Mentions: 6
