Insights into the Black Box Machine Learning Models Through Explainability and Interpretability


Abstract

Artificial intelligence (AI) and machine learning (ML) technologies are considered the Holy Grail by researchers across the world. Applications of AI and ML are proving disruptive across the global technological spectrum; practically no area has been left untouched, from computer science to manufacturing, healthcare, insurance, credit ratings, cybersecurity, and many more. It would not be an exaggeration to say that ML is the next big thing after the advent of the Internet and potentially holds a similar impact on the lives of human beings. While most researchers applying machine learning across diverse domains do not need to look beyond the model abstraction for their work, understanding what happens beneath the surface is sometimes necessary. This becomes especially important when predictions seem too good to be true and the researcher running the model cannot verify their validity because the logic behind the predictions is obscure. Feature engineering brings more accuracy to predictions, but in the absence of intuitive background information about the features, the task becomes more challenging. Scientific reasoning has been driven by logic through the ages, and the scientific community remains sceptical of results unless useful insights can be extracted from black box ML models. This paper applies five popular explainability algorithms used by the research community to demystify the abstract nature of black box ML models, and compares the relative clarity of the insights each provides from a practitioner's perspective, using the publicly available UCI wine quality dataset.
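The abstract does not name the five explainability algorithms, so as a minimal illustrative sketch of the general workflow (train a black box model on a wine dataset, then probe which features drive its predictions), the example below uses scikit-learn's permutation importance as one representative model-agnostic technique. Note two assumptions: permutation importance stands in for whichever algorithms the paper actually compares, and scikit-learn's bundled wine recognition dataset stands in for the UCI wine quality dataset so the sketch runs offline.

```python
# Hedged sketch: peeking inside a black box model with permutation importance.
# ASSUMPTIONS: the paper's five algorithms are not named in the abstract, so
# permutation importance is used as one representative explainability method;
# sklearn's bundled wine dataset substitutes for the UCI wine quality dataset.
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A random forest serves as the "black box" whose logic is opaque.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time on held-out data
# and measure the drop in accuracy; larger drops mean the model leans on
# that feature more heavily.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

ranked = sorted(
    zip(X.columns, result.importances_mean), key=lambda t: -t[1]
)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

SHAP or LIME could be substituted at the `permutation_importance` step; the surrounding train/explain/rank structure stays the same.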

Citation (APA)

Gupta, S., & Gupta, B. (2023). Insights into the Black Box Machine Learning Models Through Explainability and Interpretability. In Lecture Notes in Networks and Systems (Vol. 396, pp. 633–644). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-16-9967-2_59
