A Generic Approach to Extend Interpretability of Deep Networks

Abstract

The recent advent of machine learning as a transformative technology has sparked fears about the human inability to comprehend the rationale of increasingly complex approaches. Interpretable Machine Learning (IML) emerged from such concerns, with the purpose of enabling different actors to grasp application scenarios, including trustworthiness and decision support in highly regulated sectors such as health and public services. YOLO (You Only Look Once) models, like other deep Convolutional Neural Network (CNN) approaches, have recently shown remarkable performance in several object detection tasks. However, the interpretability of these models remains an open issue. Therefore, in this work we extend the LIME (Local Interpretable Model-agnostic Explanations) framework to be used with YOLO models. The main contribution is a public add-on to LIME that can effectively improve YOLO interpretability. Results on complex images show the potential improvement.
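
The abstract describes adapting LIME, which expects a classifier-style prediction function, to YOLO, which emits detections. The sketch below illustrates one way such an adapter can look. It is a hypothetical reconstruction, not the authors' released add-on: it assumes the `ultralytics` and `lime` Python packages, the checkpoint name and image path are placeholders, and the aggregation in `yolo_to_class_scores` (maximum detection confidence per class) is an illustrative choice.

```python
import numpy as np
from lime import lime_image
from skimage import io
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # any pretrained YOLO checkpoint (assumption)
NUM_CLASSES = len(model.names)    # e.g. 80 for COCO-trained weights

def yolo_to_class_scores(images):
    """Adapt detector output to the (N, NUM_CLASSES) matrix LIME expects.

    For each perturbed image we keep, per class, the maximum detection
    confidence (0 if the class is not detected). This aggregation is an
    illustrative assumption, not necessarily the paper's exact method.
    """
    scores = np.zeros((len(images), NUM_CLASSES))
    for i, img in enumerate(images):
        # ultralytics treats raw numpy input as BGR; channel order is
        # ignored here for the sake of a short sketch.
        result = model(img, verbose=False)[0]
        for cls, conf in zip(result.boxes.cls, result.boxes.conf):
            c = int(cls)
            scores[i, c] = max(scores[i, c], float(conf))
    return scores

image = io.imread("example.jpg")  # any RGB test image (placeholder path)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, yolo_to_class_scores, top_labels=3, num_samples=500
)
# Superpixels most responsible for the top detected class:
mask_img, mask = explanation.get_image_and_mask(explanation.top_labels[0])
```

With this wrapper in place, LIME's standard superpixel-perturbation pipeline runs unchanged: it perturbs regions of the input, re-runs the detector on each perturbed copy, and fits a local linear model over the per-class scores to highlight the image regions that drive a given detection.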

Cite

APA

Silva, C., Morais, A., & Ribeiro, B. (2022). A Generic Approach to Extend Interpretability of Deep Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13566 LNAI, pp. 488–499). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-16474-3_40
