Tell me, what do you see?—interpretable classification of wiring harness branches with deep neural networks

21 citations · 15 Mendeley readers

Abstract

In the context of robotising industrial operations that involve manipulating deformable linear objects, there is a need for sophisticated machine vision systems that can classify wiring harness branches and indicate where to place them in the assembly process. However, industrial applications require the interpretability of machine learning system predictions, as the user wants to know the underlying reason for a decision made by the system. To address this issue, we propose several different neural network architectures and test them on our novel dataset. We conducted various experiments to assess the influence of modality, data fusion type, data augmentation, and pretraining. The network output is evaluated in terms of performance and is also equipped with saliency maps, which give the user in-depth insight into the classifier's operation, offering a way to explain the responses of the deep neural network and to make the system's predictions interpretable by humans.

Citation (APA)
Kicki, P., Bednarek, M., Lembicz, P., Mierzwiak, G., Szymko, A., Kraft, M., & Walas, K. (2021). Tell me, what do you see?—interpretable classification of wiring harness branches with deep neural networks. Sensors, 21(13). https://doi.org/10.3390/s21134327
