Exploring neural network hidden layer activity using vector fields†

Abstract

Deep Neural Networks are known for impressive results in a wide range of applications and have been responsible for many advances in technology over the past few years. However, debugging and understanding neural network models' inner workings is a complex task, as there are several parameters and variables involved in every decision. Multidimensional projection techniques have been successfully adopted to display neural network hidden layer outputs in an explainable manner, but comparing different outputs often means overlapping projections or viewing them side by side, which makes it difficult for users to properly follow the flow of data. In this paper, we introduce a novel approach for comparing projections obtained from multiple stages in a neural network model and visualizing differences in data perception. Changes among projections are transformed into trajectories that, in turn, generate vector fields used to represent the general flow of information. This representation can then be used to create layouts that highlight new information about abstract structures identified by neural networks.
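To make the pipeline described in the abstract concrete, the sketch below illustrates one way the core idea could be prototyped; it is not the authors' implementation. It assumes, for simplicity, that the activations of the two compared layers have the same dimensionality, uses a single shared PCA basis as a stand-in for the aligned projections discussed in the paper, and averages per-sample displacements over a fixed grid. The function names, layer width, and grid resolution are illustrative assumptions.

```python
# Minimal sketch, not the paper's method: project activations from two layers
# into a shared 2D space, treat per-sample displacements between the two
# projections as trajectory segments, and average them over a grid to obtain
# a vector field summarizing the flow of information.
import numpy as np
from sklearn.decomposition import PCA

def project_layers(acts_a, acts_b, n_components=2):
    """Project activations of two layers into a shared low-dimensional space.

    Assumes both activation matrices have the same number of columns; a PCA
    basis fitted on their union stands in for the aligned projections.
    """
    pca = PCA(n_components=n_components)
    pca.fit(np.vstack([acts_a, acts_b]))
    return pca.transform(acts_a), pca.transform(acts_b)

def displacement_field(proj_a, proj_b, grid_size=20):
    """Average per-sample displacements (trajectory segments) over a grid."""
    disp = proj_b - proj_a                      # one trajectory segment per sample
    xmin, ymin = proj_a.min(axis=0)
    xmax, ymax = proj_a.max(axis=0)
    xs = np.linspace(xmin, xmax, grid_size + 1)
    ys = np.linspace(ymin, ymax, grid_size + 1)
    field = np.zeros((grid_size, grid_size, 2))
    counts = np.zeros((grid_size, grid_size))
    ix = np.clip(np.digitize(proj_a[:, 0], xs) - 1, 0, grid_size - 1)
    iy = np.clip(np.digitize(proj_a[:, 1], ys) - 1, 0, grid_size - 1)
    for i, j, d in zip(ix, iy, disp):
        field[i, j] += d                        # accumulate displacements per cell
        counts[i, j] += 1
    nonzero = counts > 0
    field[nonzero] /= counts[nonzero][:, None]  # average flow direction per cell
    return field

# Example with random stand-in activations (a layer width of 64 is assumed):
acts_a = np.random.rand(500, 64)    # earlier hidden layer outputs
acts_b = np.random.rand(500, 64)    # later hidden layer outputs
pa, pb = project_layers(acts_a, acts_b)
field = displacement_field(pa, pb)  # (20, 20, 2) array of average displacements
```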

Cite

APA

Cantareira, G. D., Etemad, E., & Paulovich, F. V. (2020). Exploring neural network hidden layer activity using vector fields†. Information (Switzerland), 11(9), 1–15. https://doi.org/10.3390/info11090426
