NeuroView-RNN: It's About Time

Citations: 0
Mendeley readers: 11

Abstract

Recurrent Neural Networks (RNNs) are important tools for processing sequential data such as time series or video. Interpretability is defined as the ability to be understood by a person and is distinct from explainability, the ability to be explained in a mathematical formulation. A key interpretability issue with RNNs is that it is unclear how much each hidden state at each time step contributes, quantitatively, to the decision-making process. We propose NeuroView-RNN, a family of new RNN architectures that explains how all the time steps are used in the decision-making process. Each member of the family is derived from a standard RNN architecture by concatenating the hidden states from every time step and feeding them into a global linear classifier. Because the classifier takes all the hidden states as input, its weights map linearly onto the hidden states. Hence, from the weights, NeuroView-RNN can quantify how important each time step is to a particular decision. In many cases, NeuroView-RNN also achieves higher accuracy than standard RNNs and their variants. We showcase the benefits of NeuroView-RNN by evaluating it on a multitude of diverse time-series datasets.
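The mechanism described above can be sketched in a few lines of NumPy. This is a hypothetical toy illustration, not the authors' implementation: a vanilla RNN produces a hidden state per time step, all states are concatenated, and a global linear classifier scores the concatenation. Because the classifier is linear, each class logit decomposes exactly into a sum of per-time-step contributions, which is the quantity NeuroView-RNN reads off as time-step importance. All dimensions and parameter initializations below are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: time steps, input dim, hidden dim, classes.
T, D_in, D_h, C = 5, 3, 4, 2

# Vanilla RNN parameters (randomly initialized for illustration).
W_ih = rng.normal(size=(D_h, D_in)) * 0.5
W_hh = rng.normal(size=(D_h, D_h)) * 0.5
b_h = np.zeros(D_h)

# Global linear classifier over ALL concatenated hidden states,
# so its weight matrix has shape (C, T * D_h).
W_cls = rng.normal(size=(C, T * D_h)) * 0.5

def forward(x):
    """x: (T, D_in). Returns class logits and per-time-step contributions."""
    h = np.zeros(D_h)
    states = []
    for t in range(T):
        h = np.tanh(W_ih @ x[t] + W_hh @ h + b_h)
        states.append(h)
    z = np.concatenate(states)  # (T * D_h,)
    logits = W_cls @ z          # (C,)
    # Linearity lets each logit split exactly into per-time-step terms:
    # contrib[c, t] = sum over the hidden dims of W_cls[c] * h_t.
    contrib = (W_cls * z).reshape(C, T, D_h).sum(axis=2)  # (C, T)
    return logits, contrib

x = rng.normal(size=(T, D_in))
logits, contrib = forward(x)
# Sanity check: per-step contributions sum back to the logits (no bias term).
assert np.allclose(contrib.sum(axis=1), logits)
```

The `contrib` matrix is the interpretability payoff: row `c`, column `t` says exactly how much time step `t` pushed the score for class `c`, with no approximation, because no nonlinearity sits between the hidden states and the logits.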

Citation (APA)

Barberan, C., Alemmohammad, S., Liu, N., Balestriero, R., & Baraniuk, R. (2022). NeuroView-RNN: It’s About Time. In ACM International Conference Proceeding Series (pp. 1683–1697). Association for Computing Machinery. https://doi.org/10.1145/3531146.3533224
