Interpretable Textual Neuron Representations for NLP


Abstract

Input optimization methods, such as Google's DeepDream, create interpretable representations of neurons for computer vision DNNs. We propose and evaluate ways of transferring this technology to NLP. Our results suggest that gradient ascent with a Gumbel softmax layer produces n-gram representations that outperform naive corpus search in terms of target neuron activation. The representations highlight differences in syntax awareness between the language and visual models of the Imaginet architecture.
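
The idea is analogous to DeepDream: instead of optimizing pixels, one optimizes a relaxed one-hot distribution over the vocabulary at each n-gram position, with the Gumbel softmax keeping token selection differentiable, and runs gradient ascent on the target neuron's activation. Below is a minimal PyTorch sketch of this kind of input optimization; the toy model, embedding table, neuron index, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical setup: a frozen embedding table and a frozen stand-in network
# whose neurons we want to interpret. These stand in for whatever trained
# model is being probed.
vocab_size, n_gram_len, emb_dim = 10_000, 3, 128
embedding = torch.nn.Embedding(vocab_size, emb_dim)
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(n_gram_len * emb_dim, 256),
)
for p in list(embedding.parameters()) + list(model.parameters()):
    p.requires_grad_(False)

neuron_index = 42  # target neuron whose activation we maximize (arbitrary choice)

# Free parameters: unnormalized logits over the vocabulary for each position.
logits = torch.zeros(n_gram_len, vocab_size, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(500):
    opt.zero_grad()
    # Gumbel softmax draws an (almost) one-hot token distribution per position,
    # so the discrete token choice stays differentiable.
    one_hot = F.gumbel_softmax(logits, tau=0.5, hard=False)   # (n, V)
    embedded = one_hot @ embedding.weight                     # soft embedding lookup, (n, d)
    activation = model(embedded.unsqueeze(0))[0, neuron_index]
    (-activation).backward()                                  # gradient ascent on the neuron
    opt.step()

# Discretize: the n-gram that most excites the target neuron.
tokens = logits.argmax(dim=-1)
```

The naive corpus search baseline mentioned in the abstract would instead score every n-gram occurring in a corpus and keep the highest-activating one; the optimization above is not restricted to attested n-grams, which is consistent with it reaching higher target activations.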

Cite

Citation style: APA

Poerner, N., Roth, B., & Schütze, H. (2018). Interpretable Textual Neuron Representations for NLP. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP (pp. 325–327). Association for Computational Linguistics. https://doi.org/10.18653/v1/W18-5437
