SPINE: SParse Interpretable Neural Embeddings

Abstract

Prediction without justification has limited utility. Much of the success of neural models can be attributed to their ability to learn rich, dense, and expressive representations. While these representations capture the underlying complexity and latent trends in the data, they are far from being interpretable. We propose a novel variant of denoising k-sparse autoencoders that generates highly efficient and interpretable distributed word representations (word embeddings), beginning with existing word representations from state-of-the-art methods like GloVe and word2vec. Through a large-scale human evaluation, we report that our resulting word embeddings are much more interpretable than the original GloVe and word2vec embeddings. Moreover, our embeddings outperform existing popular word embeddings on a diverse suite of benchmark downstream tasks.
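The abstract only names the technique, so the following is a minimal illustrative sketch (not the authors' code) of the general idea: a denoising autoencoder that maps dense pretrained word vectors to higher-dimensional, non-negative, sparse codes. It assumes PyTorch, and substitutes Gaussian input noise plus a plain L1-style sparsity penalty for the paper's exact loss formulation; all layer sizes and hyperparameters are illustrative assumptions.

    # Sketch only: dense pretrained vectors -> sparse, non-negative codes.
    # Layer sizes, noise level, and the L1 penalty are assumptions, not the
    # paper's exact losses or settings.
    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        def __init__(self, input_dim=300, hidden_dim=1000):
            super().__init__()
            # Sigmoid keeps activations in [0, 1], so each hidden dimension
            # can be read as an (approximately) on/off interpretable feature.
            self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim),
                                         nn.Sigmoid())
            self.decoder = nn.Linear(hidden_dim, input_dim)

        def forward(self, x, noise_std=0.2):
            # Denoising: corrupt the input, then reconstruct the clean vector.
            noisy = x + noise_std * torch.randn_like(x)
            code = self.encoder(noisy)
            return self.decoder(code), code

    def loss_fn(recon, target, code, sparsity_weight=0.1):
        # Reconstruction error plus a sparsity penalty on the codes
        # (an L1 stand-in for the paper's sparsity objectives).
        return (nn.functional.mse_loss(recon, target)
                + sparsity_weight * code.abs().mean())

    # Usage: batch is an (N, 300) tensor of GloVe or word2vec vectors.
    # model = SparseAutoencoder()
    # recon, code = model(batch)
    # loss = loss_fn(recon, batch, code)

After training, the learned codes replace the original dense vectors as the word embeddings used downstream.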

Cite

CITATION STYLE

APA

Subramanian, A., Pruthi, D., Jhamtani, H., Berg-Kirkpatrick, T., & Hovy, E. (2018). SPINE: SParse Interpretable Neural Embeddings. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 4921–4928). AAAI Press. https://doi.org/10.1609/aaai.v32i1.11935
