Towards dynamic computation graphs via sparse latent structure

16 citations · 129 Mendeley readers

Abstract

Deep NLP models benefit from underlying structures in the data (e.g., parse trees), typically extracted using off-the-shelf parsers. Recent attempts to jointly learn the latent structure encounter a tradeoff: either make factorization assumptions that limit expressiveness, or sacrifice end-to-end differentiability. Using the recently proposed SparseMAP inference, which retrieves a sparse distribution over latent structures, we propose a novel approach for end-to-end learning of latent structure predictors jointly with a downstream predictor. To the best of our knowledge, our method is the first to enable unrestricted dynamic computation graph construction from the global latent structure, while maintaining differentiability.
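The abstract's recipe can be illustrated with a toy sketch: obtain a sparse probability distribution over candidate structures, then run a downstream computation graph only for the structures with nonzero probability, weighting their outputs. The code below is an assumption-laden simplification: it uses the closed-form sparsemax projection over a flat score vector as a stand-in for SparseMAP (which operates over structured factors), and `run_graph`, `structures`, and `expected_prediction` are hypothetical names, not the authors' API.

```python
def sparsemax(scores):
    """Euclidean projection of a score vector onto the probability simplex.

    Unlike softmax, the result is typically sparse: low-scoring
    candidates receive exactly zero probability.
    """
    z = sorted(scores, reverse=True)
    cssv = 0.0  # cumulative sum of the sorted scores
    tau = 0.0   # threshold subtracted from every score
    for j, zj in enumerate(z, start=1):
        cssv += zj
        if 1 + j * zj > cssv:  # zj is still inside the support
            tau = (cssv - 1.0) / j
    return [max(s - tau, 0.0) for s in scores]


def expected_prediction(scores, structures, run_graph, x):
    """Marginalize a downstream predictor over a sparse set of structures.

    Because sparsemax zeroes out most structures, only the small
    support set triggers graph construction -- the "dynamic
    computation graph" aspect in miniature.
    """
    probs = sparsemax(scores)
    return sum(p * run_graph(s, x)
               for s, p in zip(structures, probs) if p > 0.0)
```

For example, with scores `[1.0, 0.9, -1.0]`, sparsemax assigns probability only to the first two structures, so the third downstream graph is never built or executed; gradients flow through the surviving weighted sum, which is what keeps the whole pipeline differentiable.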

Citation (APA)

Niculae, V., Martins, A. F. T., & Cardie, C. (2018). Towards dynamic computation graphs via sparse latent structure. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018 (pp. 905–911). Association for Computational Linguistics. https://doi.org/10.18653/v1/d18-1108
