Learning to discover, ground and use words with segmental neural language models

Abstract

We propose a segmental neural language model that combines the generalization power of neural networks with the ability to discover word-like units that are latent in unsegmented character sequences. In contrast to previous segmentation models that treat word segmentation as an isolated task, our model unifies word discovery, learning how words fit together to form sentences, and, by conditioning the model on visual context, how words' meanings ground in representations of non-linguistic modalities. Experiments show that the unconditional model learns predictive distributions better than character LSTM models, discovers words competitively with nonparametric Bayesian word segmentation models, and that modeling language conditional on visual context improves performance on both tasks.
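
The segmental formulation sketched in the abstract implies a marginalization over every way of splitting the character sequence into word-like segments, which is typically computed with a forward dynamic program. The Python sketch below illustrates that recursion only; it is not the authors' implementation, and `segment_log_prob`, `max_len`, and the toy scoring function are hypothetical stand-ins for the paper's neural segment model and its hyperparameters.

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def sentence_log_prob(chars, segment_log_prob, max_len=10):
    """
    Marginal log-probability of an unsegmented character string under a
    segmental language model, summing over all segmentations whose
    segments are at most `max_len` characters long.

    `segment_log_prob(prefix, segment)` is assumed to return
    log p(segment | prefix); in a neural model this score would come
    from a character-level decoder conditioned on the history.
    """
    T = len(chars)
    # alpha[t] = log-probability of generating the first t characters
    # under any segmentation of that prefix.
    alpha = [float("-inf")] * (T + 1)
    alpha[0] = 0.0
    for t in range(1, T + 1):
        terms = []
        for j in range(max(0, t - max_len), t):
            # Last segment spans characters j..t-1.
            terms.append(alpha[j] + segment_log_prob(chars[:j], chars[j:t]))
        alpha[t] = logsumexp(terms)
    return alpha[T]

# Toy stand-in for the neural segment model: a fixed per-character
# cost plus a segment-boundary cost (purely illustrative).
def toy_segment_log_prob(prefix, segment):
    return -1.5 * len(segment) - 0.5

print(sentence_log_prob("thedog", toy_segment_log_prob))
```

Because `alpha[T]` sums over all segmentations rather than committing to one, word boundaries can be treated as latent during training, and a segmentation can later be read off by replacing the sum with a max (Viterbi decoding).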

Citation (APA)

Kawakami, K., Dyer, C., & Blunsom, P. (2019). Learning to discover, ground and use words with segmental neural language models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 6429–6441). Association for Computational Linguistics. https://doi.org/10.18653/v1/p19-1645
