Continuous-space language processing: Beyond word embeddings


Abstract

In recent years, spoken and written language processing has shifted dramatically toward continuous-space representations of language, built with neural networks and other distributional methods; word embeddings in particular are now used in many applications. This paper examines the advantages of the continuous-space approach and the limitations of word embeddings, reviewing recent work that attempts to model more of the structure in language. We also discuss how current models characterize exceptions in language, and opportunities for advances from integrating traditional and continuous approaches.
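As an illustration of the continuous-space idea the abstract refers to, the sketch below (not from the paper; the vectors are made-up toy values, not trained embeddings) shows the core mechanics of word embeddings: each word is mapped to a dense real-valued vector, and semantic relatedness is measured geometrically, e.g. with cosine similarity.

```python
import math

# Toy 3-dimensional "embeddings" (illustrative values, not trained).
embeddings = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.7, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Related words end up closer together in the continuous space.
sim_related = cosine(embeddings["king"], embeddings["queen"])
sim_unrelated = cosine(embeddings["king"], embeddings["apple"])
print(sim_related > sim_unrelated)  # prints: True
```

In a real system the vectors would be learned from corpus statistics (e.g. via a skip-gram or matrix-factorization objective) rather than hand-assigned, but the lookup-then-compare pattern is the same.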

Citation (APA)

Ostendorf, M. (2016). Continuous-space language processing: Beyond word embeddings. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9918 LNCS, pp. 3–15). Springer Verlag. https://doi.org/10.1007/978-3-319-45925-7_1
