New directions in connectionist language modeling

Abstract

In language engineering, language models are employed to improve system performance. These language models are usually N-gram models, estimated from large text corpora using N-gram occurrence frequencies. An alternative to this conventional frequency-based estimation of N-gram probabilities is to use neural networks. Although their training is very time-consuming, these “connectionist N-gram models” offer two interesting advantages over the conventional approach: the networks provide an implicit smoothing in their estimations, and the number of free parameters does not grow exponentially with N. This work provides empirical evidence of the capability of multilayer perceptrons and simple recurrent networks to emulate N-gram models, and proposes new directions for extending neural-network-based language models.
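To make the idea concrete, below is a minimal sketch (not the authors' implementation) of a connectionist N-gram model: a multilayer perceptron that maps the N-1 previous words to a probability distribution over the next word. PyTorch is assumed, and the vocabulary, toy corpus, and hyperparameters are purely illustrative.

```python
# Minimal sketch of a "connectionist N-gram" language model:
# an MLP estimating P(w_t | w_{t-N+1}, ..., w_{t-1}).
import torch
import torch.nn as nn

class MLPNgramLM(nn.Module):
    def __init__(self, vocab_size, n=3, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.context = n - 1                      # N-1 conditioning words
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(self.context * embed_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, vocab_size),    # logits over next word
        )

    def forward(self, context_ids):               # (batch, N-1) word indices
        e = self.embed(context_ids)               # (batch, N-1, embed_dim)
        return self.mlp(e.flatten(1))             # (batch, vocab_size)

# Toy usage: train on trigrams from a tiny corpus of word indices.
corpus = torch.tensor([0, 1, 2, 3, 1, 2, 4, 0, 1, 2, 3])
n, vocab_size = 3, 5
contexts = torch.stack([corpus[i:i + n - 1] for i in range(len(corpus) - n + 1)])
targets = corpus[n - 1:]

model = MLPNgramLM(vocab_size, n=n)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(contexts), targets)
    loss.backward()
    opt.step()

# The softmax output assigns nonzero probability to every word, even for
# contexts never seen in training: the "implicit smoothing" the abstract
# mentions. Parameters grow linearly with N (one more embedding slot per
# context word), not exponentially as in a frequency-based N-gram table.
probs = torch.softmax(model(contexts[:1]), dim=-1)
print(probs)
```

Note how the two advantages named in the abstract fall out of the architecture itself: smoothing comes for free from the softmax layer, and enlarging N only widens the input layer rather than multiplying the number of stored probabilities by the vocabulary size.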

Citation (APA)

José Castro, M., & Prat, F. (2003). New directions in connectionist language modeling. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2686, pp. 598–605). Springer-Verlag. https://doi.org/10.1007/3-540-44868-3_76
