Finite automata for compact representation of language models in NLP


Abstract

A technique for the compact representation of language models in Natural Language Processing is presented. After a brief review of the motivations for more compact language models, it is shown how finite-state automata can be used to represent such models compactly. The technique can be seen as an application and extension of perfect hashing by means of finite-state automata. Preliminary practical experiments indicate that the technique yields considerable space savings, up to 90% in practice.
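To illustrate the core idea behind perfect hashing with finite-state automata, the following is a minimal sketch (not the authors' implementation): each automaton state is annotated with the number of accepted words in its subtree, and a word's unique index is computed by summing the counts of words that precede it lexicographically along its path. For simplicity the automaton here is a trie; the space savings reported in the paper come from using a *minimal* automaton, in which shared suffixes are merged.

```python
class State:
    def __init__(self):
        self.transitions = {}   # symbol -> State
        self.final = False
        self.count = 0          # number of accepted words reachable from here

def build(words):
    """Build a trie over the word list and annotate each state with
    the number of accepted words in its subtree."""
    root = State()
    for word in words:
        state = root
        for ch in word:
            state = state.transitions.setdefault(ch, State())
        state.final = True
    annotate(root)
    return root

def annotate(state):
    state.count = (1 if state.final else 0) + sum(
        annotate(t) for t in state.transitions.values())
    return state.count

def word_to_index(root, word):
    """Perfect hash: map an accepted word to a unique integer
    in [0, number_of_words), its rank in lexicographic order."""
    index = 0
    state = root
    for ch in word:
        if state.final:
            index += 1          # a shorter accepted prefix precedes this word
        for sym in sorted(state.transitions):
            if sym == ch:
                break
            index += state.transitions[sym].count  # skip earlier branches
        state = state.transitions[ch]
    return index

words = ["a", "an", "ant", "bat", "bee"]
root = build(words)
# Each word hashes to its lexicographic rank:
# "a" -> 0, "an" -> 1, "ant" -> 2, "bat" -> 3, "bee" -> 4
```

Because every word maps to a distinct integer in a dense range, n-gram probabilities can be stored in a plain array indexed by these hash values, with no keys stored at all; this is what makes the automaton-based representation so compact.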

Citation (APA)

Daciuk, J., & van Noord, G. (2002). Finite automata for compact representation of language models in NLP. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2494, pp. 65–73). Springer Verlag. https://doi.org/10.1007/3-540-36390-4_6
