Learning rational stochastic languages

Abstract

Given a finite set of words w_1, …, w_n drawn independently according to a fixed unknown distribution P called a stochastic language, a usual goal in Grammatical Inference is to infer an estimate of P within some class of probabilistic models, such as Probabilistic Automata (PA). Here, we study the class S_ℝ^rat(Σ) of rational stochastic languages, which consists of the stochastic languages that can be generated by Multiplicity Automata (MA) and which strictly includes the class of stochastic languages generated by PA. Rational stochastic languages have a minimal normal representation which may be very concise, and whose parameters can be efficiently estimated from stochastic samples. We design an efficient inference algorithm, DEES, which aims at building a minimal normal representation of the target. Although no recursively enumerable class of MA computes exactly S_ℚ^rat(Σ), we show that DEES strongly identifies S_ℚ^rat(Σ) in the limit. We study the intermediary MA output by DEES and show that they compute rational series which converge absolutely and which can be used to provide stochastic languages that closely estimate the target. © Springer-Verlag Berlin Heidelberg 2006.
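For context, a sketch of the standard linear-representation view of an MA (the notation here is conventional and may differ in detail from the paper's): a multiplicity automaton over an alphabet Σ with n states can be described by a triple (ι, (M_x)_{x∈Σ}, τ), where ι ∈ ℝ^{1×n} is a vector of initial weights, each M_x ∈ ℝ^{n×n} is a transition-weight matrix for the letter x, and τ ∈ ℝ^{n×1} is a vector of terminal weights. The rational series r it computes assigns to a word w = x_1 ⋯ x_k the value

  r(w) = ι · M_{x_1} · M_{x_2} ⋯ M_{x_k} · τ,

and such a series is a rational stochastic language when r(w) ≥ 0 for every word w and the weights sum to one over Σ*, i.e. Σ_{w∈Σ*} r(w) = 1.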

Citation (APA)
Denis, F., Esposito, Y., & Habrard, A. (2006). Learning rational stochastic languages. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4005 LNAI, pp. 274–288). Springer Verlag. https://doi.org/10.1007/11776420_22
