On the Learnability and Usage of Acyclic Probabilistic Finite Automata



Abstract

We propose and analyze a distribution learning algorithm for a subclass of acyclic probabilistic finite automata (APFA). This subclass is characterized by a certain distinguishability property of the automata's states. Though hardness results are known for learning distributions generated by general APFAs, we prove that our algorithm can efficiently learn distributions generated by the subclass of APFAs we consider. In particular, we show that the KL-divergence between the distribution generated by the target source and the distribution generated by our hypothesis can be made arbitrarily small with high confidence in polynomial time. We present two applications of our algorithm. In the first, we show how to model cursively written letters. The resulting models are part of a complete cursive handwriting recognition system. In the second application we demonstrate how APFAs can be used to build multiple-pronunciation models for spoken words. We evaluate the APFA-based pronunciation models on labeled speech data. The good performance (in terms of the log-likelihood obtained on test data) achieved by the APFAs and the little time needed for learning suggest that the APFA learning algorithm might be a powerful alternative to commonly used probabilistic models. © 1998 Academic Press.
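The abstract treats an APFA as a generator of a probability distribution over strings: each state assigns probabilities to outgoing symbol-labeled transitions, and acyclicity guarantees every generated string is finite. As a rough illustration only (this is a hypothetical toy automaton, not the paper's learning algorithm or its state representation), the following Python sketch defines a small APFA, computes the exact probability of a string, and samples from the induced distribution:

```python
import random

# Hypothetical toy APFA over the alphabet {'a', 'b'}.
# Each state maps a symbol to (next_state, probability); the special
# symbol '$' marks the end of a string, so the automaton defines a
# probability distribution over finite strings.
APFA = {
    0: {'a': (1, 0.6), 'b': (2, 0.4)},
    1: {'a': (3, 0.5), 'b': (3, 0.5)},
    2: {'b': (3, 1.0)},
    3: {'$': (None, 1.0)},  # final state: emit the end marker with prob. 1
}

def string_prob(apfa, s, start=0):
    """Probability that the APFA generates string s and then halts."""
    state, p = start, 1.0
    for sym in s + '$':
        trans = apfa[state].get(sym)
        if trans is None:          # no such transition: probability zero
            return 0.0
        state, q = trans
        p *= q
    return p

def sample(apfa, start=0, rng=random):
    """Draw one string from the APFA's distribution."""
    state, out = start, []
    while True:
        syms = list(apfa[state])
        weights = [apfa[state][s][1] for s in syms]
        sym = rng.choices(syms, weights=weights)[0]
        if sym == '$':
            return ''.join(out)
        out.append(sym)
        state = apfa[state][sym][0]
```

Here `string_prob(APFA, 'aa')` is 0.6 × 0.5 = 0.3, and the three reachable strings 'aa', 'ab', 'bb' have probabilities summing to 1. The KL-divergence bound the abstract mentions is stated between the target APFA's distribution and the learned hypothesis's distribution over such strings.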

Citation (APA)

Ron, D., Singer, Y., & Tishby, N. (1998). On the Learnability and Usage of Acyclic Probabilistic Finite Automata. Journal of Computer and System Sciences, 56(2), 133–152. https://doi.org/10.1006/jcss.1997.1555
