Abstract
Weighted finite automata (WFAs) are often used to represent probabilistic models, such as n-gram language models, since they are efficient in both time and space for recognition tasks. The probabilistic source to be represented as a WFA, however, may come in many forms. Given a generic probabilistic model over sequences, we propose an algorithm to approximate it as a weighted finite automaton such that the Kullback-Leibler divergence between the source model and the target WFA model is minimized. The proposed algorithm involves a counting step and a difference-of-convex optimization step, both of which can be performed efficiently. We demonstrate the usefulness of our approach on several tasks, including distilling n-gram models from neural models.
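As a rough illustration of the pipeline the abstract describes, here is a minimal Python sketch of the two stages: a counting step over the source model, followed by weight estimation for an n-gram WFA. It is a simplification, not the paper's algorithm: the exact expected-count computation is replaced by Monte Carlo sampling, and the difference-of-convex optimization is replaced by closed-form maximum-likelihood weights (which minimize KL divergence to the empirical sample distribution). The toy source model and all function names are hypothetical.

```python
import math
import random
from collections import Counter, defaultdict

VOCAB = ["a", "b", "</s>"]

def sample_sequence(max_len=20):
    """Draw one sequence from a toy source model.
    Stand-in for any model over sequences that supports sampling
    (e.g., a neural language model)."""
    seq = []
    while len(seq) < max_len:
        prev = seq[-1] if seq else "<s>"
        # Arbitrary toy conditional probabilities.
        probs = ({"a": 0.5, "b": 0.3, "</s>": 0.2} if prev != "b"
                 else {"a": 0.2, "b": 0.5, "</s>": 0.3})
        sym = random.choices(list(probs), weights=list(probs.values()))[0]
        if sym == "</s>":
            break
        seq.append(sym)
    return seq

def count_ngrams(num_samples=10000, n=2):
    """Counting step: estimate n-gram counts from the source by sampling.
    (The paper computes expected counts under the source model;
    Monte Carlo counts are a simple proxy.)"""
    counts = Counter()
    context_totals = Counter()
    for _ in range(num_samples):
        seq = ["<s>"] * (n - 1) + sample_sequence() + ["</s>"]
        for i in range(n - 1, len(seq)):
            ctx = tuple(seq[i - n + 1:i])
            counts[(ctx, seq[i])] += 1
            context_totals[ctx] += 1
    return counts, context_totals

def build_wfa(counts, context_totals):
    """Build a deterministic n-gram WFA: states are contexts, arcs carry
    negative-log conditional probabilities. Maximum-likelihood weights
    minimize KL divergence to the empirical sample distribution."""
    arcs = defaultdict(dict)  # context state -> {symbol: -log p(symbol | context)}
    for (ctx, sym), c in counts.items():
        arcs[ctx][sym] = -math.log(c / context_totals[ctx])
    return arcs

counts, totals = count_ngrams()
wfa = build_wfa(counts, totals)
for sym, w in sorted(wfa[("a",)].items()):
    print(f"state ('a',) --{sym}/{w:.3f}-->")
```

In this sketch the WFA has one state per observed context and no backoff structure; handling backoff (failure) transitions is where the paper's difference-of-convex formulation comes in, which this toy version deliberately omits.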