Complexity of comparing hidden Markov models

Abstract

The basic theory of hidden Markov models was developed and applied to problems in speech recognition in the late 1960s, and has since been applied to numerous problems, e.g. biological sequence analysis. In this paper we consider the problem of computing the most likely string generated by a given model, and its implications for the complexity of comparing hidden Markov models. We show that computing the most likely string, and approximating its probability within any constant factor, is NP-hard, and establish the NP-hardness of comparing two hidden Markov models under the L∞- and L1-norms. We discuss the applicability of the technique used to other measures of distance between probability distributions. In particular, we show that it cannot be used to prove NP-hardness of determining the Kullback-Leibler distance between the probability distributions of two hidden Markov models, or of comparing them under the Lk-norm for any fixed even integer k. © 2001 Springer Berlin Heidelberg.
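The contrast driving the hardness result above can be illustrated with a small sketch (not taken from the paper): the probability that an HMM emits a *given* string is computable in polynomial time with the standard forward algorithm, but the only obvious way to find the *most likely* string of a given length is to enumerate all of them, and the paper shows that no polynomial-time shortcut exists unless P = NP. The model below (two states, alphabet {a, b}, uniform transitions) is a made-up example, not one from the paper.

```python
import itertools

def string_probability(init, trans, emit, string):
    """P(model emits `string`) via the forward algorithm -- polynomial time.

    init[s]     : probability of starting in state s
    trans[s][t] : probability of moving from state s to state t
    emit[s][c]  : probability of emitting symbol c while in state s
    """
    if not string:
        raise ValueError("expected a non-empty string")
    n = len(init)
    # alpha[s] = P(prefix emitted so far, and current state is s)
    alpha = [init[s] * emit[s].get(string[0], 0.0) for s in range(n)]
    for c in string[1:]:
        alpha = [sum(alpha[s] * trans[s][t] for s in range(n)) * emit[t].get(c, 0.0)
                 for t in range(n)]
    return sum(alpha)

def most_likely_string(init, trans, emit, alphabet, length):
    """Brute force over all |alphabet|**length strings -- exponential time,
    consistent with the NP-hardness of the general problem."""
    return max(("".join(w) for w in itertools.product(alphabet, repeat=length)),
               key=lambda w: string_probability(init, trans, emit, w))

# Toy 2-state model: state 0 favours 'a', state 1 favours 'b'.
init = [1.0, 0.0]
trans = [[0.5, 0.5], [0.5, 0.5]]
emit = [{"a": 0.9, "b": 0.1}, {"a": 0.1, "b": 0.9}]
```

For this toy model, `most_likely_string(init, trans, emit, "ab", 2)` evaluates all four length-2 strings; with a real instance the search space grows as |Σ|^n, which is exactly where the reduction in the paper bites.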

Citation (APA)

Lyngsø, R. B., & Pedersen, C. N. S. (2001). Complexity of comparing hidden Markov models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2223 LNCS, pp. 416–428). https://doi.org/10.1007/3-540-45678-3_36
