Supervised learning of hidden Markov models for sequence discrimination


Abstract

We present two supervised learning algorithms for hidden Markov models (HMMs) applied to sequence discrimination. When a class of sequences is modeled with an HMM, conventional learning algorithms train the HMM only on examples belonging to the class, i.e. positive examples alone; both of our methods also make use of negative examples. The first algorithm minimizes a distance between a target likelihood for each training sequence and the actual likelihood assigned to it by the HMM, using an additive parameter update based on gradient-descent learning. The second algorithm maximizes a criterion expressing the ratio of a positive example's likelihood to the total likelihood over all examples, using a multiplicative parameter update that is more efficient in actual computation time than the additive one. We compare our two methods with two conventional methods in cross-validation experiments on actual motif classification. The results show that, in terms of the average number of classification errors, our two methods outperform the two conventional algorithms.
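The additive, gradient-descent variant described above can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's exact update rule: the softmax parameterization, the target likelihoods of 1 (positive) and 0 (negative), and the finite-difference gradient are all simplifying assumptions made for the sketch.

```python
import numpy as np

def forward_likelihood(seq, pi, A, B):
    """Likelihood P(seq | HMM) via the forward algorithm.
    pi: initial state probs (n,), A: transitions (n, n), B: emissions (n, m)."""
    alpha = pi * B[:, seq[0]]
    for sym in seq[1:]:
        alpha = (alpha @ A) * B[:, sym]
    return float(alpha.sum())

def softmax(w):
    e = np.exp(w - w.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def train_discriminative(pos, neg, n_states, n_symbols,
                         lr=0.5, iters=150, seed=0):
    """Gradient descent on the squared distance between a target likelihood
    (1 for positive, 0 for negative examples) and the model's likelihood.
    Logit parameters passed through softmax keep all distributions
    normalized; the gradient is finite-difference purely for brevity."""
    rng = np.random.default_rng(seed)
    n_a, n_b = n_states * n_states, n_states * n_symbols
    w = rng.normal(scale=0.1, size=n_states + n_a + n_b)

    def unpack(w):
        pi = softmax(w[:n_states])
        A = softmax(w[n_states:n_states + n_a].reshape(n_states, n_states))
        B = softmax(w[n_states + n_a:].reshape(n_states, n_symbols))
        return pi, A, B

    # Positive examples target likelihood 1, negatives target 0.
    data = [(s, 1.0) for s in pos] + [(s, 0.0) for s in neg]

    def loss(w):
        pi, A, B = unpack(w)
        return sum((forward_likelihood(s, pi, A, B) - t) ** 2 for s, t in data)

    eps = 1e-6
    for _ in range(iters):
        grad = np.zeros_like(w)
        for j in range(w.size):
            wp, wm = w.copy(), w.copy()
            wp[j] += eps
            wm[j] -= eps
            grad[j] = (loss(wp) - loss(wm)) / (2 * eps)
        w = w - lr * grad  # additive (gradient-descent) parameter update
    return unpack(w), loss(w)
```

On a toy task with a positive sequence of all 0s and a negative sequence of all 1s, training drives the model to assign a higher likelihood to the positive than to the negative, while the softmax parameterization guarantees the model remains a proper HMM throughout.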


Mamitsuka, H. (1997). Supervised learning of hidden Markov models for sequence discrimination. In Proceedings of the Annual International Conference on Computational Molecular Biology, RECOMB (pp. 202–208). ACM. https://doi.org/10.1145/267521.267551
