The huge popularity of hidden Markov models (HMMs) in pattern recognition stems from the ability to "learn" model parameters from an observation sequence through Baum-Welch and other re-estimation procedures. When HMM parameters are estimated from an ensemble of observation sequences rather than a single sequence, we require techniques that maximize the likelihood of the estimated model given the entire set of observation sequences. The importance of this study is that HMMs with parameters estimated from multiple observations are shown to be many orders of magnitude more probable than HMMs learned from any single observation sequence, so the effectiveness of HMM "learning" is greatly enhanced. In this paper we present techniques that usually find models significantly more likely than those found by Rabiner's well-known method, on both seen and unseen sequences.
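To make the multiple-sequence setting concrete, below is a minimal sketch of pooled Baum-Welch re-estimation for a discrete-output HMM: expected transition and emission counts are accumulated over every sequence in the ensemble before each parameter update. This is the generic ensemble extension of Baum-Welch, not necessarily the specific technique proposed in this paper, and all function names are illustrative.

```python
import numpy as np

def forward_backward(A, B, pi, obs):
    """Scaled forward-backward pass for one discrete observation sequence."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N)); beta = np.zeros((T, N)); c = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
    return alpha, beta, c

def baum_welch_multi(A, B, pi, sequences, n_iter=10):
    """Re-estimate (A, B, pi) by pooling expected counts over all sequences."""
    N, M = B.shape
    for _ in range(n_iter):
        A_num = np.zeros((N, N)); B_num = np.zeros((N, M)); pi_num = np.zeros(N)
        for obs in sequences:
            alpha, beta, c = forward_backward(A, B, pi, obs)
            # With this scaling, alpha * beta is already the posterior gamma.
            gamma = alpha * beta
            gamma /= gamma.sum(axis=1, keepdims=True)
            pi_num += gamma[0]
            for t in range(len(obs) - 1):
                # Expected transition counts xi_t(i, j), accumulated in place.
                A_num += (alpha[t][:, None] * A
                          * B[:, obs[t + 1]] * beta[t + 1]) / c[t + 1]
            for t, o in enumerate(obs):
                B_num[:, o] += gamma[t]
        # Normalize pooled counts into valid stochastic matrices.
        A = A_num / A_num.sum(axis=1, keepdims=True)
        B = B_num / B_num.sum(axis=1, keepdims=True)
        pi = pi_num / len(sequences)
    return A, B, pi
```

The key design point is that the counts in `A_num`, `B_num`, and `pi_num` are summed across the whole ensemble before normalization, rather than re-estimating from each sequence separately.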