Abstract
In the variational learning of a Bayesian hidden Markov model (HMM), the forward-backward algorithm is applied heuristically, without theoretical justification. This is potentially problematic, because the original derivation of the forward-backward algorithm implicitly requires the parameters to be normalized, which does not hold in the variational learning of a Bayesian HMM. In this paper, we prove that this requirement is not necessary for the forward-backward algorithm to produce the correct result. We establish this from two perspectives. The first proof directly verifies that running the forward-backward algorithm with the unnormalized parameters is equivalent to running it with the normalized parameters. The second proof gives a new derivation of the forward-backward algorithm that relies neither on the hidden Markov assumptions nor on any probabilistic interpretation of the parameters. Consequently, applying the forward-backward algorithm in the variational learning of Bayesian hidden Markov models is theoretically justified.
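The central claim can be illustrated numerically. Below is a minimal Python/NumPy sketch (not code from the paper; the weights `pi`, `A`, and `B` are arbitrary nonnegative arrays standing in for unnormalized quantities such as exp(E[log A]) that arise in variational Bayes). It runs the standard scaled forward-backward recursions on unnormalized parameters and checks by brute-force enumeration that the resulting posterior marginals match those of the chain distribution defined only up to a normalizing constant, which is the setting covered by the paper's second, assumption-free derivation.

```python
# Minimal sketch (illustrative only): forward-backward computes the correct
# posterior marginals of a chain distribution even when the "transition" and
# "emission" parameters are unnormalized, because the sum-product recursions
# never use row-normalization of the parameters.
import itertools
import numpy as np

rng = np.random.default_rng(0)
K, T = 3, 5                      # hidden states, sequence length

pi = rng.random(K)               # unnormalized initial weights (hypothetical)
A = rng.random((K, K))           # unnormalized transition weights
B = rng.random((T, K))           # unnormalized per-step emission weights b_t(k)

# --- forward pass with per-step rescaling (guards against under/overflow) ---
alpha = np.zeros((T, K))
alpha[0] = pi * B[0]
alpha[0] /= alpha[0].sum()
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * B[t]
    alpha[t] /= alpha[t].sum()

# --- backward pass, rescaled the same way ---
beta = np.ones((T, K))
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[t + 1] * beta[t + 1])
    beta[t] /= beta[t].sum()

gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)    # posterior marginals q(z_t)

# --- brute force: marginals of the unnormalized chain distribution,
#     q(z) proportional to pi[z_1] * prod_t A[z_{t-1}, z_t] * prod_t b_t(z_t)
brute = np.zeros((T, K))
for z in itertools.product(range(K), repeat=T):
    w = pi[z[0]] * np.prod([B[t, z[t]] for t in range(T)]) \
        * np.prod([A[z[t - 1], z[t]] for t in range(1, T)])
    for t in range(T):
        brute[t, z[t]] += w
brute /= brute.sum(axis=1, keepdims=True)

assert np.allclose(gamma, brute)  # same marginals despite unnormalized inputs
```

The per-step rescaling multiplies each alpha_t and beta_t only by a constant, and every such constant cancels when gamma_t is normalized; this is why normalization of the parameters themselves is never actually needed by the recursions.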
Citation
Li, T., & Ma, J. (2022). On theoretical justification of the forward–backward algorithm for the variational learning of Bayesian hidden Markov models. IET Signal Processing, 16(6), 674–679. https://doi.org/10.1049/sil2.12129