Input/Output HMMs: A Recurrent Bayesian Network View

  • Frasconi, P.

Abstract

This paper reviews Markovian models for sequence processing tasks, with particular emphasis on input/output hidden Markov models (IOHMMs) for supervised learning on temporal domains. HMMs and IOHMMs are viewed as special cases of belief networks that might be called recurrent Bayesian networks. This view opens the way to more general structures that could be devised for learning probabilistic relationships among sets of data streams (instead of just input and output data streams), or that might exploit multiple hidden state variables. By introducing the concept of belief network unfolding, it is shown that recurrent Bayesian networks operating on discrete domains are equivalent to recurrent neural networks with higher-order connections and linear units.
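The equivalence mentioned at the end of the abstract can be illustrated with the forward recursion of a discrete IOHMM: each step updates the state-belief vector by a linear map whose coefficients are selected by the current input symbol, i.e. a product of previous unit activations and input-gated weights (higher-order connections, linear units). Below is a minimal sketch under that reading; the toy dimensions, tensor names (`A`, `B`), and random parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy IOHMM with discrete inputs/outputs (sizes are illustrative).
n_states, n_inputs, n_outputs = 3, 2, 2
rng = np.random.default_rng(0)

# Input-conditioned transitions: A[u, i, j] = P(s_t = j | s_{t-1} = i, x_t = u)
A = rng.random((n_inputs, n_states, n_states))
A /= A.sum(axis=2, keepdims=True)

# Input-conditioned emissions: B[u, j, k] = P(y_t = k | s_t = j, x_t = u)
B = rng.random((n_inputs, n_states, n_outputs))
B /= B.sum(axis=2, keepdims=True)

pi = np.full(n_states, 1.0 / n_states)  # initial state distribution

def forward(xs, ys):
    """Return P(y_1..T | x_1..T). Each step is a linear map of the
    previous alpha vector with input-selected coefficients -- the
    'higher-order linear unit' view of the unfolded network."""
    alpha = pi * B[xs[0], :, ys[0]]
    for x, y in zip(xs[1:], ys[1:]):
        alpha = (alpha @ A[x]) * B[x, :, y]
    return alpha.sum()

xs = [0, 1, 1, 0]  # input sequence
ys = [1, 0, 1, 1]  # output sequence
p = forward(xs, ys)
```

For any fixed input sequence, the probabilities `forward(xs, ys)` over all possible output sequences sum to one, which is one way to sanity-check that the recursion implements a proper conditional model.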

Citation (APA)

Frasconi, P. (1997). Input/Output HMMs: A Recurrent Bayesian Network View (pp. 63–79). https://doi.org/10.1007/978-1-4471-0951-8_4
