Abstract
We study the problem of learning to infer hidden-state sequences of processes whose states and observations are propositionally or relationally factored. Unfortunately, standard exact inference techniques such as Viterbi and graphical-model inference exhibit exponential complexity for these processes. The main motivation behind our work is to identify a restricted space of models that facilitate efficient inference yet are expressive enough to remain useful in many applications. In particular, we present the penalty-logic simple-transition model, which utilizes a very simple transition structure where the transition cost between any two states is constant. While not appropriate for all complex processes, we argue that it is often expressive enough for many applications of interest, and when it is applicable there can be inference and learning advantages compared to more general models. In particular, we show that sequential inference for this model, that is, finding a minimum-cost state sequence, efficiently reduces to a single-state minimization (SSM) problem. We then show how to define atemporal cost models in terms of penalty logic, or weighted logical constraints, and how to use this representation for practically efficient SSM computation. We present a method for learning the weights of our model from labeled training data based on Perceptron updates. Finally, we give experiments in both propositional and relational video-interpretation domains showing advantages compared to more general models. © 2009 Wiley Periodicals, Inc.
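To illustrate why a constant transition cost helps, consider a minimal sketch of minimum-cost sequence inference over an explicitly enumerated state set. (The paper's contribution is to avoid such enumeration via penalty logic; this hypothetical `min_cost_sequence` only shows the dynamic-programming collapse: with a constant switching penalty, the best predecessor of any state is either the same state or the globally cheapest previous state, so each step needs only a single minimization rather than an all-pairs transition scan.)

```python
def min_cost_sequence(step_costs, trans_penalty):
    """Minimum-cost state sequence under a simple-transition model:
    remaining in a state is free, any state change costs trans_penalty.

    step_costs: list of dicts mapping state -> atemporal cost at that step.
    """
    prev = dict(step_costs[0])                 # best cost of each state at step 0
    backptrs = []
    for costs in step_costs[1:]:
        best_state = min(prev, key=prev.get)   # cheapest previous state overall
        best_cost = prev[best_state]
        cur, ptr = {}, {}
        for s, c in costs.items():
            stay = prev.get(s, float('inf'))   # remain in state s
            switch = best_cost + trans_penalty # jump in from the global best
            if stay <= switch:
                cur[s], ptr[s] = c + stay, s
            else:
                cur[s], ptr[s] = c + switch, best_state
        backptrs.append(ptr)
        prev = cur
    # Recover the optimal sequence by backtracking.
    s = min(prev, key=prev.get)
    total = prev[s]
    seq = [s]
    for ptr in reversed(backptrs):
        s = ptr[s]
        seq.append(s)
    return list(reversed(seq)), total

# Example: switching costs 2, so the cheap middle state 'b' is still worth entering.
seq, cost = min_cost_sequence(
    [{'a': 0, 'b': 5}, {'a': 4, 'b': 0}, {'a': 0, 'b': 1}], trans_penalty=2)
# → (['a', 'b', 'b'], 3)
```

Each step costs O(|S|) instead of the O(|S|²) of a general transition matrix; the paper goes further by never materializing the state set at all, using weighted logical constraints to solve each single-state minimization.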
Fern, A. (2009). A penalty-logic simple-transition model for structured sequences. Computational Intelligence, 25(4), 302–334. https://doi.org/10.1111/j.1467-8640.2009.00346.x