Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data

  • John Lafferty
  • Andrew McCallum
  • Fernando C. N. Pereira

Abstract

We present conditional random fields, a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation algorithms for conditional random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.
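For readers skimming the abstract, a brief sketch of the model may help; the notation below follows the common linear-chain presentation rather than the paper's exact statement. Given an observation sequence $x = (x_1, \dots, x_T)$ and a label sequence $y = (y_1, \dots, y_T)$, a linear-chain conditional random field defines

$$p_\lambda(y \mid x) = \frac{1}{Z(x)} \exp\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k f_k(y_{t-1}, y_t, x, t) \Big), \qquad Z(x) = \sum_{y'} \exp\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k f_k(y'_{t-1}, y'_t, x, t) \Big),$$

where the $f_k$ are feature functions and the $\lambda_k$ are learned weights. Because $Z(x)$ normalizes over entire label sequences rather than over the successors of each state, the model avoids the label bias affecting MEMMs that the abstract mentions. The following is a minimal, illustrative Python sketch (not code from the paper) of the forward recursion used to compute $\log Z(x)$; the function name, array layout, and variable names are assumptions made for the example.

    import numpy as np

    def log_partition(initial_scores, log_potentials):
        """log Z(x) for a linear-chain CRF via the forward recursion.

        initial_scores: shape (K,); unnormalized log score of each label at t = 0.
        log_potentials: shape (T-1, K, K); entry [t, i, j] is the unnormalized
            log score of label i at position t followed by label j at t + 1
            (with any per-position feature scores folded in).
        """
        alpha = initial_scores.copy()  # forward log-scores at t = 0
        for t in range(log_potentials.shape[0]):
            # log-sum-exp over the previous label for each current label
            alpha = np.logaddexp.reduce(alpha[:, None] + log_potentials[t], axis=0)
        return np.logaddexp.reduce(alpha)  # sum over the final label

The same recursion, run forward and backward, yields the marginals needed by the iterative parameter estimation algorithms the abstract refers to.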

Cite

CITATION STYLE

APA

Lafferty, J., McCallum, A., & Pereira, F. C. N. (2001). Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In ICML '01: Proceedings of the Eighteenth International Conference on Machine Learning (pp. 282–289).
