Accelerated training of conditional random fields with stochastic gradient methods

  • S V N Vishwanathan
  • N N Schraudolph
  • M W Schmidt
  • K P Murphy


We apply Stochastic Meta-Descent (SMD), a stochastic gradient optimization method with gain vector adaptation, to the training of Conditional Random Fields (CRFs). On several large data sets, the resulting optimizer converges to the same quality of solution over an order of magnitude faster than limited-memory BFGS, the leading method reported to date. We report results for both exact and inexact inference techniques.
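The SMD recipe the abstract refers to is compact enough to sketch. Below is a minimal, illustrative Python/NumPy version of stochastic gradient descent with gain vector adaptation: each parameter carries its own step size, adapted multiplicatively using the agreement between the current stochastic gradient and an auxiliary vector v that tracks how the parameters have responded to gain changes, with v itself maintained via a Hessian-vector product. The objective (a toy regularized logistic regression standing in for the CRF log-likelihood), the function names (grad, smd), the hyperparameter values, and the sign conventions are assumptions for illustration only; SMD appears in several equivalent forms in the literature and this is not the paper's code.

```python
import numpy as np

# Toy objective: L2-regularized logistic regression. A linear-chain CRF
# with a single node and binary labels reduces to this, so it serves as
# a stand-in for CRF training; names and hyperparameters are illustrative.

def grad(theta, X, y, lam):
    """Gradient of the regularized negative log-likelihood on a batch."""
    p = 1.0 / (1.0 + np.exp(-X @ theta))        # model probabilities
    return X.T @ (p - y) / len(y) + lam * theta

def smd(X, y, lam=1e-3, eta0=0.1, mu=0.05, lmbda=0.99,
        batch=10, epochs=20, eps=1e-6):
    """Stochastic Meta-Descent: SGD with adaptive per-parameter gains."""
    d = X.shape[1]
    theta = np.zeros(d)
    eta = np.full(d, eta0)   # per-parameter gain vector
    v = np.zeros(d)          # response of theta to log-gain perturbations
    for _ in range(epochs):
        for i in range(0, len(y), batch):
            sl = slice(i, i + batch)
            g = grad(theta, X[sl], y[sl], lam)
            # Hessian-vector product H v via finite differences of the
            # gradient (exact matrix-free Hv products are also possible).
            Hv = (grad(theta + eps * v, X[sl], y[sl], lam) - g) / eps
            # Gain adaptation: grow a gain while the gradient keeps
            # agreeing with recent movement, shrink it on oscillation;
            # max(0.5, .) bounds any single multiplicative decrease.
            eta *= np.maximum(0.5, 1.0 - mu * g * v)
            theta -= eta * g                      # SGD step with gains
            v = lmbda * v - eta * (g + lmbda * Hv)
    return theta

# Usage on synthetic data:
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)
print(smd(X, y))
```

The finite-difference Hessian-vector product keeps the sketch dependency-free; in the SMD literature this product is typically computed exactly and matrix-free, at roughly the cost of one extra gradient evaluation.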

Author-supplied keywords

  • Convergence of numerical methods
  • Data reduction
  • Le
