Learning precise timing with LSTM recurrent networks

  • Felix A. Gers
  • Nicol N. Schraudolph
  • Jürgen Schmidhuber
  • 473 Mendeley readers
  • 291 citations

The temporal distance between events conveys information essential for numerous sequential tasks such as motor control and rhythm detection. While Hidden Markov Models tend to ignore this information, recurrent neural networks (RNNs) can in principle learn to make use of it. We focus on Long Short-Term Memory (LSTM) because it has been shown to outperform other RNNs on tasks involving long time lags. We find that LSTM augmented by "peephole connections" from its internal cells to its multiplicative gates can learn the fine distinction between sequences of spikes spaced either 50 or 49 time steps apart without the help of any short training exemplars. Without external resets or teacher forcing, our LSTM variant also learns to generate stable streams of precisely timed spikes and other highly nonlinear periodic patterns. This makes LSTM a promising approach for tasks that require the accurate measurement or generation of time intervals.
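The peephole connections described in the abstract feed the internal cell state directly into the multiplicative gates, letting the gates condition on elapsed time even while the output gate is closed. A minimal sketch of one such cell update is below; the weight names (`W_i`, `p_i`, etc.) and the exact ordering of peephole reads are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def peephole_lstm_step(x, h_prev, c_prev, params):
    """One step of an LSTM cell with peephole connections.

    Peepholes give each gate an elementwise view of the cell state c
    (weights p_i, p_f, p_o), which is what allows the gates to react
    to the cell's internal "clock". All names here are illustrative.
    """
    W_i, W_f, W_o, W_c = params["W_i"], params["W_f"], params["W_o"], params["W_c"]
    p_i, p_f, p_o = params["p_i"], params["p_f"], params["p_o"]
    z = np.concatenate([x, h_prev])  # joint input + recurrent vector

    # Input and forget gates peek at the *previous* cell state.
    i = sigmoid(W_i @ z + p_i * c_prev)
    f = sigmoid(W_f @ z + p_f * c_prev)
    c = f * c_prev + i * np.tanh(W_c @ z)
    # The output gate peeks at the *updated* cell state.
    o = sigmoid(W_o @ z + p_o * c)
    h = o * np.tanh(c)
    return h, c
```

In a timing task such as the 49- vs 50-step spike discrimination, the cell state can accumulate a count across steps, and the peephole weights let the gates fire when that count crosses a learned threshold.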

Author-supplied keywords

  • Long Short-Term Memory
  • Recurrent Neural Networks
  • Timing


