Stochastic gradient descent in continuous time: A central limit theorem


Abstract

Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance. The SGDCT algorithm follows a (noisy) descent direction along a continuous stream of data. The parameter updates occur in continuous time and satisfy a stochastic differential equation. This paper analyzes the asymptotic convergence rate of the SGDCT algorithm by proving a central limit theorem for strongly convex objective functions and, under slightly stronger conditions, for non-convex objective functions as well. An L^p convergence rate is also proven for the algorithm in the strongly convex case. The mathematical analysis lies at the intersection of stochastic analysis and statistical learning.
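For intuition, here is a minimal Python sketch of how such a continuous-time update can be simulated: an Euler discretization of a gradient flow of the form d(theta_t) = -alpha_t * grad g(theta_t; X_t) dt, driven by a stream of data. The learning-rate schedule alpha_t = C / (C0 + t) and the toy quadratic objective below are illustrative assumptions, not the paper's exact algorithm.

    import numpy as np

    def sgdct_euler(grad, x_stream, dt, theta0, C=1.0, C0=1.0):
        # Euler discretization of the continuous-time update
        #     d(theta_t) = -alpha_t * grad(theta_t, X_t) dt,
        # with a decaying learning rate alpha_t = C / (C0 + t).
        # The schedule and the scalar parameter are illustrative choices.
        theta = float(theta0)
        t = 0.0
        for x in x_stream:
            alpha = C / (C0 + t)
            theta -= alpha * grad(theta, x) * dt
            t += dt
        return theta

    # Toy data stream: noisy observations of theta* = 2.0. The strongly
    # convex objective g(theta; x) = 0.5 * (theta - x)**2 has gradient
    # theta - x, so the noisy descent direction is unbiased at theta*.
    rng = np.random.default_rng(0)
    dt = 0.01
    xs = 2.0 + rng.standard_normal(200_000)
    theta_hat = sgdct_euler(lambda th, x: th - x, xs, dt, theta0=0.0)
    print(theta_hat)  # drifts toward 2.0 as t grows

Roughly speaking, the paper's central limit theorem says that in the strongly convex case the rescaled error sqrt(t) * (theta_t - theta*) converges in distribution to a Gaussian; a sketch like the one above can be used to check that scaling empirically over repeated runs.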

Citation (APA)

Sirignano, J., & Spiliopoulos, K. (2020). Stochastic gradient descent in continuous time: A central limit theorem. Stochastic Systems, 10(2), 124–151. https://doi.org/10.1287/stsy.2019.0050
