Online convex optimization with switching cost and delayed gradients


Abstract

We consider the online convex optimization (OCO) problem with quadratic and linear switching costs in the limited-information setting, where an online algorithm can choose its action using only gradient information about the previous objective function. For L-smooth and μ-strongly convex objective functions, we propose an online multiple gradient descent (OMGD) algorithm and show that its competitive ratio for the OCO problem with quadratic switching cost is at most 4(L+5)+. This upper bound on the competitive ratio of OMGD is also shown to be order-wise tight in terms of L and μ. In addition, we show that the competitive ratio of any online algorithm in the limited-information setting is max{Ω(L),Ω when the switching cost is quadratic. We also show that OMGD achieves the order-wise optimal dynamic regret in the limited-information setting. For the linear switching cost, the competitive ratio upper bound of OMGD is shown to depend on both the path length and the squared path length of the problem instance, in addition to L and μ, and to be, order-wise, the best competitive ratio any online algorithm can achieve. Consequently, we conclude that the optimal competitive ratios for quadratic and linear switching costs are fundamentally different in the limited-information setting.
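The OMGD idea described above can be sketched in a few lines: at each round the learner takes several gradient descent steps, but only on the previous round's objective, and then pays a hitting cost plus a quadratic switching cost. The synthetic quadratic hitting costs, the step size 1/L, and the number of inner steps K below are illustrative assumptions for a minimal demonstration, not the paper's exact parameter choices.

```python
import numpy as np

# Sketch of online multiple gradient descent (OMGD) in the
# limited-information model: at round t, only the gradient of the
# previous objective f_{t-1} may be queried.
L, mu = 4.0, 1.0                      # smoothness / strong convexity (assumed)
rng = np.random.default_rng(0)
targets = rng.normal(size=(20, 3))    # minimizers theta_t of synthetic costs

def grad_f(t, x):
    # f_t(x) = (mu/2)||x - theta_t||^2 is mu-strongly convex and mu-smooth
    return mu * (x - targets[t])

def hit_cost(t, x):
    return 0.5 * mu * np.sum((x - targets[t]) ** 2)

eta = 1.0 / L                         # standard step size for L-smooth objectives
K = 5                                 # inner gradient steps per round (assumed)

x = np.zeros(3)
total = 0.0
for t in range(1, len(targets)):
    y = x.copy()
    for _ in range(K):                # K descent steps using f_{t-1} only
        y -= eta * grad_f(t - 1, y)
    switching = np.sum((y - x) ** 2)  # quadratic switching cost ||x_t - x_{t-1}||^2
    total += hit_cost(t, y) + switching
    x = y
print(round(total, 3))
```

With the quadratic switching cost replaced by the norm of the movement, `np.linalg.norm(y - x)`, the same loop covers the linear switching cost case discussed in the abstract.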

Citation (APA)

Senapati, S., & Vaze, R. (2023). Online convex optimization with switching cost and delayed gradients. Performance Evaluation, 162. https://doi.org/10.1016/j.peva.2023.102371
