An efficient primal dual prox method for non-smooth optimization

Abstract

We study non-smooth optimization problems in machine learning, where both the loss function and the regularizer are non-smooth functions. Previous studies on efficient empirical loss minimization assume either a smooth loss function or a strongly convex regularizer, making them unsuitable for non-smooth optimization. We develop a simple yet efficient method for a family of non-smooth optimization problems where the dual form of the loss function is bilinear in the primal and dual variables. We cast a non-smooth optimization problem into a minimax optimization problem, and develop a primal dual prox method that solves the minimax optimization problem at a rate of O(1/T), assuming that the proximal step can be efficiently solved; this is significantly faster than a standard subgradient descent method, which has an O(1/√T) convergence rate. Our empirical studies verify the efficiency of the proposed method on various non-smooth optimization problems that arise ubiquitously in machine learning, comparing it to state-of-the-art first-order methods.
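To make the idea concrete, the sketch below applies generic primal-dual prox updates to one instance of this problem family: an l1-regularized hinge-loss (SVM) objective, whose dual form is bilinear in the primal variable x and the dual variable alpha. The function names, step sizes, and iterate averaging are illustrative assumptions for this sketch, not the authors' exact Pdprox algorithm or its step-size conditions.

```python
import numpy as np

def soft_threshold(v, thresh):
    """Proximal operator of thresh * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def primal_dual_prox(A, b, lam, T=1000, tau=None, sigma=None):
    """Primal-dual prox iterations for the saddle-point (minimax) form of
    l1-regularized hinge loss,

        min_x  max_{alpha in [0,1]^n}  alpha^T (1 - diag(b) A x) + lam * ||x||_1,

    where A is the n-by-d data matrix and b holds the +/-1 labels.
    Returns the averaged primal iterate.  Step sizes are illustrative
    choices for this sketch, not the conditions from the paper.
    """
    n, d = A.shape
    BA = b[:, None] * A                      # diag(b) @ A, the bilinear coupling
    op_norm = np.linalg.norm(BA, 2)          # spectral norm of the coupling matrix
    tau = tau if tau is not None else 1.0 / op_norm
    sigma = sigma if sigma is not None else 1.0 / op_norm

    x = np.zeros(d)
    alpha = np.zeros(n)
    x_sum = np.zeros(d)

    for _ in range(T):
        # dual prox step: gradient ascent on alpha, then projection onto [0,1]^n
        alpha = np.clip(alpha + sigma * (1.0 - BA @ x), 0.0, 1.0)
        # primal prox step: gradient descent on x, then soft-thresholding for the l1 term
        x = soft_threshold(x + tau * (BA.T @ alpha), tau * lam)
        x_sum += x

    return x_sum / T
```

Both proximal steps here are cheap (a clip onto [0,1]^n and a soft-threshold), so each iteration costs only two matrix-vector products; this is the setting the abstract refers to when it assumes the proximal step can be solved efficiently. The precise update rules and step-size conditions under which the averaged iterates achieve the O(1/T) rate are established in the article itself.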

Cite

APA

Yang, T., Mahdavi, M., Jin, R., & Zhu, S. (2015). An efficient primal dual prox method for non-smooth optimization. Machine Learning, 98(3), 369–406. https://doi.org/10.1007/s10994-014-5436-1
