Optimal training and efficient model selection for parameterized large margin learning


Abstract

Recently, diverse variations of the large margin learning formalism have been proposed to improve the flexibility and performance of classic discriminative models such as the SVM. However, extra difficulties arise in optimizing non-convex learning objectives and selecting multiple hyperparameters. Observing that many variations of large margin learning can be reformulated as jointly minimizing a parameterized quadratic objective, in this paper we propose a novel optimization framework, namely the Parametric Dual sub-Gradient Descent Procedure (PDGDP), which yields a globally optimal training algorithm and an efficient model selection algorithm for two classes of large margin learning variations. The theoretical basis is a series of new results for parametric programs, which characterize the local and global structure of the dual optimum. The proposed algorithms are evaluated on two representative applications, i.e., the training of latent SVM and the model selection of cost-sensitive feature re-scaling SVM. The results show that PDGDP-based training and model selection achieve significant improvement over the state-of-the-art approaches.
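The sketch below is a minimal, illustrative Python example (not the paper's PDGDP algorithm) of the kind of parameterized quadratic objective the abstract refers to: for a fixed feature re-scaling vector theta, a cost-sensitive feature re-scaling SVM reduces to an ordinary convex SVM quadratic program, and model selection then amounts to a search over theta. The function name, the toy data, and the naive grid search over theta are assumptions made purely for illustration.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def rescaled_svm_score(X, y, theta, C=1.0):
    # For a fixed re-scaling vector theta, training a linear SVM on the
    # element-wise re-scaled features theta * x is a standard convex SVM QP.
    X_scaled = X * theta
    clf = LinearSVC(C=C)
    return cross_val_score(clf, X_scaled, y, cv=3).mean()

# Naive outer loop: model selection by grid search over theta. The paper
# replaces this outer search with its parametric dual sub-gradient descent
# procedure, which this sketch does not implement.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

best_score, best_theta = -np.inf, None
for theta in [np.ones(5), np.array([2.0, 1.0, 0.5, 0.5, 0.5])]:
    score = rescaled_svm_score(X, y, theta)
    if score > best_score:
        best_score, best_theta = score, theta
print("best CV accuracy %.3f with theta %s" % (best_score, best_theta))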

Citation (APA)

Zhou, Y., Baek, J. Y., Li, D., & Spanos, C. J. (2016). Optimal training and efficient model selection for parameterized large margin learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9651, pp. 52–64). Springer Verlag. https://doi.org/10.1007/978-3-319-31753-3_5
