Hinge loss projection for classification

Abstract

Hinge loss is a one-sided loss function that yields better solutions than the squared error (SE) loss for classification. Data points whose predicted values exceed 1 for the positive class, or fall below −1 for the negative class, contribute zero loss under the hinge function. In most classification tasks, however, least squares (LS) methods such as ridge regression use SE instead of the hinge function. In this paper, a simple projection method is used to minimize the hinge loss function through LS methods. We modify ridge regression and its kernel-based version, kernel ridge regression, so that they adopt the hinge function in place of SE for classification problems. The results show the effectiveness of the hinge loss projection method, especially on imbalanced data sets, in terms of the geometric mean (GM).
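The idea described in the abstract can be sketched in code. The snippet below is a minimal illustration, not the authors' exact algorithm: it alternates a closed-form ridge solve with a target projection, so that points already beyond the margin (y·f > 1) contribute zero loss, mimicking the one-sided behaviour of the hinge function. The function names (`hinge_loss`, `hinge_projection_ridge`) and the fixed-point iteration scheme are assumptions for illustration only.

```python
import numpy as np

def hinge_loss(y, f):
    # Hinge loss: zero when the margin is satisfied (y*f >= 1),
    # linear in the violation otherwise.
    return np.maximum(0.0, 1.0 - y * f)

def ridge_fit(X, t, lam=1.0):
    # Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T t
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ t)

def hinge_projection_ridge(X, y, lam=1.0, n_iter=20):
    # Hypothetical sketch of the projection idea: repeatedly
    # (1) solve a ridge problem against the current targets t, and
    # (2) project the targets so that points beyond the margin
    #     (y*f > 1) incur zero loss, as under the hinge function.
    t = y.astype(float).copy()
    for _ in range(n_iter):
        w = ridge_fit(X, t, lam)
        f = X @ w
        beyond = y * f > 1.0          # margin already satisfied
        t = np.where(beyond, f, y)    # zero-loss targets for those points
    return w
```

With SE loss, a confidently correct prediction such as f = 2 for y = 1 is penalised ((2 − 1)² = 1), whereas the hinge loss for the same point is zero; the projection step above reproduces exactly that one-sided behaviour within an LS solver.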

Citation (APA)

Alfarozi, S. A. I., Woraratpanya, K., Pasupa, K., & Sugimoto, M. (2016). Hinge loss projection for classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9948 LNCS, pp. 250–258). Springer Verlag. https://doi.org/10.1007/978-3-319-46672-9_29
