We present a margin-based loss function for classification, inspired by the recently proposed similarity measure called correntropy. We show that correntropy induces a non-convex loss function that more closely approximates the misclassification loss (the ideal 0-1 loss). We show that the discriminant function obtained by optimizing the proposed loss function using a neural network is insensitive to outliers and has better generalization performance than one trained with the squared loss commonly used in neural network classifiers. The proposed training method is a practical way of obtaining better results on real-world classification problems, using a simple gradient-based online procedure to minimize the empirical risk.
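The abstract does not state the loss formula. A minimal sketch of how a correntropy-induced loss on the classification margin might look, and how it compares to the squared loss for large errors, is below; the specific form `beta * (1 - exp(-e**2 / (2*sigma**2)))` with margin error `e = 1 - y*f(x)`, the normalization `beta`, and the kernel width `sigma` are assumptions for illustration, not necessarily the paper's exact definition.

```python
import numpy as np

def c_loss(margin, sigma=1.0):
    # Assumed correntropy-induced loss on the margin m = y * f(x):
    #   L(e) = beta * (1 - exp(-e**2 / (2*sigma**2))),  e = 1 - m,
    # with beta chosen so that L = 1 when the margin is 0.
    beta = 1.0 / (1.0 - np.exp(-1.0 / (2.0 * sigma**2)))
    e = 1.0 - margin
    return beta * (1.0 - np.exp(-e**2 / (2.0 * sigma**2)))

def squared_loss(margin):
    # Conventional squared loss on the same margin error.
    e = 1.0 - margin
    return e**2

# The C-loss saturates for badly misclassified points (large negative
# margins), so outliers contribute a bounded penalty; the squared loss
# grows without bound and lets outliers dominate the empirical risk.
margins = np.array([-5.0, 0.0, 1.0])
print(c_loss(margins))       # bounded, near its maximum for margin = -5
print(squared_loss(margins)) # 36.0 for margin = -5
```

The saturation is the mechanism behind the outlier insensitivity claimed in the abstract: gradients of the C-loss vanish for very large errors, so a gradient-based online trainer effectively ignores such points.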