Improving deep neural network performance with kernelized min-max objective


Abstract

In this paper, we present a novel training strategy that uses a kernelized Min-Max objective to improve object recognition performance in deep neural networks (DNNs), e.g., convolutional neural networks (CNNs). Without changing any other part of the original model, the kernelized Min-Max objective combines the kernel trick with the Min-Max objective and is embedded into a high layer of the network during the training phase. The proposed objective explicitly enforces the learned feature maps to attain, in a kernel space, the smallest compactness within each category manifold and the largest margin between different category manifolds. With very little additional computational cost, the proposed strategy can be applied to a wide range of DNN models. Extensive experiments with a shallow convolutional neural network, a deep convolutional neural network, and a deep residual network on two benchmark datasets show that the proposed approach outperforms these competitive models.
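To make the idea concrete, the following is a minimal numpy sketch, not the authors' code, of a penalty in the spirit the abstract describes: within-manifold compactness minus between-manifold margin, measured with kernel-induced distances on the features of one mini-batch. The RBF kernel, the unweighted pairwise sums, and all function names are illustrative assumptions; the paper's exact formulation (e.g., its affinity weighting and normalization) may differ.

import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2))
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] - 2.0 * X @ X.T + sq[None, :]
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernelized_min_max(X, y, sigma=1.0):
    """Within-class compactness minus between-class margin, with pairwise
    kernel-induced distances d_k^2(i, j) = K_ii - 2 K_ij + K_jj."""
    K = rbf_kernel(X, sigma)
    diag = np.diag(K)
    d2 = diag[:, None] - 2.0 * K + diag[None, :]   # kernel-space distances
    same = (y[:, None] == y[None, :]).astype(float)
    compactness = (d2 * same).sum()                # pull same-class features together
    margin = (d2 * (1.0 - same)).sum()             # push different classes apart
    return compactness - margin                    # would be added, weighted, to the softmax loss

# Hypothetical usage: X holds flattened feature maps from a high layer.
X = np.random.randn(8, 16)                         # 8 examples, 16-dim features
y = np.array([0, 0, 1, 1, 2, 2, 0, 1])
penalty = kernelized_min_max(X, y, sigma=2.0)

In training, a penalty of this kind would be scaled by a trade-off hyperparameter and added to the network's standard classification loss, so gradients from both terms shape the chosen layer's features.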

Citation (APA)

Yao, K., Huang, K., Zhang, R., & Hussain, A. (2018). Improving deep neural network performance with kernelized min-max objective. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11301 LNCS, pp. 182–191). Springer Verlag. https://doi.org/10.1007/978-3-030-04167-0_17
