Improving Neural Network Classifier Using Gradient-Based Floating Centroid Method

Abstract

The floating centroid method (FCM) offers an efficient way to solve the fixed-centroid problem for neural network classifiers. However, its reliance on evolutionary computation as the optimization method prevents FCM from achieving satisfactory performance across different neural network structures, owing to high computational complexity and inefficiency. Traditional gradient-based methods, by contrast, have been widely adopted to optimize neural network classifiers. In this study, a gradient-based floating centroid (GDFC) method is introduced to address the fixed-centroid problem for neural network classifiers optimized by gradient-based methods, together with a new loss function for optimizing GDFC. Experimental results show that GDFC achieves better classification performance than the comparison methods on benchmark datasets.
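To make the idea concrete, below is a minimal sketch of how a floating-centroid-style classifier can be trained end to end with gradient descent, in the spirit of the abstract. The paper's actual architecture and loss function are not reproduced here; the encoder, the center-loss-style objective (cross-entropy over distance-based logits plus a pull term toward each sample's class centroid), and all hyperparameters are illustrative assumptions.

# Hedged sketch of a gradient-trained floating-centroid classifier.
# Assumptions (not from the paper): the encoder architecture, the
# center-loss-style objective, and all hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FloatingCentroidNet(nn.Module):
    def __init__(self, in_dim, embed_dim, num_classes):
        super().__init__()
        # The network maps inputs into a low-dimensional partition space.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )
        # Centroids are trainable parameters, so they "float" under
        # gradient descent rather than being fixed in advance.
        self.centroids = nn.Parameter(torch.randn(num_classes, embed_dim))

    def forward(self, x):
        z = self.encoder(x)                            # (batch, embed_dim)
        # Negative squared distance to each centroid serves as a logit.
        logits = -torch.cdist(z, self.centroids) ** 2  # (batch, num_classes)
        return z, logits

def centroid_loss(z, logits, centroids, y, pull_weight=0.1):
    # Cross-entropy over distance-based logits separates classes, while
    # the pull term draws each sample toward its own class centroid.
    ce = F.cross_entropy(logits, y)
    pull = ((z - centroids[y]) ** 2).sum(dim=1).mean()
    return ce + pull_weight * pull

# Toy usage on random data.
model = FloatingCentroidNet(in_dim=20, embed_dim=2, num_classes=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(128, 20), torch.randint(0, 3, (128,))
for _ in range(200):
    z, logits = model(x)
    loss = centroid_loss(z, logits, model.centroids, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
pred = logits.argmax(dim=1)  # nearest-centroid prediction

Because both the encoder weights and the centroids receive gradients from the same loss, the class regions can move freely in the partition space during training, which is the property the fixed-centroid problem denies.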

Citation (APA)
Islam, M., Liu, S., Zhang, X., & Wang, L. (2019). Improving Neural Network Classifier Using Gradient-Based Floating Centroid Method. In Communications in Computer and Information Science (Vol. 1143, pp. 423–431). Springer. https://doi.org/10.1007/978-3-030-36802-9_45
