Training Generalized Feedforward Kernelized Neural Networks on Very Large Datasets for Regression Using Minimal-Enclosing-Ball Approximation

  • Wang J
  • Deng Z
  • Wang S
  • Gao Q
Citations: N/A
Readers: 5 (Mendeley users who have this article in their library)

Abstract

Training feedforward neural networks (FNNs) on very large datasets is one of the most critical issues in FNN research. In this work, the connection between FNNs and kernel methods is investigated, and HFSR-GCVM, a scalable learning method for generalized feedforward kernelized neural networks (GFKNNs) on large datasets, is proposed. In HFSR-GCVM, the parameters of the hidden nodes are generated randomly, independently of the training data. Moreover, learning the parameters of the output layer is proved equivalent to solving a special center-constrained minimal enclosing ball (CCMEB) problem in the GFKNN hidden feature space. Like most machine learning algorithms based on CCMEB approximation, the proposed HFSR-GCVM training algorithm has the following merits: its worst-case training time is linear in the size of the training dataset, and its worst-case space consumption is independent of that size. Experiments on regression tasks confirm these conclusions.
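The abstract combines two well-known ingredients, and a short sketch can make the cost profile concrete. The Python below is a minimal illustration, not the authors' HFSR-GCVM implementation: the first function trains a single-hidden-layer regressor whose hidden-node parameters are drawn at random (never trained), with the output layer fit in closed form; a ridge-regression solve stands in here for the paper's CCMEB-based solver. The second function shows the standard Badoiu-Clarkson (1+eps)-approximation for a minimal enclosing ball, whose iteration count depends only on eps, which is the usual argument behind training time linear in the dataset size and memory independent of it. All names (train_random_hidden_fnn, n_hidden, ridge, eps) are illustrative assumptions.

    import numpy as np

    def train_random_hidden_fnn(X, y, n_hidden=200, ridge=1e-3, seed=0):
        """Sketch: random hidden layer + closed-form linear output layer.

        Hidden weights/biases are sampled once, independently of the data;
        only the output weights are learned, via one regularized
        least-squares solve in the hidden feature space.
        """
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        W = rng.normal(size=(d, n_hidden))   # random input-to-hidden weights
        b = rng.normal(size=n_hidden)        # random hidden biases
        H = np.tanh(X @ W + b)               # hidden feature map
        beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
        return W, b, beta

    def predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    def meb_center(P, eps=0.1):
        """Badoiu-Clarkson (1+eps)-approximate minimal-enclosing-ball center.

        Runs ceil(1/eps^2) passes, each a single linear scan of P, so the
        time is linear in |P| and the extra memory is O(dim), independent
        of |P| -- the property the abstract attributes to CCMEB methods.
        """
        c = P[0].astype(float).copy()
        for t in range(1, int(np.ceil(1.0 / eps**2)) + 1):
            far = P[np.argmax(np.linalg.norm(P - c, axis=1))]  # furthest point
            c = c + (far - c) / (t + 1)                        # shift center
        return c

    # Toy usage on a synthetic 1-D regression task.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(1000, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=1000)
    W, b, beta = train_random_hidden_fnn(X, y, seed=1)
    print("train RMSE:", np.sqrt(np.mean((predict(X, W, b, beta) - y) ** 2)))

Note the design point this sketch reflects: because the hidden layer is fixed at random, the only optimization is convex and over the output weights, which is what allows the paper to recast it as a CCMEB problem and inherit the core-set-style scaling illustrated by meb_center.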

Cite

CITATION STYLE: APA

Wang, J., Deng, Z., Wang, S., & Gao, Q. (2015). Training Generalized Feedforward Kernelized Neural Networks on Very Large Datasets for Regression Using Minimal-Enclosing-Ball Approximation (pp. 203–214). https://doi.org/10.1007/978-3-319-14063-6_18
