GRNN++: A Parallel and Distributed Version of GRNN Under Apache Spark for Big Data Regression


Abstract

Among neural network architectures for prediction, the multi-layer perceptron (MLP), radial basis function (RBF) network, wavelet neural network (WNN), general regression neural network (GRNN), and group method of data handling (GMDH) are popular. Of these, GRNN is attractive because it involves single-pass learning and produces reasonably good results. Despite its single-pass learning, GRNN cannot handle big datasets because its pattern layer must store a node for every training sample. Therefore, this paper proposes a hybrid architecture, GRNN++, which makes GRNN scalable for big data by invoking a parallel, distributed variant of K-means++, namely K-means||, in the pattern layer of GRNN, so that only the resulting cluster centers need to be stored. The whole architecture is implemented on the distributed, parallel computational framework of Apache Spark with HDFS. The performance of GRNN++ was measured on a 613 MB gas sensor dataset under a ten-fold cross-validation setup, where it produced a very low mean squared error (MSE). The primary motivation of this article is to present a distributed and parallel version of the traditional GRNN.
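To make the idea concrete, below is a minimal NumPy sketch of the GRNN output formula applied over a compact pattern layer of cluster centers, which is the essence of the GRNN++ modification described in the abstract. The function name, the toy centers, and the smoothing parameter are illustrative assumptions, not the authors' code; the real GRNN++ obtains the centers via Spark MLlib's K-means (whose default initialization mode is in fact "k-means||") and evaluates the sums in a distributed fashion on Spark.

```python
import numpy as np

def grnn_predict(x, centers, center_targets, sigma=0.5):
    """GRNN output: a kernel-weighted average over the pattern layer.
    In classic GRNN the pattern layer holds every training sample;
    in GRNN++ it holds only k-means|| cluster centers, so its size
    is k rather than the number of training records.
    (Hypothetical sketch; sigma is the GRNN smoothing parameter.)"""
    d2 = np.sum((centers - x) ** 2, axis=1)        # squared distances to centers
    w = np.exp(-d2 / (2.0 * sigma ** 2))           # Gaussian kernel weights
    return float(np.dot(w, center_targets) / np.sum(w))

# Toy pattern layer: two centers with known target values.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
targets = np.array([0.0, 1.0])
print(grnn_predict(np.array([0.0, 0.0]), centers, targets))  # near 0.0
print(grnn_predict(np.array([0.5, 0.5]), centers, targets))  # 0.5 by symmetry
```

Because the prediction is a ratio of two sums over pattern-layer nodes, it parallelizes naturally: each Spark partition can accumulate partial numerator and denominator sums that are then reduced, which is what makes the single-pass GRNN formulation a good fit for a distributed setting.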

Citation (APA)

Kamaruddin, S., & Ravi, V. (2020). GRNN++: A Parallel and Distributed Version of GRNN Under Apache Spark for Big Data Regression. In Advances in Intelligent Systems and Computing (Vol. 1042, pp. 215–227). Springer. https://doi.org/10.1007/978-981-32-9949-8_16
