Parallelizing Neural Network Learning to Build Safe Trained Model

Abstract

Deep learning has produced a wide range of applications in the recent past. Deep learning techniques can help solve complex problems such as regression, clustering, and classification over unstructured, structured, and semi-structured input data. One of the most challenging tasks with deep learning today is making it execute faster. In addition, if the model is hosted on the cloud, the security of the trained model becomes a concern. A common strategy is to solve the gradient descent problem in parallel across systems, using either model parallelism or data parallelism, and to apply homomorphic encryption to build a safe trained model. In this paper, we demonstrate how basic parallelism concepts can be used to improve the performance of neural network training. We also demonstrate how homomorphic encryption techniques can help secure the trained model. Our experimental analysis uses the MNIST handwritten character recognition dataset as the neural network learning problem. Experimental results indicate that the parallel version of neural network learning achieves a performance improvement while providing a safe trained model.
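
The abstract does not give implementation details, but the parallel-gradient-descent idea can be illustrated with a minimal sketch. The snippet below simulates synchronous data parallelism in Python with NumPy: a linear model with squared loss stands in for the paper's neural network (an assumption made for brevity), each "worker" computes a gradient on its own data shard, and the shard gradients are averaged before the shared weights are updated.

```python
import numpy as np

# Minimal sketch of synchronous data-parallel gradient descent.
# Assumption: a linear least-squares model stands in for the neural network.

rng = np.random.default_rng(0)
n_workers, n_samples, n_features = 4, 1024, 64

X = rng.normal(size=(n_samples, n_features))
true_w = rng.normal(size=n_features)
y = X @ true_w + 0.01 * rng.normal(size=n_samples)

shards = np.array_split(np.arange(n_samples), n_workers)  # one shard per worker
w = np.zeros(n_features)
lr = 0.1

for step in range(200):
    grads = []
    for shard in shards:                      # in practice these run concurrently
        Xi, yi = X[shard], y[shard]
        grad = 2 * Xi.T @ (Xi @ w - yi) / len(shard)
        grads.append(grad)
    w -= lr * np.mean(grads, axis=0)          # synchronous all-reduce (average)

print("final loss:", np.mean((X @ w - y) ** 2))
```

In a real multi-machine setup, the inner loop would be replaced by workers running concurrently and an all-reduce step exchanging gradients; the averaging logic, however, is the same.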
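The homomorphic-encryption step can be sketched in a similar spirit. The example below uses the Paillier cryptosystem via the python-paillier (`phe`) library as a stand-in for the paper's (unspecified) scheme: the trained weights are encrypted before hosting, and the server can still evaluate a linear score homomorphically without ever seeing the weights in the clear.

```python
from phe import paillier  # assumption: python-paillier (pip install phe)

# Minimal sketch of protecting trained weights with homomorphic encryption.
# Paillier is partially homomorphic: ciphertexts support addition and
# multiplication by plaintext, which is enough for an encrypted linear layer.

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

weights = [0.25, -0.75, 1.5]                       # toy trained weights
enc_weights = [public_key.encrypt(w) for w in weights]

# The host computes a dot product with a plaintext input on ciphertexts:
x = [1.0, 2.0, 3.0]
enc_score = enc_weights[0] * x[0]
for ew, xi in zip(enc_weights[1:], x[1:]):
    enc_score += ew * xi

print("decrypted score:", private_key.decrypt(enc_score))  # 0.25 - 1.5 + 4.5 = 3.25
```

Because Paillier only supports additions and plaintext multiplications, this sketch evaluates a single linear layer; deeper encrypted inference would require a fully homomorphic scheme such as CKKS.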

Citation (APA)

Sayyad, S., & Kulkarni, D. (2020). Parallelizing Neural Network Learning to Build Safe Trained Model. In Advances in Intelligent Systems and Computing (Vol. 1025, pp. 479–488). Springer. https://doi.org/10.1007/978-981-32-9515-5_46
