VARIANTS OF NEURAL NETWORKS: A REVIEW


Abstract

Machine learning (ML) techniques are part of artificial intelligence. ML imitates human behavior in solving different problems, such as object detection, handwritten text recognition, and image classification. Several techniques can be used in machine learning, such as neural networks (NNs). The expansion of information technology enables researchers to collect large amounts of data of various types. A challenging issue is to find neural network parameters suitable for object detection problems. Therefore, this paper presents a literature review of the latest components proposed and developed for neural network techniques to cope with different data sizes and types. A brief discussion also demonstrates the different types of neural network parameters, such as activation functions, loss functions, and regularization methods. Moreover, this paper covers parameter optimization methods and model hyperparameters, such as the weights, the learning rate, and the number of iterations. From the literature, it is notable that the choice of activation function, loss function, number of network layers, and data size are the major factors affecting NN performance. Additionally, deep learning NNs have yielded significant performance improvements across a variety of problems, which has drawn researchers' attention.
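The parameters the abstract enumerates can be made concrete with a minimal sketch: a one-hidden-layer network trained by gradient descent, showing an activation function (ReLU), a loss function (MSE), L2 regularization, learnable weights, and the learning rate and iteration-count hyperparameters. This example is not from the paper; all names, sizes, and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Activation function: rectified linear unit.
    return np.maximum(0.0, x)

def mse_loss(pred, target):
    # Loss function: mean squared error.
    return np.mean((pred - target) ** 2)

# Toy data: learn the mapping y = 2*x (illustrative only).
X = rng.uniform(-1.0, 1.0, size=(64, 1))
y = 2.0 * X

# Weights: the parameters learned during optimization.
W1 = rng.normal(scale=0.5, size=(1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

learning_rate = 0.1   # hyperparameter: gradient-descent step size
iterations = 500      # hyperparameter: number of update steps
l2 = 1e-4             # hyperparameter: L2 regularization strength

for _ in range(iterations):
    # Forward pass.
    h_pre = X @ W1
    h = relu(h_pre)
    pred = h @ W2

    # Backward pass: gradients of MSE plus the L2 penalty.
    grad_pred = 2.0 * (pred - y) / len(X)
    grad_W2 = h.T @ grad_pred + l2 * W2
    grad_h = grad_pred @ W2.T
    grad_h_pre = grad_h * (h_pre > 0)      # ReLU derivative
    grad_W1 = X.T @ grad_h_pre + l2 * W1

    # Weight update (plain gradient descent).
    W1 -= learning_rate * grad_W1
    W2 -= learning_rate * grad_W2

final_loss = mse_loss(relu(X @ W1) @ W2, y)
print(final_loss)
```

Each choice above (activation, loss, regularizer, learning rate, iteration count) is one of the design decisions the review surveys; swapping any of them changes the model's behavior, which is why the paper treats their selection as the major factor in NN performance.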

Citation (APA)

Nayef, B. H., Abdullah, S. N. H. S., Sulaiman, R., & Alyasseri, Z. A. A. K. (2022). VARIANTS OF NEURAL NETWORKS: A REVIEW. Malaysian Journal of Computer Science, 35(2), 158–178. https://doi.org/10.22452/mjcs.vol35no2.5
