Convolutional Neural Networks (CNNs) have become a standard approach to many image processing problems. Most of the proposed CNN architectures tend to increase model depth or layer complexity; as a result, they contain many parameters and require considerable computing resources and training examples. However, some recent works show that shallow neural networks, or architectures without convolutions, can achieve similar results, and such models are often used in systems with limited resources. These considerations led us to a relatively simple preprocessing layer that increases the accuracy of a CNN or may reduce its complexity. The layer consists of two parts: the first transforms RGB data into a binary representation; the second is a neural network that transforms the binary data into a multi-channel, real-valued matrix and is trained in a fully unsupervised manner. Our proposal also includes a metric for measuring the similarity of training data, which proves useful when performing transfer learning. Our experiments show that the resulting architecture not only improves accuracy but is also more robust to image noise, including adversarial attacks, compared to state-of-the-art models.
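As a rough illustration of the two-part layer described above (a minimal sketch, not the authors' exact design), the code below binarizes RGB input with a simple fixed threshold, uses a patch-wise Restricted Boltzmann Machine trained with one-step contrastive divergence as the unsupervised second part, and exposes its hidden-unit probabilities as a multi-channel, real-valued feature map that a downstream CNN could consume. The patch size, threshold, hidden-unit count, and the CD-1 training rule are assumptions made for the sake of the example.

```python
# Sketch of the two-part preprocessing layer: RGB -> binary -> RBM features.
# All hyperparameters here are illustrative assumptions, not the paper's values.

import torch
import torch.nn.functional as F


def binarize_rgb(images: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Part 1: map RGB images in [0, 1] to a binary representation."""
    return (images > threshold).float()


class RBMPreprocessor(torch.nn.Module):
    """Part 2: an RBM over small image patches; its hidden-unit
    probabilities form the multi-channel, real-valued feature map."""

    def __init__(self, patch: int = 4, in_ch: int = 3, hidden: int = 16):
        super().__init__()
        self.patch, self.in_ch, self.hidden = patch, in_ch, hidden
        n_vis = in_ch * patch * patch
        self.W = torch.nn.Parameter(0.01 * torch.randn(hidden, n_vis))
        self.v_bias = torch.nn.Parameter(torch.zeros(n_vis))
        self.h_bias = torch.nn.Parameter(torch.zeros(hidden))

    def _hidden_prob(self, v):                       # p(h = 1 | v)
        return torch.sigmoid(F.linear(v, self.W, self.h_bias))

    def _visible_prob(self, h):                      # p(v = 1 | h)
        return torch.sigmoid(F.linear(h, self.W.t(), self.v_bias))

    def cd1_step(self, v0, lr: float = 1e-3):
        """One contrastive-divergence (CD-1) update on a batch of binary patches."""
        ph0 = self._hidden_prob(v0)
        h0 = torch.bernoulli(ph0)
        v1 = torch.bernoulli(self._visible_prob(h0))
        ph1 = self._hidden_prob(v1)
        with torch.no_grad():
            self.W += lr * (ph0.t() @ v0 - ph1.t() @ v1) / v0.size(0)
            self.v_bias += lr * (v0 - v1).mean(0)
            self.h_bias += lr * (ph0 - ph1).mean(0)

    def forward(self, binary_images: torch.Tensor) -> torch.Tensor:
        """Turn (B, C, H, W) binary images into a (B, hidden, H/p, W/p)
        real-valued feature map. Assumes square inputs divisible by the patch size."""
        patches = F.unfold(binary_images, self.patch, stride=self.patch)   # (B, n_vis, L)
        h = self._hidden_prob(patches.transpose(1, 2))                     # (B, L, hidden)
        side = binary_images.shape[-1] // self.patch
        return h.transpose(1, 2).reshape(-1, self.hidden, side, side)


if __name__ == "__main__":
    # Usage: binarize a batch, run one unsupervised RBM update, then produce
    # the real-valued feature maps a downstream CNN would take as input.
    rbm = RBMPreprocessor()
    imgs = torch.rand(8, 3, 32, 32)                  # stand-in RGB batch in [0, 1]
    binary = binarize_rgb(imgs)
    patches = (F.unfold(binary, rbm.patch, stride=rbm.patch)
               .transpose(1, 2)
               .reshape(-1, rbm.in_ch * rbm.patch * rbm.patch))
    rbm.cd1_step(patches)                            # one unsupervised training step
    features = rbm(binary)                           # (8, 16, 8, 8) real-valued maps
```

In this sketch the RBM is trained separately from the CNN, which matches the fully unsupervised training of the second part described in the abstract; how the binarization and patching are actually performed in the paper is not specified here.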
Sobczak, S., & Kapela, R. (2022). Hybrid Restricted Boltzmann Machine-Convolutional Neural Network Model for Image Recognition. IEEE Access, 10, 24985–24994. https://doi.org/10.1109/ACCESS.2022.3155873