A Trainable Monogenic ConvNet Layer Robust in front of Large Contrast Changes in Image Classification

Abstract

At present, Convolutional Neural Networks (ConvNets) achieve remarkable performance in image classification tasks. However, current ConvNets cannot guarantee capabilities of the mammalian visual system such as invariance to contrast and illumination changes. Existing approaches for overcoming illumination and contrast variations usually must be tuned manually and tend to fail when tested with other types of data degradation. In this context, this work presents a new bio-inspired entry layer, M6, which detects low-level geometric features (lines, edges, and orientations) similar to the patterns detected by the V1 visual cortex. This new trainable layer can handle image classification tasks even under large contrast variations. This behavior stems from the use of monogenic signal geometry, which represents each pixel value in a 3D space using quaternions, a fact that confers a degree of explainability on the networks. M6 was compared with a conventional convolutional layer (C) and a deterministic quaternion local-phase layer (Q9). The experimental setup is designed to evaluate the robustness of the M6-enriched ConvNet model and includes three architectures, four datasets, and three types of contrast degradation (including non-uniform haze degradations). The numerical results reveal that the models with M6 are the most robust against any kind of contrast variation. This is a significant improvement over the C models, which usually perform reasonably well only when the same degradation is used for training and testing, except in the case of maximum degradation. Moreover, the Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR) are used to analyze and explain the robustness of the M6 feature maps under any kind of contrast degradation.
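
As an illustrative aside, the sketch below shows how a monogenic signal can be computed with a frequency-domain Riesz transform and how the resulting local amplitude, phase, and orientation form the per-pixel 3D representation mentioned in the abstract. This is a minimal NumPy sketch under common definitions of the monogenic signal, not the authors' trainable M6 layer; the function name monogenic_signal and the mean-removal step are illustrative assumptions.

    import numpy as np

    def monogenic_signal(img):
        """Return local (amplitude, phase, orientation) maps of a 2D float image."""
        # Remove the mean so that the local phase becomes insensitive to affine
        # contrast changes (a*img + b with a > 0), the property discussed above.
        f = img - img.mean()
        rows, cols = f.shape
        # Frequency grids in cycles per pixel; the DC term sits at index [0, 0].
        u = np.fft.fftfreq(cols)[None, :]
        v = np.fft.fftfreq(rows)[:, None]
        radius = np.sqrt(u**2 + v**2)
        radius[0, 0] = 1.0                      # avoid division by zero at DC
        # First-order Riesz transform (x and y components), applied in frequency space.
        F = np.fft.fft2(f)
        r_x = np.real(np.fft.ifft2(F * (-1j * u / radius)))
        r_y = np.real(np.fft.ifft2(F * (-1j * v / radius)))
        # The triple (f, r_x, r_y) is the monogenic signal; express it as
        # local amplitude, local phase, and local orientation.
        amplitude = np.sqrt(f**2 + r_x**2 + r_y**2)
        phase = np.arctan2(np.sqrt(r_x**2 + r_y**2), f)
        orientation = np.arctan2(r_y, r_x)
        return amplitude, phase, orientation

    # Quick check: the phase map barely changes under a strong contrast change.
    img = np.random.rand(64, 64)
    _, phase_a, _ = monogenic_signal(img)
    _, phase_b, _ = monogenic_signal(0.2 * img + 0.1)
    print(np.abs(phase_a - phase_b).max())

The near-zero difference printed at the end illustrates, under these assumptions, why phase-based features of this kind are a plausible source of the contrast robustness reported for M6.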

Cite

APA

Moya-Sanchez, E. U., Xambo-Descamps, S., Perez, A. S., Salazar-Colores, S., & Cortes, U. (2021). A Trainable Monogenic ConvNet Layer Robust in front of Large Contrast Changes in Image Classification. IEEE Access, 9, 163735–163746. https://doi.org/10.1109/ACCESS.2021.3128552
