DCNN Optimization using Wavelet-based Image Fusion

  • Alshehri A. A.
  • Ezekiel S.
Abstract

We propose to develop image fusion algorithms and architectures for enhanced deep learning and analysis of large data sets. Ordinarily, images captured from different perspectives, with different types of sensors, at different frequencies, and so on must be considered separately and interpreted by human operators. With image fusion techniques, these different forms of sensor information can be combined into a single data feed for a neural network to interpret and learn from. This increases the accuracy of neural network classification and improves effectiveness under suboptimal conditions, such as obstructed or malfunctioning sensors. Another disadvantage of current deep learning techniques is that they often require massive datasets to reach an acceptable level of accuracy, especially when a task involves potentially thousands of classification categories. Increasing the size of the dataset greatly increases the time required to train, even on relatively simple neural network architectures. In protection scenarios, where new classes of threats can emerge frequently, it is unacceptable to take the security system down for long periods to train it to identify new threats.
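The abstract describes combining multi-sensor imagery into a single fused input before DCNN training. As a rough illustration of one common wavelet fusion rule (average the approximation coefficients, keep the maximum-magnitude detail coefficients), the sketch below uses the PyWavelets library; the function name, wavelet choice, decomposition level, and placeholder input arrays are illustrative assumptions, not the authors' exact method.

    import numpy as np
    import pywt

    def fuse_wavelet(img_a, img_b, wavelet="db2", level=2):
        """Fuse two co-registered grayscale images in the wavelet domain."""
        coeffs_a = pywt.wavedec2(img_a, wavelet, level=level)
        coeffs_b = pywt.wavedec2(img_b, wavelet, level=level)

        # Approximation band: average to preserve overall intensity.
        fused = [(coeffs_a[0] + coeffs_b[0]) / 2.0]

        # Detail bands: keep the coefficient with the larger magnitude,
        # so salient edges from either sensor survive in the fused image.
        for band_a, band_b in zip(coeffs_a[1:], coeffs_b[1:]):
            fused.append(tuple(
                np.where(np.abs(a) >= np.abs(b), a, b)
                for a, b in zip(band_a, band_b)
            ))

        return pywt.waverec2(fused, wavelet)

    # Example: fuse a visible-band frame with an infrared frame, then feed the
    # single fused array to a DCNN instead of two separate inputs.
    visible = np.random.rand(256, 256).astype(np.float32)   # placeholder data
    infrared = np.random.rand(256, 256).astype(np.float32)  # placeholder data
    fused_input = fuse_wavelet(visible, infrared)

The max-magnitude rule for detail coefficients is one standard choice because it retains the stronger edge response from either modality; other fusion rules (weighted averaging, region-based selection) are equally possible and the paper may use a different one.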

Citation (APA)

Alshehri, A. A., & Ezekiel, S. (2020). DCNN Optimization using Wavelet-based Image Fusion. International Journal of Engineering and Advanced Technology, 9(3), 3082–3088. https://doi.org/10.35940/ijeat.c6093.029320
