Robust adversarial uncertainty quantification for deep learning fine-tuning

Abstract

This paper proposes a deep learning model that is robust to highly uncertain inputs. The approach proceeds in three phases: building a dataset, training a neural network on it, and retraining the network to handle unpredictable inputs. It uses entropy values together with a non-dominated sorting algorithm to identify the candidates with the highest predictive entropy in the dataset. The training set is then merged with adversarial samples, and mini-batches of the merged dataset are used to update the dense network parameters. This method can improve the performance of machine learning models, the categorization of radiographic images, and the accuracy of medical diagnoses, while reducing the risk of misdiagnosis in medical imaging. To evaluate the proposed model, two datasets, MNIST and COVID, were used with raw pixel values and without transfer learning. Accuracy increased from 0.85 to 0.88 on MNIST and from 0.83 to 0.85 on COVID, suggesting that the model successfully classified images from both datasets without transfer learning techniques.
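The selection-and-merge step described above can be sketched in a few lines. This is an illustrative NumPy sketch, not the paper's implementation: predictive entropy ranks candidates (here a simple argsort stands in for the non-dominated sorting the paper uses), a placeholder random perturbation stands in for a real adversarial attack, and all array names and sizes are assumptions.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of each row of predicted class probabilities."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def select_uncertain(probs, k):
    """Indices of the k samples whose predictions are most uncertain."""
    return np.argsort(predictive_entropy(probs))[-k:]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(8, 784))          # stand-in for flattened MNIST-like pixels
probs = rng.dirichlet(np.ones(10), size=8)   # stand-in for the model's softmax outputs

idx = select_uncertain(probs, k=3)           # highest-entropy candidates
# Placeholder perturbation; a real pipeline would craft adversarial samples here.
X_adv = X_train[idx] + 0.1 * rng.normal(size=(3, 784))
# Merge training data with adversarial samples for mini-batch retraining.
X_merged = np.concatenate([X_train, X_adv])
```

Retraining would then draw mini-batches from `X_merged` to update the network parameters, as the abstract outlines.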

Citation (APA)

Ahmed, U., & Lin, J. C. W. (2023). Robust adversarial uncertainty quantification for deep learning fine-tuning. The Journal of Supercomputing. https://doi.org/10.1007/s11227-023-05087-5
