MOBILENET PERFORMANCE IMPROVEMENTS FOR DEEPFAKE IMAGE IDENTIFICATION USING ACTIVATION FUNCTION AND REGULARIZATION

  • Noprisson H
  • Ayumi V
  • Purba M
  • Ani N

Abstract

Deepfake images are often used to spread false information, manipulate public opinion, and harm individuals through fabricated content, making the development of deepfake detection technology essential to mitigate these dangers. This study applied regularization and activation function methods to the MobileNet architecture to improve detection accuracy. ReLU (Rectified Linear Unit) enhances the model's efficiency and its ability to capture non-linear features, while Dropout and L2 regularization reduce overfitting by penalizing large weights, thereby improving generalization. In the experiments, the MobileNet model optimized with ReLU and Dropout achieved an accuracy of 99.17% in training, 85.34% in validation, and 70.60% in testing, whereas the MobileNet model optimized with ReLU and L2 showed lower training and validation accuracy than the Dropout variant but achieved higher testing accuracy at 72.18%. This study therefore recommends MobileNet with ReLU and L2 for its better generalization on test data, a result of reduced overfitting.
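As a rough illustration of the mechanisms the abstract compares, the sketch below implements ReLU, (inverted) Dropout, and an L2 weight penalty in plain NumPy on top of a hypothetical MobileNet feature vector. The feature dimension (1280), dropout rate, and L2 coefficient are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # ReLU: max(0, x) -- cheap to compute and introduces non-linearity
    return np.maximum(0.0, x)

def dropout(x, rate, training=True):
    # Inverted dropout: randomly zero a fraction of activations during
    # training and rescale, so inference needs no correction
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def l2_penalty(weights, lam):
    # L2 regularization term added to the loss: lam * sum of squared weights,
    # which discourages large weights and curbs overfitting
    return lam * sum(np.sum(w ** 2) for w in weights)

# Toy dense classification head on (hypothetical) MobileNet features
W = rng.normal(size=(1280, 2)) * 0.01          # 1280-d features -> 2 classes
features = rng.normal(size=(1, 1280))           # stand-in for backbone output
logits = dropout(relu(features), rate=0.5) @ W  # ReLU + Dropout variant
loss_reg = l2_penalty([W], lam=1e-4)            # L2 variant adds this to loss
```

Either regularizer slots into the same head: Dropout acts on the activations during training, while the L2 term is simply added to the classification loss before backpropagation.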

Citation (APA)

Noprisson, H., Ayumi, V., Purba, M., & Ani, N. (2024). MOBILENET PERFORMANCE IMPROVEMENTS FOR DEEPFAKE IMAGE IDENTIFICATION USING ACTIVATION FUNCTION AND REGULARIZATION. JITK (Jurnal Ilmu Pengetahuan Dan Teknologi Komputer), 10(2), 441–448. https://doi.org/10.33480/jitk.v10i2.5798
