The goal of image splicing localization is to detect the tampered area in an input image. Deep learning models have shown good performance on this task, but they are generally unable to delineate the boundaries of the tampered area well. In this paper, we propose a novel deep learning model for image splicing localization that not only considers local image features, but also extracts global image information through a multi-scale guided learning strategy. In addition, the model integrates spatial and channel self-attention mechanisms to emphasize important features while suppressing unimportant or noisy ones. The proposed model is trained on the CASIA v2.0 dataset, and its performance is evaluated on the CASIA v1.0, Columbia Uncompressed, and DSO-1 datasets. Experimental results show that, with the help of the multi-scale guided learning strategy and the self-attention mechanisms, the proposed model locates the tampered area more effectively than state-of-the-art models.
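To make the "spatial and channel self-attention" idea concrete, below is a minimal sketch of channel and spatial attention blocks in a CBAM-like style. The module names, reduction ratio, and kernel size are assumptions for illustration only, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Re-weights feature channels using pooled global statistics (assumed design)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling per channel
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling per channel
        weights = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * weights                   # emphasize informative channels


class SpatialAttention(nn.Module):
    """Re-weights spatial positions with a single-channel attention map (assumed design)."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)    # per-pixel mean over channels
        mx, _ = x.max(dim=1, keepdim=True)   # per-pixel max over channels
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                      # emphasize informative locations


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)        # dummy feature map
    feat = ChannelAttention(64)(feat)
    feat = SpatialAttention()(feat)
    print(feat.shape)                        # torch.Size([2, 64, 32, 32])
```

In a splicing-localization network, such blocks would typically be applied to intermediate feature maps so that boundary-relevant cues are amplified before the final tampering mask is predicted; how and where they are inserted here follows the abstract's description only at a high level.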
CITATION STYLE
Li, Z., You, Q., & Sun, J. (2022). A Novel Deep Learning Architecture with Multi-Scale Guided Learning for Image Splicing Localization. Electronics (Switzerland), 11(10). https://doi.org/10.3390/electronics11101607