D²C-Net: A Dual-Branch, Dual-Guidance and Cross-Refine Network for Camouflaged Object Detection

Abstract

In this article, we propose a novel framework for camouflaged object detection (COD), named D$^{2}$C-Net, which contains two new modules: dual-branch feature extraction (DFE) and gradually refined cross fusion (GRCF). Specifically, DFE simulates the two-stage detection process of the human visual mechanism when observing camouflage scenes. In the first stage, dense concatenation is employed to aggregate multilevel features and expand the receptive field. The first-stage feature maps are then used to extract two-direction guidance information, which benefits the second stage. GRCF consists of a self-refine attention unit and a cross-refinement unit, which combine the peer-layer features and the DFE features for improved COD performance. The proposed framework outperforms 13 state-of-the-art deep learning-based methods on three public datasets in terms of five widely used metrics. Finally, we show evidence of the successful application of the proposed method to surface defect detection and medical image segmentation.
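The abstract's first-stage DFE step aggregates multilevel backbone features via dense concatenation. As a rough illustration of that idea only (the paper's actual module also involves convolutions, guidance extraction, and learned weights not shown here), the sketch below upsamples hypothetical feature maps from three backbone levels to a common resolution and concatenates them along the channel axis; all shapes and function names are assumptions for illustration:

```python
import numpy as np

def upsample_nearest(feat, target_hw):
    # feat: (C, H, W); nearest-neighbor upsample to target (H, W).
    C, H, W = feat.shape
    th, tw = target_hw
    rows = np.arange(th) * H // th
    cols = np.arange(tw) * W // tw
    return feat[:, rows][:, :, cols]

def dense_concat(features):
    # features: list of (C_i, H_i, W_i) maps, shallow (large) to deep (small).
    # Upsample every map to the shallowest resolution and concatenate along
    # the channel axis -- a stand-in for the dense concatenation that
    # aggregates multilevel features and enlarges the receptive field.
    target_hw = features[0].shape[1:]
    ups = [upsample_nearest(f, target_hw) for f in features]
    return np.concatenate(ups, axis=0)

# Three hypothetical backbone levels (channel counts are made up):
f1 = np.random.rand(16, 64, 64)
f2 = np.random.rand(32, 32, 32)
f3 = np.random.rand(64, 16, 16)
agg = dense_concat([f1, f2, f3])
print(agg.shape)  # (112, 64, 64)
```

The aggregated map would then feed the guidance-extraction step in the paper; a real implementation would follow the concatenation with convolutional layers rather than use the raw stacked tensor.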

Citation (APA):
Wang, K., Bi, H., Zhang, Y., Zhang, C., Liu, Z., & Zheng, S. (2022). D²C-Net: A Dual-Branch, Dual-Guidance and Cross-Refine Network for Camouflaged Object Detection. IEEE Transactions on Industrial Electronics, 69(5), 5364–5374. https://doi.org/10.1109/TIE.2021.3078379
