Edge Detection via Fusion Difference Convolution


Abstract

Edge detection is a crucial step in many computer vision tasks, and in recent years, models based on deep convolutional neural networks (CNNs) have achieved human-level performance in edge detection. However, CNN-based methods rely on pre-trained backbone networks and tend to generate edge maps cluttered with unwanted background details. We propose four new fusion difference convolution (FDC) structures that integrate traditional gradient operators into modern CNNs. In addition, we add a channel-spatial attention module (CSAM) and an up-sampling module (US). These structures allow the model to better capture both the semantic and the edge information in images. Our model is trained from scratch on the BIPED dataset without any pre-trained weights and achieves promising results. Moreover, it generalizes well to other datasets without fine-tuning.
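The fusion idea can be illustrated with a small sketch. The following PyTorch layer is a hypothetical example, not the authors' FDC: it simply combines a fixed Sobel gradient operator with a learnable 3x3 convolution and sums the two responses. The paper's four FDC structures, the CSAM, and the US module are not reproduced here.

```python
# Hypothetical illustration only: a toy "difference convolution" layer that
# fuses a fixed Sobel gradient operator with a learnable 3x3 convolution.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SobelFusionConv2d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Learnable branch: captures semantic context.
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        # Fixed branch: horizontal Sobel operator applied depthwise.
        sobel = torch.tensor([[-1.0, 0.0, 1.0],
                              [-2.0, 0.0, 2.0],
                              [-1.0, 0.0, 1.0]])
        self.register_buffer("sobel_kernel", sobel.expand(in_ch, 1, 3, 3).clone())
        self.in_ch = in_ch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        learned = self.conv(x)                               # (N, out_ch, H, W)
        grad = F.conv2d(x, self.sobel_kernel, padding=1,
                        groups=self.in_ch)                   # (N, in_ch, H, W)
        # Fuse: broadcast the averaged gradient response onto every output channel.
        return learned + grad.mean(dim=1, keepdim=True)


# Example usage on a dummy RGB image.
if __name__ == "__main__":
    layer = SobelFusionConv2d(3, 16)
    y = layer(torch.randn(1, 3, 224, 224))
    print(y.shape)  # torch.Size([1, 16, 224, 224])
```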

Citation (APA)

Yin, Z., Wang, Z., Fan, C., Wang, X., & Qiu, T. (2023). Edge Detection via Fusion Difference Convolution. Sensors, 23(15). https://doi.org/10.3390/s23156883
