Learning deep structured multi-scale features for crisp and object occlusion edge detection

Abstract

A key challenge in edge detection is that predicted edges are thick and require Non-Maximum Suppression as a post-processing step to obtain crisp edges. In addition, object occlusion edge detection is an important research problem in computer vision. To improve edge crispness and the accuracy of predicted occlusion relationships, we propose MSDF (Multi-Scale Decode and Fusion), a novel edge detection method based on deep structured multi-scale features that generates crisp salient edges. The decoder layers of MSDF fuse adjacent-scale features and increase the affinity between them. We also propose a novel loss function to address the class imbalance in object occlusion edge detection, together with a two-stream learning framework that jointly predicts edges and occlusion orientation. Extensive experiments on the BSDS500 dataset and the larger NYUD dataset demonstrate the effectiveness of the proposed model and of the overall hierarchical framework. We also surpass the state of the art in occlusion edge detection on the BSDS ownership dataset.
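To make two of the abstract's ingredients concrete, the sketch below illustrates, in PyTorch, (a) a decoder block that fuses adjacent-scale features and (b) a class-balanced edge loss in the spirit of the weighted cross-entropy commonly used in edge detection (e.g., HED). This is an illustrative assumption, not the paper's MSDF implementation: the class and function names, channel sizes, and exact weighting scheme are hypothetical.

    # Illustrative sketch only; NOT the authors' MSDF code.
    # Shows adjacent-scale feature fusion and a class-balanced edge loss
    # under assumed tensor shapes and channel sizes.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class AdjacentScaleFusion(nn.Module):
        """Fuse a coarse-scale feature map into the next finer one (assumed channels)."""

        def __init__(self, fine_ch: int, coarse_ch: int, out_ch: int):
            super().__init__()
            self.reduce = nn.Conv2d(fine_ch + coarse_ch, out_ch, kernel_size=1)
            self.refine = nn.Sequential(
                nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )

        def forward(self, fine: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
            # Upsample the coarse map to the fine map's resolution, then fuse.
            coarse_up = F.interpolate(coarse, size=fine.shape[-2:],
                                      mode="bilinear", align_corners=False)
            fused = self.reduce(torch.cat([fine, coarse_up], dim=1))
            return self.refine(fused)


    def class_balanced_edge_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        """Weighted BCE that down-weights the dominant non-edge class.

        logits: (N, 1, H, W) raw predictions; target: (N, 1, H, W) float in {0, 1}.
        """
        pos = target.sum()
        neg = target.numel() - pos
        beta = neg / (pos + neg)                      # fraction of non-edge pixels
        weights = torch.where(target > 0.5, beta, 1.0 - beta)
        return F.binary_cross_entropy_with_logits(logits, target, weight=weights)


    # Example usage with dummy inputs (all sizes are assumptions).
    fine = torch.randn(2, 64, 80, 80)
    coarse = torch.randn(2, 128, 40, 40)
    fused = AdjacentScaleFusion(64, 128, 64)(fine, coarse)        # -> (2, 64, 80, 80)
    logits = torch.randn(2, 1, 80, 80)
    gt = (torch.rand(2, 1, 80, 80) > 0.9).float()                 # sparse edge map
    loss = class_balanced_edge_loss(logits, gt)

The weighting mirrors the intuition stated in the abstract: because edge pixels are rare, an unweighted loss is dominated by the background, so each class is reweighted by the proportion of the opposite class.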

Citation (APA)

Dong, Z., Zhang, R., & Shao, X. (2019). Learning deep structured multi-scale features for crisp and object occlusion edge detection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11729 LNCS, pp. 253–266). Springer Verlag. https://doi.org/10.1007/978-3-030-30508-6_21
