Salient object detection is receiving increasing attention from researchers, and an accurate saliency map benefits subsequent tasks. However, in most saliency maps predicted by existing models, object regions are blurred and object edges are irregular. One reason is that traditional methods rely mainly on hand-crafted features to predict salient objects, so pixels belonging to the same object are often assigned different saliency scores. In addition, convolutional neural network (CNN)-based models predict saliency maps at the patch scale, which makes object edges in the output fuzzy. In this paper, we add an edge convolution constraint to a modified U-Net to predict the saliency map of an image. The adopted network structure fuses features from different layers to reduce information loss, and our SalNet predicts the saliency map pixel by pixel rather than at the patch scale as CNN-based models do. Moreover, to better guide the network in mining object edge information, we design a new loss function based on image convolution, which adds an L1 constraint between the edge information of the saliency map and that of the ground truth. Finally, experimental results show that SalNet is effective for the salient object detection task and is competitive with 11 state-of-the-art models.
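The following is a minimal sketch of a convolutional edge-constraint loss of the kind described above. The abstract does not specify the edge kernel or the base loss, so this sketch assumes a fixed Sobel convolution as the edge extractor, binary cross-entropy as the pixel-wise saliency term, and a hypothetical weighting coefficient edge_weight; it illustrates the idea rather than reproducing the paper's exact formulation.

    # Sketch (assumptions noted above): Sobel edges + L1 edge constraint, PyTorch.
    import torch
    import torch.nn.functional as F

    def sobel_edges(x: torch.Tensor) -> torch.Tensor:
        """Edge responses of a single-channel map (N, 1, H, W) via fixed Sobel kernels."""
        kx = torch.tensor([[-1., 0., 1.],
                           [-2., 0., 2.],
                           [-1., 0., 1.]], device=x.device).view(1, 1, 3, 3)
        ky = kx.transpose(2, 3)            # Sobel kernel for the vertical direction
        gx = F.conv2d(x, kx, padding=1)    # horizontal gradient
        gy = F.conv2d(x, ky, padding=1)    # vertical gradient
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)  # gradient magnitude (eps for stability)

    def edge_constrained_loss(pred: torch.Tensor,
                              gt: torch.Tensor,
                              edge_weight: float = 1.0) -> torch.Tensor:
        """Pixel-wise saliency loss plus an L1 constraint on edge information.

        pred, gt: (N, 1, H, W) tensors; pred holds probabilities in [0, 1].
        edge_weight is a hypothetical balancing coefficient, not taken from the paper.
        """
        saliency_term = F.binary_cross_entropy(pred, gt)
        edge_term = F.l1_loss(sobel_edges(pred), sobel_edges(gt))
        return saliency_term + edge_weight * edge_term

Because the edge extractor is itself a convolution with fixed kernels, the constraint stays differentiable and can be minimized jointly with the pixel-wise term during training.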
Han, L., Li, X., & Dong, Y. (2019). Convolutional Edge Constraint-Based U-Net for Salient Object Detection. IEEE Access, 7, 48890–48900. https://doi.org/10.1109/ACCESS.2019.2910572