Built on Fully Convolutional Networks (FCNs), recent salient object detection (SOD) methods achieve impressive results. Some studies improve classical SOD frameworks by exploiting auxiliary information such as fixation points, salient object numbers, and salient edges. These works adopt auxiliary information by embedding sub-networks into the main network. However, how to incorporate arbitrary auxiliary information with minimal coupling, regardless of its specific structure, remains unexplored in SOD. In this paper, we present DANet, a new Dynamic network that leverages arbitrary Auxiliary information for SOD. The proposed framework consists of 1) a Dynamic Weight Generator (DWG), which converts arbitrary auxiliary features into dynamic weights, 2) a Dynamic Bridge Block (DBB), which uses dynamic weight convolution to incorporate auxiliary information from the DWG and then refines the fused features, and 3) a two-step training strategy that alleviates the side effects caused by drastic changes across different input images. Extensive experiments demonstrate the effectiveness of different types of auxiliary information and show that the proposed framework is a universal method for improving SOD with auxiliary information. Comparison experiments show that DANet achieves state-of-the-art performance without any pre-processing or post-processing.
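The core mechanism described above, generating convolution weights from auxiliary features and applying them to the main features, can be sketched minimally. The following NumPy toy example is an illustrative simplification, not the paper's implementation: `dynamic_weight_generator` stands in for the DWG (here, global average pooling plus a learned linear projection, which are assumed details) and `dynamic_bridge_block` applies the resulting per-image 1x1 kernel to the main-branch features, standing in for the DBB's dynamic weight convolution.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamic_weight_generator(aux_feat, proj, c_out, c_in):
    # Hypothetical DWG sketch: global-average-pool the auxiliary feature map,
    # then linearly project the pooled vector into a per-image 1x1 conv kernel.
    pooled = aux_feat.mean(axis=(1, 2))       # (C_aux,)
    weights = proj @ pooled                   # (C_out * C_in,)
    return weights.reshape(c_out, c_in)       # dynamic 1x1 kernel

def dynamic_bridge_block(main_feat, kernel):
    # Apply the dynamically generated 1x1 convolution to the main features,
    # fusing auxiliary information into the SOD branch (DBB simplification).
    return np.tensordot(kernel, main_feat, axes=([1], [0]))  # (C_out, H, W)

# Toy shapes: auxiliary features (C_aux, H, W), main features (C_in, H, W).
c_aux, c_in, c_out, h, w = 8, 16, 16, 4, 4
aux = rng.standard_normal((c_aux, h, w))
main = rng.standard_normal((c_in, h, w))
proj = rng.standard_normal((c_out * c_in, c_aux)) * 0.1  # learned in practice

kernel = dynamic_weight_generator(aux, proj, c_out, c_in)
fused = dynamic_bridge_block(main, kernel)
print(fused.shape)  # -> (16, 4, 4)
```

Because the kernel is a function of the auxiliary input rather than a fixed parameter, a different auxiliary image yields a different kernel for the same main features, which is what makes the fusion "dynamic" and decoupled from any specific auxiliary sub-network structure.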
Citation:
Zhu, Y., Cheng, T., Tang, H., & Chen, C. (2021). DANet: Dynamic Salient Object Detection Networks Leveraging Auxiliary Information. IEEE Access, 9, 92070–92082. https://doi.org/10.1109/ACCESS.2021.3092191