Abstract
Visual attention plays an important role in saliency detection by highlighting meaningful context regions. In this paper, we present a novel saliency detection method based on a bilateral attention network. The proposed network consists of two branches: i) a spatial path that uses an encoder-decoder structure to learn spatial cues and ii) a context path that uses an attention mechanism to learn contextual cues. A feature aggregation module then predicts salient objects by concatenating the two sets of cues. To optimize the network weights while mitigating the class imbalance problem, we minimize the dice coefficient loss together with the classical cross-entropy loss. The proposed network predicts salient regions in an end-to-end manner without post-processing. Experimental results show that the proposed network achieves better performance than existing state-of-the-art methods in most cases. Furthermore, the proposed network takes only 0.03 seconds to process a 224 × 224 image. The code for the proposed method can be found at the following URL: https://github.com/tiruss/SdBAN
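The combined loss described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the abstract does not specify how the two terms are weighted, so an unweighted sum is assumed, and the smoothing constants are conventional choices.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Dice coefficient loss: 1 - (2 * intersection / total), which is
    insensitive to the foreground/background class imbalance because it
    normalizes by the sizes of the predicted and true salient regions."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def cross_entropy_loss(pred, target, eps=1e-7):
    """Classical per-pixel binary cross-entropy on predicted probabilities."""
    p = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))

def combined_loss(pred, target):
    # Assumed equal weighting of the two terms (not stated in the abstract).
    return cross_entropy_loss(pred, target) + dice_loss(pred, target)
```

For a perfect prediction both terms approach zero, while a prediction that labels everything background on an imbalanced map is penalized strongly by the dice term even when its cross-entropy is small.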
Citation
Kang, D., Park, S., & Paik, J. (2020). SdBAN: Salient Object Detection Using Bilateral Attention Network with Dice Coefficient Loss. IEEE Access, 8, 104357–104370. https://doi.org/10.1109/ACCESS.2020.2999627