TANet: Transformer-based asymmetric network for RGB-D salient object detection


Abstract

Existing RGB-D salient object detection methods mainly rely on a symmetric two-stream Convolutional Neural Network (CNN) to extract RGB and depth channel features separately. However, this conventional symmetric structure has two problems: first, the ability of CNNs to learn global context is limited; second, the symmetric two-stream design ignores the inherent differences between the two modalities. In this study, a Transformer-based asymmetric network is proposed to tackle these issues. The authors employ the powerful feature extraction capability of the Transformer to extract global semantic information from RGB data, and design a lightweight CNN backbone to extract spatial structure information from depth data without pre-training. The asymmetric hybrid encoder effectively reduces the number of model parameters and increases speed without sacrificing performance. A cross-modal feature fusion module is then designed that mutually enhances and fuses the RGB and depth features. Finally, the authors add edge prediction as an auxiliary task and propose an edge enhancement module to generate sharper contours. Extensive experiments demonstrate that the proposed method outperforms 14 state-of-the-art RGB-D methods on six public datasets. The code will be released at https://github.com/lc012463/TANet.
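The asymmetric encoder described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of the idea (a Transformer branch for global RGB semantics and a lightweight, non-pre-trained CNN branch for depth structure), not the authors' actual implementation; all layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AsymmetricEncoder(nn.Module):
    """Illustrative sketch of an asymmetric RGB-D encoder (not TANet itself):
    a Transformer processes RGB patches for global context, while a small
    CNN extracts spatial structure from the depth map."""

    def __init__(self, dim=64, patch=16):
        super().__init__()
        # RGB branch: patch embedding followed by a small Transformer encoder
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.rgb_transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Depth branch: lightweight CNN trained from scratch (no pre-training)
        self.depth_cnn = nn.Sequential(
            nn.Conv2d(1, dim, 3, stride=patch // 2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, rgb, depth):
        # RGB: (B, 3, H, W) -> (B, N, dim) token sequence for the Transformer
        tokens = self.patch_embed(rgb).flatten(2).transpose(1, 2)
        rgb_feat = self.rgb_transformer(tokens)
        # Depth: (B, 1, H, W) -> (B, N, dim), matching the RGB token grid
        depth_feat = self.depth_cnn(depth).flatten(2).transpose(1, 2)
        return rgb_feat, depth_feat
```

With 224x224 inputs and a 16-pixel patch size, both branches yield 14x14 = 196 aligned feature tokens, which a cross-modal fusion module could then combine.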

Citation (APA)

Liu, C., Yang, G., Wang, S., Wang, H., Zhang, Y., & Wang, Y. (2023). TANet: Transformer-based asymmetric network for RGB-D salient object detection. IET Computer Vision, 17(4), 415–430. https://doi.org/10.1049/cvi2.12177
