Multi-type self-attention guided degraded saliency detection

Abstract

Existing saliency detection techniques are sensitive to image quality and perform poorly on degraded images. In this paper, we systematically analyze the current state of research on detecting salient objects in degraded images and then propose a new multi-type self-attention network, MSANet, for degraded saliency detection. The main contributions are: 1) applying attention transfer learning to promote semantic detail perception and internal feature mining of the target network on degraded images; and 2) developing a multi-type self-attention mechanism that recalculates the weights of multi-scale features. By computing global and local attention scores, we obtain weighted features at each scale, effectively suppressing interference from noise and redundant information and extracting more complete object boundaries. The proposed MSANet converts low-quality inputs into high-quality saliency maps directly, in an end-to-end fashion. Experiments on seven widely used datasets show that our approach performs well on both clear and degraded images.
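
The abstract itself contains no code; the two sketches below are illustrative assumptions written in PyTorch, not the authors' implementation. The first sketch assumes the attention transfer in contribution 1 follows the common activation-based formulation, matching normalized spatial attention maps between a teacher trained on clear images and a student trained on degraded ones; the function name and tensor shapes are hypothetical.

```python
import torch
import torch.nn.functional as F

def attention_transfer_loss(feat_student: torch.Tensor,
                            feat_teacher: torch.Tensor) -> torch.Tensor:
    """Match normalized spatial attention maps of teacher and student.

    Both inputs are (B, C, H, W) feature maps from corresponding layers.
    This is a sketch of activation-based attention transfer; MSANet's
    actual transfer scheme may differ.
    """
    def attention_map(feat: torch.Tensor) -> torch.Tensor:
        # Channel-wise sum of squared activations -> (B, H*W), L2-normalized.
        a = feat.pow(2).sum(dim=1).flatten(1)
        return F.normalize(a, p=2, dim=1)

    return (attention_map(feat_student) - attention_map(feat_teacher)).pow(2).mean()
```

The second sketch illustrates contribution 2: reweighting a feature map with both a global (channel-level) and a local (pixel-level) attention score. Here the global branch is a squeeze-and-excitation style gate and the local branch a small convolution over pooled channel statistics; these are stand-in choices, since the paper's exact multi-type self-attention is not specified in the abstract.

```python
import torch
import torch.nn as nn

class MultiTypeSelfAttention(nn.Module):
    """Sketch: combine global and local attention scores to reweight features."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Global branch: pool away spatial dims, score each channel.
        self.global_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        # Local branch: score each spatial location from channel statistics.
        self.local_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Global attention score: per-channel weights in (0, 1).
        g = self.global_fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        # Local attention score: per-pixel weights from mean/max channel maps,
        # intended to suppress noisy and redundant regions.
        stats = torch.cat([x.mean(dim=1, keepdim=True),
                           x.max(dim=1, keepdim=True).values], dim=1)
        l = self.local_conv(stats)
        # Recalculated (reweighted) features, same shape as the input.
        return x * g * l
```

Applied to each scale of a multi-scale backbone, such a module would yield the weighted features of different scales described above; for example, `MultiTypeSelfAttention(256)(torch.randn(2, 256, 64, 64))` returns a reweighted tensor of the same shape.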

Citation (APA)

Zhou, Z., Wang, Z., Lu, H., Wang, S., & Sun, M. (2020). Multi-type self-attention guided degraded saliency detection. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 13082–13089). AAAI Press. https://doi.org/10.1609/aaai.v34i07.7010
