Current segmentation networks based on the encoder-decoder architecture recover spatial information by stacking convolution blocks in the decoder. Unconventionally, we consider that iteratively exploiting spatial attention from higher stages to refine lower-stage features forms an attention-driven mechanism that recovers detailed features step by step. In this paper, we rethink image segmentation from a novel perspective: as a process of step-wise focusing on targets. We develop a lightweight Focus Module (FM) and present a powerful, transplantable Step-wise Focus Network (SFN) for biomedical image segmentation. FM extracts high-level spatial attention and combines it with low-level features through our proposed focus learning to generate revised features. SFN extends the U-Net encoder sub-network and employs only FMs to construct a focus path that consistently refines features. We evaluate SFNs against U-Net and other state-of-the-art methods on multiple biomedical image segmentation benchmarks. While using 30% of the floating-point operations and 60% of the parameters of U-Net, SFNs achieve strong performance without any post-processing.
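The abstract does not specify the internals of the Focus Module, but its described role (extract spatial attention from a high-stage feature map and use it to reweight lower-stage features) can be illustrated with a minimal NumPy sketch. The function name `focus_module`, the channel-mean-plus-sigmoid attention, and the nearest-neighbour upsampling are all assumptions for illustration, not the authors' actual formulation of focus learning.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def focus_module(low_feat, high_feat):
    """Hypothetical sketch of a Focus Module: derive a spatial
    attention map from the high-stage feature, upsample it to the
    low-stage resolution, and use it to reweight low-stage features.
    (Assumed form; the paper's focus learning may differ.)"""
    # Collapse the high-stage channels into a single spatial attention map.
    attn = sigmoid(high_feat.mean(axis=0, keepdims=True))   # shape (1, h, w)
    # Nearest-neighbour upsample by 2x to match the low-stage spatial size.
    attn_up = attn.repeat(2, axis=1).repeat(2, axis=2)      # shape (1, 2h, 2w)
    # Attention-weighted refinement of the low-stage features.
    return low_feat * attn_up

# Toy features: an 8-channel low stage at 4x4 and a 16-channel high stage at 2x2.
low = np.random.rand(8, 4, 4)
high = np.random.rand(16, 2, 2)
refined = focus_module(low, high)
print(refined.shape)  # (8, 4, 4)
```

The refined features keep the low-stage resolution, so a chain of such modules could form the step-wise focus path the abstract describes, with each step reusing attention from the stage above.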
Wei, S., & Wang, L. (2019). Learn to Step-wise Focus on Targets for Biomedical Image Segmentation. In Lecture Notes in Computer Science (Vol. 11861, pp. 525–532). Springer. https://doi.org/10.1007/978-3-030-32692-0_60