Image segmentation is a fundamental task in medical image analysis, and its accuracy has been improved by the development of neural networks. However, existing algorithms that achieve high performance require high-resolution inputs, resulting in substantial computational expense and limiting their applicability in the medical field. Several studies have proposed dual-stream learning frameworks that incorporate a super-resolution task as an auxiliary task. In this paper, we rethink these frameworks and reveal that feature similarity between the tasks is insufficient to constrain vessel or lesion segmentation in medical images, because such targets occupy only a small proportion of the image. To address this issue, we propose DS2F (Dual-Stream Shared Feature), a framework built around a Shared Feature Extraction Module (SFEM). Specifically, we present the Multi-Scale Cross Gate (MSCG), which exploits multi-scale features, as a novel instance of the SFEM. We then define a proxy task and a proxy loss to make the shared features focus on the targets, based on the assumption that a limited set of features shared between the tasks benefits the performance of both. Extensive experiments on six publicly available datasets covering three different scenarios verify the effectiveness of our framework, and various ablation studies further demonstrate the significance of DS2F.
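The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of how a multi-scale cross-gating block shared between a segmentation stream and a super-resolution stream might be wired up. The module name, channel sizes, pooling scales, and gating scheme are assumptions for illustration; they do not reproduce the authors' DS2F/MSCG implementation.

```python
# Illustrative sketch (not the paper's implementation): each stream's features
# are modulated by multi-scale context pooled from the other stream, so both
# streams operate on a shared, mutually gated representation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleCrossGate(nn.Module):
    """Gate each stream's features with multi-scale context from the other stream."""

    def __init__(self, channels: int, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        # One 1x1 conv per scale to turn the opposite stream's context into a gate.
        self.gates_seg = nn.ModuleList(nn.Conv2d(channels, channels, 1) for _ in scales)
        self.gates_sr = nn.ModuleList(nn.Conv2d(channels, channels, 1) for _ in scales)

    def _gate(self, target, source, convs):
        h, w = target.shape[-2:]
        gate = 0
        for s, conv in zip(self.scales, convs):
            ctx = F.avg_pool2d(source, kernel_size=s) if s > 1 else source
            ctx = F.interpolate(conv(ctx), size=(h, w),
                                mode="bilinear", align_corners=False)
            gate = gate + ctx
        return target * torch.sigmoid(gate)

    def forward(self, feat_seg, feat_sr):
        # Returns gated segmentation-stream and super-resolution-stream features.
        return (self._gate(feat_seg, feat_sr, self.gates_seg),
                self._gate(feat_sr, feat_seg, self.gates_sr))


if __name__ == "__main__":
    gate = MultiScaleCrossGate(channels=64)
    f_seg = torch.randn(2, 64, 32, 32)   # hypothetical segmentation-stream features
    f_sr = torch.randn(2, 64, 32, 32)    # hypothetical super-resolution-stream features
    g_seg, g_sr = gate(f_seg, f_sr)
    print(g_seg.shape, g_sr.shape)       # both torch.Size([2, 64, 32, 32])
```

The proxy task and proxy loss described in the abstract would operate on these shared features; their exact form is not specified here.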
CITATION STYLE
Qiu, Z., Hu, Y., Chen, X., Zeng, D., Hu, Q., & Liu, J. (2024). Rethinking Dual-Stream Super-Resolution Semantic Learning in Medical Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(1), 451–464. https://doi.org/10.1109/TPAMI.2023.3322735