SegViT v2: Exploring Efficient and Continual Semantic Segmentation with Plain Vision Transformers

Abstract

This paper investigates the capability of plain Vision Transformers (ViTs) for semantic segmentation within the encoder–decoder framework and introduces SegViTv2. We propose a novel Attention-to-Mask (ATM) module to build a lightweight decoder that is effective for plain ViTs: ATM converts the global attention map into semantic masks, yielding high-quality segmentation results. Our decoder outperforms the popular UPerNet decoder across various ViT backbones while consuming only about 5% of its computational cost. For the encoder, we address the relatively high computational cost of ViT-based encoders by proposing a Shrunk++ structure that incorporates edge-aware query-based down-sampling (EQD) and query-based up-sampling (QU) modules. The Shrunk++ structure reduces the computational cost of the encoder by up to 50% while maintaining competitive performance. Furthermore, we adapt SegViT to continual semantic segmentation, demonstrating nearly zero forgetting of previously learned knowledge. Experiments show that our proposed SegViTv2 surpasses recent segmentation methods on three popular benchmarks: ADE20k, COCO-Stuff-10k and PASCAL-Context. The code is available at https://github.com/zbwxp/SegVit.
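
The abstract's central mechanism, reading segmentation masks directly off the attention map, can be sketched in a few lines. Below is a minimal, single-head illustration based only on the abstract's description; the names (ATMHead, cls_head) and details such as the sigmoid read-out are assumptions, not the authors' released code (see the linked repository for that).

```python
import torch
import torch.nn as nn


class ATMHead(nn.Module):
    """Minimal single-head sketch of an Attention-to-Mask (ATM) style decoder."""

    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        # One learnable query (class embedding) per semantic class.
        self.class_queries = nn.Parameter(torch.randn(num_classes, dim))
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        # Updated class tokens predict a presence score per class.
        self.cls_head = nn.Linear(dim, 1)

    def forward(self, feats: torch.Tensor):
        # feats: (B, N, dim) patch tokens from a plain ViT backbone.
        B, _, dim = feats.shape
        q = self.q_proj(self.class_queries).unsqueeze(0).expand(B, -1, -1)
        k = self.k_proj(feats)
        v = self.v_proj(feats)

        # Similarity between each class query and every patch token.
        sim = torch.einsum('bcd,bnd->bcn', q, k) * dim ** -0.5

        # Softmax over patches gives ordinary cross-attention, used to
        # update the class tokens for classification ...
        attn = sim.softmax(dim=-1)
        class_scores = self.cls_head(
            torch.einsum('bcn,bnd->bcd', attn, v)).squeeze(-1)  # (B, C)

        # ... while a sigmoid over the *same* similarity map is read out
        # directly as one spatial mask per class: attention-to-mask.
        masks = sim.sigmoid()  # (B, C, N); reshape to (B, C, H, W) outside
        return class_scores, masks
```

Because the masks are a by-product of attention the decoder adds almost no extra computation, which is consistent with the roughly 5%-of-UPerNet cost claimed above.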
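The Shrunk++ encoder ideas, query-based down-sampling before the expensive ViT blocks and query-based up-sampling afterwards, can likewise be sketched with plain cross-attention. This is a hypothetical illustration: the module and method names are invented, strided queries are just one cheap choice, and the paper's EQD is additionally edge-aware (keeping boundary tokens at full resolution), which is omitted here.

```python
import torch
import torch.nn as nn


class ShrunkStage(nn.Module):
    """Sketch of query-based down-sampling (QD) and up-sampling (QU).

    Hypothetical re-implementation for illustration; not the paper's EQD,
    which also preserves edge tokens at full resolution.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.down_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.up_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def downsample(self, tokens: torch.Tensor, stride: int = 2):
        # Strided token samples act as queries; each aggregates its
        # neighbourhood from the full sequence via cross-attention.
        queries = tokens[:, ::stride, :]           # (B, N // stride, D)
        shrunk, _ = self.down_attn(queries, tokens, tokens)
        return shrunk

    def upsample(self, full_tokens: torch.Tensor, shrunk: torch.Tensor):
        # The original full-resolution tokens query the processed shrunk
        # sequence, restoring a per-token feature at full resolution.
        restored, _ = self.up_attn(full_tokens, shrunk, shrunk)
        return restored
```

Running the bulk of the ViT blocks on the shrunk sequence is where the up-to-50% encoder savings quoted in the abstract would come from, since self-attention cost grows quadratically with token count.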

Citation (APA)

Zhang, B., Liu, L., Phan, M. H., Tian, Z., Shen, C., & Liu, Y. (2024). SegViT v2: Exploring Efficient and Continual Semantic Segmentation with Plain Vision Transformers. International Journal of Computer Vision, 132(4), 1126–1147. https://doi.org/10.1007/s11263-023-01894-8
