EMSViT: Efficient Multi Scale Vision Transformer for Biomedical Image Segmentation

Abstract

In this paper, we propose a novel network named Efficient Multi Scale Vision Transformer for Biomedical Image Segmentation (EMSViT). Our network splits the input feature maps into three parts using 1 × 1, 3 × 3 and 5 × 5 convolutions in both the encoder and the decoder. A concatenation operator merges these features before they are fed to three consecutive transformer blocks with an attention mechanism embedded inside them. Skip connections link the encoder and decoder transformer blocks. Similarly, transformer blocks and the multi-scale architecture are used in the decoder before the features are linearly projected to produce the output segmentation map. We evaluate our network on the Synapse multi-organ segmentation dataset, the Automated Cardiac Diagnosis Challenge dataset, the Brain Tumour MRI segmentation dataset and the Spleen CT segmentation dataset. Without bells and whistles, our network outperforms most previous state-of-the-art CNN- and transformer-based models, using the Dice score and the Hausdorff distance as evaluation metrics.
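The multi-scale split described above can be sketched as a small PyTorch module: three parallel convolution branches (1 × 1, 3 × 3, 5 × 5) whose outputs are concatenated along the channel axis before being passed on to the transformer blocks. This is a minimal illustration under assumed channel sizes, not the authors' actual implementation; the class name and channel split are hypothetical.

```python
import torch
import torch.nn as nn


class MultiScaleBlock(nn.Module):
    """Sketch of a multi-scale feature split: 1x1, 3x3 and 5x5
    convolutions applied in parallel, outputs concatenated.
    Channel allocation across branches is an assumption."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        branch_ch = out_ch // 3
        # Padding keeps the spatial dimensions identical across branches,
        # so the outputs can be concatenated channel-wise.
        self.conv1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.conv3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.conv5 = nn.Conv2d(in_ch, out_ch - 2 * branch_ch,
                               kernel_size=5, padding=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Merge the three scales along the channel dimension, as the
        # abstract's "Concat" step describes.
        return torch.cat([self.conv1(x), self.conv3(x), self.conv5(x)], dim=1)


if __name__ == "__main__":
    x = torch.randn(1, 16, 64, 64)          # (batch, channels, H, W)
    y = MultiScaleBlock(16, 48)(x)
    print(tuple(y.shape))                   # spatial size is preserved
```

In an encoder/decoder network, a block like this would sit before each group of transformer blocks, giving the attention layers access to features at several receptive-field sizes at once.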

Citation (APA)

Sagar, A. (2022). EMSViT: Efficient Multi Scale Vision Transformer for Biomedical Image Segmentation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12962 LNCS, pp. 39–51). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-08999-2_3
