Multi-input Vision Transformer with Similarity Matching

Abstract

Multi-input models for image classification have recently gained considerable attention. However, multi-input models do not always outperform single models. In this paper, we propose a multi-input vision transformer (ViT) with similarity matching, which uses original images and images cropped to the region of interest (ROI) as inputs, without additional encoder architectures. Specifically, the two types of images are matched by cosine similarity in descending order and serve as inputs to a multi-input model with two parallel ViT architectures. We conduct two experiments on pediatric orbital wall fracture and chest X-ray datasets. The multi-input models with similarity matching outperform the baseline models and achieve balanced results. Furthermore, our method can provide both global and local features, and the Grad-CAM results demonstrate that the two inputs of the proposed mechanism help the model study the image in a complementary manner. The code is available at https://github.com/duneag2/vit-similarity.
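The abstract only outlines the matching step at a high level. Purely as an illustration, the sketch below shows one possible greedy reading of it: extract embeddings for the original and ROI-cropped images with a ViT backbone, then pair them in descending order of cosine similarity. The backbone choice (torchvision's ViT-B/16), the greedy pairing, and all function names here are assumptions for illustration, not the authors' implementation; the actual pipeline is in the linked repository.

```python
# Minimal sketch of cosine-similarity matching between original and ROI-cropped
# images. Hypothetical stand-in for the method described in the abstract; the
# real code lives at https://github.com/duneag2/vit-similarity.
import torch
import torch.nn.functional as F
from torchvision.models import vit_b_16


def extract_features(encoder: torch.nn.Module, images: torch.Tensor) -> torch.Tensor:
    """Return per-image embeddings (class-token features) from a ViT backbone."""
    with torch.no_grad():
        return encoder(images)


def match_by_cosine_similarity(orig_feats: torch.Tensor,
                               crop_feats: torch.Tensor) -> torch.Tensor:
    """Greedily pair originals with crops in descending order of cosine similarity."""
    # Pairwise cosine similarity: (N, D) x (M, D) -> (N, M).
    sim = F.cosine_similarity(orig_feats.unsqueeze(1), crop_feats.unsqueeze(0), dim=-1)
    flat = [(sim[i, j].item(), i, j)
            for i in range(sim.size(0)) for j in range(sim.size(1))]
    pairs, used_orig, used_crop = [], set(), set()
    for s, i, j in sorted(flat, reverse=True):  # highest similarity first
        if i not in used_orig and j not in used_crop:
            pairs.append((i, j))
            used_orig.add(i)
            used_crop.add(j)
    return torch.tensor(sorted(pairs))  # (N, 2) rows: (original idx, matched crop idx)


if __name__ == "__main__":
    # Assumed backbone: torchvision ViT-B/16 with its classifier head removed,
    # used only as a feature extractor for the matching step.
    encoder = vit_b_16(weights=None)
    encoder.heads = torch.nn.Identity()
    encoder.eval()

    originals = torch.randn(4, 3, 224, 224)  # full images
    crops = torch.randn(4, 3, 224, 224)      # ROI crops resized to the ViT input size

    pairs = match_by_cosine_similarity(extract_features(encoder, originals),
                                       extract_features(encoder, crops))
    print(pairs)
```

In this reading, the matched pairs would then be fed to the two parallel ViT branches of the multi-input model; how the branch outputs are fused is not specified in the abstract and is left out here.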

Citation (APA)

Lee, S., Hwang, S. H., Oh, S., Park, B. J., & Cho, Y. (2023). Multi-input Vision Transformer with Similarity Matching. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 14277 LNCS, pp. 184–193). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-46005-0_16
