Vision Transformers for Breast Cancer Histology Image Classification

Abstract

We propose a self-attention Vision Transformer (ViT) model tailored for breast cancer histology image classification. The architecture is a stack of transformer layers, each consisting of a multi-head self-attention mechanism and a position-wise feed-forward network. We train it under different strategies and configurations (pretraining, input resize dimension, data augmentation, patch overlap, and patch size) to investigate their impact on classification performance. Experimental results show that pretraining on ImageNet and applying geometric and color data augmentation significantly improve the model's accuracy on the task. Additionally, a patch size of 16 × 16 with no patch overlap was found to be optimal. These findings provide valuable insights for the design of future ViT-based models for similar image classification tasks.
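As a concrete illustration, the sketch below shows how a configuration of this kind could be assembled in PyTorch with the timm library: an ImageNet-pretrained ViT with 16 × 16 non-overlapping patches, plus a geometric and color augmentation pipeline. The specific model variant (vit_base_patch16_224), the augmentation parameters, the 224 × 224 resize dimension, and the 4-class output head are assumptions made for illustration, not the authors' exact settings.

```python
import torch
import torch.nn as nn
import timm
from torchvision import transforms

# Geometric and color augmentations of the kind the abstract describes;
# the parameter values here are assumptions, not the paper's settings.
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),              # resize dimension (assumed)
    transforms.RandomHorizontalFlip(),          # geometric augmentation
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=90),
    transforms.ColorJitter(brightness=0.2, contrast=0.2,
                           saturation=0.2, hue=0.05),  # color augmentation
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# ImageNet-pretrained ViT with 16x16 non-overlapping patches, matching the
# configuration the abstract reports as optimal. The 4-class head is an
# assumption (e.g. normal / benign / in situ / invasive histology classes).
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=4)

# Standard fine-tuning skeleton (assumed, not taken from the paper).
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch of shape [batch, 3, 224, 224].
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 4, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

In a real run, the dummy tensors would be replaced by a DataLoader over histology images preprocessed with train_transform; varying the model name (e.g. a patch-32 variant) or the overlap via a custom patch-embedding stride is how the configurations the abstract compares would be swept.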

Citation (APA)

Baroni, G. L., Rasotto, L., Roitero, K., Siraj, A. H., & Della Mea, V. (2024). Vision Transformers for Breast Cancer Histology Image Classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 14366, pp. 15–26). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-51026-7_2
