Generating Mammography Reports from Multi-view Mammograms with BERT


Abstract

Writing mammography reports can be error-prone and time-consuming for radiologists. In this paper, we propose a method to generate mammography reports given four images, corresponding to the four views used in screening mammography. To the best of our knowledge, our work represents the first attempt to generate mammography reports using deep learning. We propose an encoder-decoder model that includes an EfficientNet-based encoder and a Transformer-based decoder. We demonstrate that the Transformer-based attention mechanism can combine visual and semantic information to localize salient regions on the input mammograms and generate a visually interpretable report. The conducted experiments, including an evaluation by a certified radiologist, show the effectiveness of the proposed method. Our code is available at https://github.com/sberbank-ai-lab/mammo2text.
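
The abstract describes a multi-view encoder-decoder architecture: a shared EfficientNet backbone encodes the four screening views, and a Transformer decoder attends over the resulting visual features while generating the report text. The following is a minimal PyTorch sketch of that general layout, not the authors' implementation (their code is at the repository linked above); the backbone choice (EfficientNet-B0), the generic Transformer decoder standing in for the paper's BERT-based decoder, and all module names and hyperparameters are illustrative assumptions.

# Minimal sketch (assumed layout, not the authors' code): a shared EfficientNet-B0
# encoder over the four screening views and a Transformer decoder that cross-attends
# to the pooled visual tokens while generating report tokens.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0


class Mammo2TextSketch(nn.Module):
    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=6, max_len=256):
        super().__init__()
        # Shared CNN backbone applied to each of the four views (L-CC, R-CC, L-MLO, R-MLO).
        self.backbone = efficientnet_b0(weights=None).features   # -> (B*4, 1280, H', W')
        self.proj = nn.Linear(1280, d_model)                      # project visual features to d_model
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def encode(self, views):
        # views: (B, 4, 3, H, W) -> visual tokens (B, 4 * H' * W', d_model)
        b = views.size(0)
        feats = self.backbone(views.flatten(0, 1))                # (B*4, 1280, H', W')
        feats = feats.flatten(2).transpose(1, 2)                  # (B*4, H'*W', 1280)
        feats = self.proj(feats)                                  # (B*4, H'*W', d_model)
        return feats.reshape(b, -1, feats.size(-1))               # concatenate tokens from the 4 views

    def forward(self, views, report_ids):
        # report_ids: (B, T) token ids of the (shifted) report text
        memory = self.encode(views)
        t = report_ids.size(1)
        pos = torch.arange(t, device=report_ids.device)
        tgt = self.token_emb(report_ids) + self.pos_emb(pos)
        # Causal mask so each report token only attends to earlier tokens.
        causal = torch.triu(torch.full((t, t), float("-inf"), device=report_ids.device), diagonal=1)
        out = self.decoder(tgt, memory, tgt_mask=causal)
        return self.lm_head(out)                                  # (B, T, vocab_size)


if __name__ == "__main__":
    model = Mammo2TextSketch(vocab_size=30522)
    views = torch.randn(2, 4, 3, 224, 224)
    tokens = torch.randint(0, 30522, (2, 32))
    print(model(views, tokens).shape)                             # torch.Size([2, 32, 30522])

In this sketch the visual tokens from the four views are concatenated into one memory sequence, so the decoder's cross-attention can weight regions from any view at each generation step, which is the kind of mechanism the abstract credits with localizing salient regions on the input mammograms.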

Citation (APA)

Yalunin, A., Sokolova, E., Burenko, I., Ponomarchuk, A., Puchkova, O., & Umerenkov, D. (2021). Generating Mammography Reports from Multi-view Mammograms with BERT. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 153–162). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-emnlp.15
