Writing mammography reports can be error-prone and time-consuming for radiologists. In this paper we propose a method to generate mammography reports given four images, corresponding to the four views used in screening mammography. To the best of our knowledge, our work represents the first attempt to generate mammography reports using deep learning. We propose an encoder-decoder model that includes an EfficientNet-based encoder and a Transformer-based decoder. We demonstrate that the Transformer-based attention mechanism can combine visual and semantic information to localize salient regions on the input mammograms and generate a visually interpretable report. The conducted experiments, including an evaluation by a certified radiologist, show the effectiveness of the proposed method. Our code is available at https://github.com/sberbank-ai-lab/mammo2text.
Citation:
Yalunin, A., Sokolova, E., Burenko, I., Ponomarchuk, A., Puchkova, O., & Umerenkov, D. (2021). Generating Mammography Reports from Multi-view Mammograms with BERT. In Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021 (pp. 153–162). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-emnlp.15