This paper describes the solution submitted by the QIAI lab to the Radiology Report Summarization (RRS) challenge at MEDIQA 2021. It investigates whether using multimodality during training improves the summarization performance of the model at test time. Our preliminary results show that leveraging the visual features from the x-rays associated with the radiology reports yields higher evaluation metrics than a text-only baseline system. These improvements are reported according to the automatic evaluation metrics METEOR, BLEU, and ROUGE. Our experiments can be fully replicated at the following address: https://github.com/jbdel/vilmedic.
CITATION STYLE
Delbrouck, J. B., Zhang, H., & Rubin, D. L. (2021). QIAI at MEDIQA 2021: Multimodal Radiology Report Summarization. In Proceedings of the 20th Workshop on Biomedical Language Processing, BioNLP 2021 (pp. 285–290). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.bionlp-1.33