WBI at MEDIQA 2021: Summarizing Consumer Health Questions with Generative Transformers


Abstract

This paper describes our contribution to the MEDIQA-2021 Task 1 question summarization competition. We model the task as a conditional generation problem. Our concrete pipeline fine-tunes the large pretrained generative transformers PEGASUS (Zhang et al., 2020a) and BART (Lewis et al., 2020). We used the resulting models as strong baselines and experimented with (i) integrating structured knowledge via entity embeddings, (ii) ensembling multiple generative models with the generator-discriminator framework and (iii) disentangling summarization and interrogative prediction to achieve further improvements. Our best performing model, a fine-tuned vanilla PEGASUS, reached second place in the competition with a ROUGE-2 F1 score of 15.99. We observed that all of our additional measures hurt performance (by up to 5.2 pp) on the official test set. A post-hoc experimental analysis using a larger validation set indicates slight performance improvements through the proposed extensions. However, further analysis is needed to provide stronger evidence.
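The competition metric reported above, ROUGE-2 F1, measures bigram overlap between a generated summary and a reference summary. As a rough illustration (a minimal sketch of the core idea, not the official ROUGE implementation, which additionally applies stemming and other preprocessing), it can be computed like this:

```python
from collections import Counter

def rouge2_f1(candidate: str, reference: str) -> float:
    """Simplified ROUGE-2 F1: harmonic mean of bigram precision and recall."""
    def bigrams(text):
        tokens = text.lower().split()
        return Counter(zip(tokens, tokens[1:]))

    cand, ref = bigrams(candidate), bigrams(reference)
    overlap = sum((cand & ref).values())  # clipped bigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical example pair (not from the MEDIQA data):
print(round(rouge2_f1("what are the side effects of lisinopril",
                      "what are the known side effects of lisinopril"), 3))  # → 0.769
```

A score of 15.99 on the official test set corresponds to 0.1599 on this 0-to-1 scale, reflecting how strict exact bigram matching is for abstractive summaries.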

Citation (APA)
Sänger, M., Weber, L., & Leser, U. (2021). WBI at MEDIQA 2021: Summarizing Consumer Health Questions with Generative Transformers. In Proceedings of the 20th Workshop on Biomedical Language Processing, BioNLP 2021 (pp. 86–95). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.bionlp-1.9
