A gradually soft multi-task and data-augmented approach to medical question understanding

27 citations · 66 readers (Mendeley)

Abstract

Users of medical question answering systems often submit long and detailed questions, making it hard to achieve high recall in answer retrieval. To alleviate this problem, we propose a novel Multi-Task Learning (MTL) method with data augmentation for medical question understanding. We first establish an equivalence between the tasks of question summarization and Recognizing Question Entailment (RQE) using their definitions in the medical domain. Based on this equivalence, we propose a data augmentation algorithm to use just one dataset to optimize for both tasks, with a weighted MTL loss. We introduce gradually soft parameter-sharing: a constraint for decoder parameters to be close, that is gradually loosened as we move to the highest layer. We show through ablation studies that our proposed novelties improve performance. Our method outperforms existing MTL methods across 4 datasets of medical question pairs, in ROUGE scores, RQE accuracy and human evaluation. Finally, we show that our method fares better than single-task learning under 4 low-resource settings.
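The abstract describes a weighted MTL loss plus a "gradually soft" parameter-sharing constraint that keeps the two task-specific decoders close, with the constraint loosened layer by layer toward the top. The sketch below illustrates what such a layer-wise penalty could look like; it assumes PyTorch, a linear decay schedule, and placeholder names (gradually_soft_penalty, alpha, loss_summ, loss_rqe), none of which are taken from the paper's implementation.

```python
import torch
import torch.nn as nn

def gradually_soft_penalty(decoder_summ: nn.ModuleList,
                           decoder_rqe: nn.ModuleList,
                           base_weight: float = 1.0) -> torch.Tensor:
    """Layer-wise L2 distance between two task-specific decoders.

    Lower layers get the largest weight (kept closest to each other);
    the weight shrinks toward the top layer, so the sharing constraint
    is gradually loosened. The linear decay used here is an
    illustrative assumption, not the authors' exact schedule.
    """
    num_layers = len(decoder_summ)
    penalty = torch.zeros(())
    for i, (layer_s, layer_r) in enumerate(zip(decoder_summ, decoder_rqe)):
        weight = base_weight * (num_layers - i) / num_layers  # loosens with depth
        for p_s, p_r in zip(layer_s.parameters(), layer_r.parameters()):
            penalty = penalty + weight * torch.sum((p_s - p_r) ** 2)
    return penalty

# Hypothetical weighted MTL objective combining both tasks
# (alpha, loss_summ, loss_rqe, dec_summ, dec_rqe are placeholders):
# total_loss = alpha * loss_summ + (1 - alpha) * loss_rqe \
#              + gradually_soft_penalty(dec_summ, dec_rqe)
```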

Citation (APA)

Mrini, K., Dernoncourt, F., Yoon, S., Bui, T., Chang, W., Farcas, E., & Nakashole, N. (2021). A gradually soft multi-task and data-augmented approach to medical question understanding. In ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference (pp. 1505–1515). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.acl-long.119
