Pentagon at MEDIQA 2019: Multi-task learning for filtering and re-ranking answers using language inference and question entailment


Abstract

Parallel deep learning architectures like fine-tuned BERT and MT-DNN have quickly become the state of the art, surpassing previous deep and shallow learning methods by a large margin. More recently, models pre-trained on large related datasets have performed well on many downstream tasks after only fine-tuning on domain-specific datasets (akin to transfer learning). However, applying these powerful models to nontrivial tasks, such as ranking and long-document classification, remains a challenge due to the input-size limitations of parallel architectures and extremely small datasets (insufficient for fine-tuning). In this work, we introduce an end-to-end system, trained in a multi-task setting, to filter and re-rank answers in the medical domain. We use task-specific pre-trained models as deep feature extractors. Our model achieves the highest Spearman's rho and mean reciprocal rank of 0.338 and 0.9622, respectively, on the ACL-BioNLP workshop MEDIQA question answering shared task.
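
The setup the abstract describes can be pictured as a shared pre-trained encoder used as a deep feature extractor, feeding two jointly trained heads: one that filters candidate answers (an entailment-style classifier) and one that scores them for re-ranking. The sketch below is illustrative only, not the authors' implementation; the encoder name, head sizes, loss weighting, and all identifiers are assumptions.

    # Minimal sketch (NOT the authors' code): shared pre-trained encoder
    # as feature extractor + two task heads trained jointly. Encoder name,
    # head sizes, and the 0.5 loss weight are illustrative assumptions.
    import torch
    import torch.nn as nn
    from transformers import AutoModel, AutoTokenizer

    class MultiTaskAnswerRanker(nn.Module):
        def __init__(self, encoder_name="bert-base-uncased"):
            super().__init__()
            # Shared pre-trained encoder acting as the deep feature extractor.
            self.encoder = AutoModel.from_pretrained(encoder_name)
            hidden = self.encoder.config.hidden_size
            # Head 1: binary filter -- does the answer address the question?
            self.filter_head = nn.Linear(hidden, 2)
            # Head 2: scalar relevance score used to re-rank surviving answers.
            self.rank_head = nn.Linear(hidden, 1)

        def forward(self, input_ids, attention_mask):
            out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
            cls = out.last_hidden_state[:, 0]  # [CLS] features for the pair
            return self.filter_head(cls), self.rank_head(cls).squeeze(-1)

    # Joint multi-task loss: cross-entropy for filtering plus a pairwise
    # margin loss for ranking (pos_idx/neg_idx index relevant vs. irrelevant
    # answers within the batch).
    def multitask_loss(filter_logits, rank_scores, labels, pos_idx, neg_idx):
        filter_loss = nn.functional.cross_entropy(filter_logits, labels)
        rank_loss = nn.functional.margin_ranking_loss(
            rank_scores[pos_idx], rank_scores[neg_idx],
            torch.ones(len(pos_idx)), margin=1.0)
        return filter_loss + 0.5 * rank_loss

    # Usage: encode question-answer pairs jointly, then filter and rank.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    batch = tokenizer(
        ["What causes iron deficiency?"] * 2,
        ["Iron deficiency is usually caused by blood loss.",
         "Vitamin C improves iron absorption."],
        padding=True, truncation=True, return_tensors="pt")
    model = MultiTaskAnswerRanker()
    filter_logits, scores = model(batch["input_ids"], batch["attention_mask"])

Sharing one encoder across both objectives is what makes the setup multi-task: the filtering and ranking signals regularize the same deep features, which is helpful when each task's labeled data would be too small to fine-tune on alone.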

Cite (APA)

Pugaliya, H., Saxena, K., Garg, S., Shalini, S., Gupta, P., Nyberg, E., & Mitamura, T. (2019). Pentagon at MEDIQA 2019: Multi-task learning for filtering and re-ranking answers using language inference and question entailment. In BioNLP 2019 - SIGBioMed Workshop on Biomedical Natural Language Processing, Proceedings of the 18th BioNLP Workshop and Shared Task (pp. 389–398). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w19-5041
