Meta-learning Pathologies from Radiology Reports using Variance Aware Prototypical Networks


Abstract

Large pretrained Transformer-based language models like BERT and GPT have changed the landscape of Natural Language Processing (NLP). However, fine-tuning such models still requires a large number of training examples for each target task, so annotating multiple datasets and training these models on various downstream tasks becomes time-consuming and expensive. In this work, we propose a simple extension of Prototypical Networks for few-shot text classification. Our main idea is to replace the class prototypes with Gaussians and to introduce a regularization term that encourages examples to cluster near the appropriate class centroids. Experimental results show that our method outperforms several strong baselines on 13 public and 4 internal datasets. Furthermore, we use the learned class distributions as a tool for detecting potential out-of-distribution (OOD) data points during deployment.
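The core idea above, replacing each class prototype with a Gaussian and scoring queries by density, can be sketched with a minimal example. This is an illustrative assumption-laden sketch (diagonal Gaussians over fixed embeddings, NumPy only), not the authors' implementation, which additionally trains the encoder with a clustering regularizer:

```python
import numpy as np

def fit_class_gaussians(embeddings, labels):
    """Fit a diagonal Gaussian (mean and variance) per class from support-set embeddings."""
    stats = {}
    for c in np.unique(labels):
        x = embeddings[labels == c]
        # Small epsilon keeps the variance strictly positive.
        stats[c] = (x.mean(axis=0), x.var(axis=0) + 1e-6)
    return stats

def gaussian_log_density(x, mean, var):
    """Log density of a diagonal Gaussian at x, up to an additive constant."""
    return -0.5 * np.sum((x - mean) ** 2 / var + np.log(var))

def classify(x, stats):
    """Assign x to the class whose Gaussian gives the highest log density."""
    return max(stats, key=lambda c: gaussian_log_density(x, *stats[c]))

def ood_score(x, stats):
    """Higher score = less well explained by any class, i.e. more likely OOD."""
    return -max(gaussian_log_density(x, *stats[c]) for c in stats)
```

At deployment, a query whose best class log density falls below a threshold can be flagged as out-of-distribution, mirroring the OOD-detection use of the class distributions described in the abstract.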

Citation (APA)

Sehanobish, A., Kannan, K., Abraham, N., Das, A., & Odry, B. (2022). Meta-learning Pathologies from Radiology Reports using Variance Aware Prototypical Networks. In EMNLP 2022 - Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track (pp. 342–357). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-industry.34
