RAINPROOF: An umbrella to shield text generators from Out-of-Distribution data

5 citations · 9 Mendeley readers

Abstract

Implementing effective control mechanisms to ensure the proper functioning and security of deployed NLP models, from translation systems to chatbots, is essential. A key ingredient for safe system behaviour is Out-Of-Distribution (OOD) detection, which aims to detect whether an input sample is statistically far from the training distribution. Although OOD detection is widely studied for classification tasks, most methods rely on hidden features output by the encoder. In this work, we instead leverage soft probabilities in a black-box framework, i.e. we can access the soft predictions but not the internal states of the model. Our contributions are: (i) RAINPROOF, a Relative informAItioN Projection OOD detection framework; and (ii) a more operational evaluation setting for OOD detection. Surprisingly, we find that OOD detection is not necessarily aligned with task-specific measures: an OOD detector may filter out samples the model processes well while keeping samples it does not, leading to weaker performance. Our results show that RAINPROOF yields OOD detection methods better aligned with task-specific performance metrics than traditional OOD detectors.
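RAINPROOF's actual information-projection scores are defined in the paper itself; purely as a minimal sketch of what "black-box, soft-probability OOD detection" means in practice, the following illustrates a simple entropy baseline over a model's softmax outputs (all function names are hypothetical, and this is not the paper's method):

```python
import numpy as np

def sequence_entropy_score(token_probs):
    """Average per-token Shannon entropy of the model's softmax outputs.

    token_probs: array of shape (seq_len, vocab_size), each row a softmax
    distribution over the vocabulary. A higher average entropy is a common
    black-box signal that the input lies far from the training distribution.
    """
    p = np.clip(np.asarray(token_probs, dtype=float), 1e-12, 1.0)
    token_entropies = -(p * np.log(p)).sum(axis=1)
    return token_entropies.mean()

def is_ood(token_probs, threshold):
    # Flag the input as OOD when its entropy score exceeds a threshold
    # calibrated on held-out in-distribution data (threshold is a free choice).
    return sequence_entropy_score(token_probs) > threshold

# Toy example: a confident (peaked) vs. an uncertain (near-uniform) generator.
confident = np.array([[0.97, 0.01, 0.01, 0.01]] * 5)
uncertain = np.array([[0.25, 0.25, 0.25, 0.25]] * 5)
print(sequence_entropy_score(confident) < sequence_entropy_score(uncertain))  # True
```

Note that such a detector needs only the soft predictions, which is exactly the black-box access assumption of the abstract; it never inspects encoder hidden states.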

Citation (APA)

Darrin, M., Piantanida, P., & Colombo, P. (2023). RAINPROOF: An umbrella to shield text generators from Out-of-Distribution data. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 5831–5857). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.357
