Measuring and Mitigating Local Instability in Deep Neural Networks

Abstract

Deep Neural Networks (DNNs) are becoming integral components of real-world services relied upon by millions of users. Unfortunately, architects of these systems can find it difficult to ensure reliable performance, as irrelevant details like random initialization can unexpectedly change the outputs of a trained system with potentially disastrous consequences. We formulate the model stability problem by studying how the predictions of a model change, even when it is retrained on the same data, as a consequence of stochasticity in the training process. For Natural Language Understanding (NLU) tasks, we find instability in predictions for a significant fraction of queries. We define principled metrics, like per-sample "label entropy" across training runs or within a single training run, to quantify this phenomenon. Intriguingly, we find that unstable predictions do not appear at random, but rather appear to be clustered in data-specific ways. We study data-agnostic regularization methods to improve stability and propose new data-centric methods that exploit our local stability estimates. We find that our localized data-specific mitigation strategy dramatically outperforms data-agnostic methods, and comes within 90% of the gold standard, achieved by ensembling, at a fraction of the computational cost.
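
As a rough illustration of the kind of metric the abstract describes (a sketch, not the paper's exact formulation): per-sample label entropy can be computed from the labels that N independently retrained models assign to the same input. A sample predicted identically across runs has entropy 0; a sample whose prediction flips between runs has higher entropy. The function name label_entropy and the toy intent labels below are assumptions made for this example; numpy is used for convenience.

    from collections import Counter

    import numpy as np


    def label_entropy(predictions):
        """Entropy (in bits) of the label distribution for one sample
        across multiple retraining runs.

        predictions: list of predicted labels for this sample,
        one entry per training run.
        """
        n = len(predictions)
        counts = Counter(predictions)
        # Empirical probability of each label across runs.
        probs = np.array([c / n for c in counts.values()])
        # Shannon entropy; "+ 0.0" normalizes -0.0 to 0.0 for stable samples.
        return float(-(probs * np.log2(probs)).sum()) + 0.0


    # Hypothetical example: 10 retrainings of the same NLU model on the same data.
    print(label_entropy(["play_music"] * 10))                 # 0.0  (stable)
    print(label_entropy(["play_music"] * 6 + ["radio"] * 4))  # ~0.97 (unstable)

Under this reading, a data-centric mitigation strategy would target the high-entropy samples specifically rather than regularizing the whole training set uniformly.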

Citation (APA)

Datta, A., Nandi, S., Xu, J., Ver Steeg, G., Xie, H., Kumar, A., & Galstyan, A. (2023). Measuring and mitigating local instability in deep neural networks. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 2810–2823). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.176
