Uncertainty-Guided Testing and Robustness Enhancement for Deep Learning Systems

Abstract

Deep learning (DL) systems, though widely used, still suffer from quality and reliability issues. Researchers have devoted considerable effort to investigating these issues. One promising direction is to leverage uncertainty, an intrinsic characteristic of DL systems when making decisions, to better understand their erroneous behavior. DL system testing is an effective method to reveal potential defects before deployment into safety- and security-critical applications. Various techniques and criteria have been designed to generate defect triggers, i.e., adversarial examples (AEs). However, whether these test inputs achieve a full-spectrum examination of DL systems remains unknown, and the relation between AEs and DL uncertainty is still poorly understood. In this work, we first conduct an empirical study to uncover the characteristics of AEs from the perspective of uncertainty. Then, we propose a novel approach to generate inputs that are missed by existing techniques. Further, we investigate the usefulness and effectiveness of the generated data for DL robustness enhancement.
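The abstract leaves the uncertainty metrics unspecified; a common way to quantify a DL model's decision uncertainty (e.g., via Monte Carlo dropout) is to run several stochastic forward passes and score their disagreement. The sketch below is an illustration under that assumption, not the paper's actual method; `predictive_entropy` and `variation_ratio` are illustrative helper names, and the `probs` arrays stand in for real softmax outputs.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the mean class distribution over T stochastic passes.

    probs: array of shape (T, C) -- each row is a softmax over C classes
    from one stochastic forward pass (e.g., with dropout left enabled).
    Higher entropy indicates lower confidence on the input.
    """
    mean = probs.mean(axis=0)                          # average the T passes
    return float(-(mean * np.log(mean + 1e-12)).sum()) # avoid log(0)

def variation_ratio(probs):
    """Fraction of passes that disagree with the modal predicted label."""
    labels = probs.argmax(axis=1)
    _, counts = np.unique(labels, return_counts=True)
    return 1.0 - counts.max() / len(labels)

# Simulated outputs: a confident input vs. an uncertain one.
confident = np.tile([0.97, 0.02, 0.01], (10, 1))            # stable prediction
uncertain = np.array([[0.5, 0.4, 0.1], [0.3, 0.6, 0.1]] * 5)  # label flips

print(predictive_entropy(confident) < predictive_entropy(uncertain))  # True
print(variation_ratio(confident))   # 0.0 (all passes agree)
print(variation_ratio(uncertain))   # 0.5 (half the passes disagree)
```

Inputs that score high on such metrics are the kind of "missed" test inputs an uncertainty-guided generator would target, since low-uncertainty regions are already well covered by conventional AE techniques.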

Citation (APA)

Zhang, X. (2020). Uncertainty-Guided Testing and Robustness Enhancement for Deep Learning Systems. In Proceedings - 2020 ACM/IEEE 42nd International Conference on Software Engineering: Companion, ICSE-Companion 2020 (pp. 101–103). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1145/3377812.3382160
