Exposing previously undetectable faults in deep neural networks

17 citations · 15 Mendeley readers

Abstract

Existing methods for testing DNNs solve the oracle problem by constraining the raw features (e.g. image pixel values) to be within a small distance of a dataset example for which the desired DNN output is known. But this limits the kinds of faults these approaches are able to detect. In this paper, we introduce a novel DNN testing method that is able to find faults in DNNs that other methods cannot. The crux is that, by leveraging generative machine learning, we can generate fresh test inputs that vary in their high-level features (for images, these include object shape, location, texture, and colour). We demonstrate that our approach is capable of detecting deliberately injected faults as well as new faults in state-of-the-art DNNs, and that in both cases, existing methods are unable to find these faults.
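The core idea in the abstract — use a generative model to synthesize fresh inputs whose intended label is known by construction, then flag any input the DNN misclassifies — can be sketched in a toy form. This is a hypothetical illustration, not the paper's implementation: the `generator`, `model_under_test`, and the deliberately injected fault below are all stand-ins invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, label):
    # Stand-in for a trained conditional generative model G(z, y):
    # it renders a Gaussian blob whose position encodes the class label,
    # with the latent vector z varying a high-level feature (location).
    img = np.zeros((8, 8))
    cx, cy = (2, 2) if label == 0 else (5, 5)
    for i in range(8):
        for j in range(8):
            img[i, j] = np.exp(-((i - cx - z[0]) ** 2 + (j - cy - z[1]) ** 2))
    return img

def model_under_test(img):
    # Stand-in DNN with a deliberately injected fault: it decides by the
    # blob's position, so class-1 blobs shifted toward the upper-left
    # are misclassified as class 0.
    i, j = np.unravel_index(np.argmax(img), img.shape)
    return 0 if i + j < 8 else 1

def find_faults(n_tests=200):
    # Sample latent vectors, generate labeled inputs, and record every
    # input whose predicted class differs from its intended class.
    faults = []
    for _ in range(n_tests):
        label = int(rng.integers(0, 2))
        z = rng.normal(0.0, 1.5, size=2)  # vary high-level features via z
        img = generator(z, label)
        if model_under_test(img) != label:
            faults.append((z, label))
    return faults

faults = find_faults()
print(f"found {len(faults)} fault-revealing inputs")
```

Because each generated input carries its intended label, the oracle problem is solved without constraining the input to lie near a dataset example — which is what lets this style of testing reach faults that pixel-perturbation methods miss.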




Citation (APA)

Dunn, I., Pouget, H., Kroening, D., & Melham, T. (2021). Exposing previously undetectable faults in deep neural networks. In ISSTA 2021 - Proceedings of the 30th ACM SIGSOFT International Symposium on Software Testing and Analysis (pp. 56–66). Association for Computing Machinery, Inc. https://doi.org/10.1145/3460319.3464801

Readers' seniority: PhD / Postgrad / Masters / Doc 5 (83%), Lecturer / Post doc 1 (17%)

Readers' discipline: Computer Science 7 (78%), Agricultural and Biological Sciences 1 (11%), Business, Management and Accounting 1 (11%)
