Existing methods for testing DNNs solve the oracle problem by constraining the raw features (e.g., image pixel values) to lie within a small distance of a dataset example for which the desired DNN output is known. But this constraint limits the kinds of faults such approaches can detect. In this paper, we introduce a novel DNN testing method that can find faults other methods cannot. The crux is that, by leveraging generative machine learning, we can generate fresh test inputs that vary in their high-level features (for images, these include object shape, location, texture, and colour). We demonstrate that our approach detects both deliberately injected faults and new faults in state-of-the-art DNNs, and that in both cases existing methods are unable to find them.
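To make the idea concrete, here is a minimal sketch of how latent-space test generation might look, assuming a pretrained class-conditional generator so that the intended label of each fresh input is known by construction. The function and parameter names (`find_faults`, `generator`, `dnn_under_test`, `latent_dim`, `trials_per_class`) are illustrative assumptions, not the paper's actual implementation:

```python
# Illustrative sketch only: assumes a pretrained class-conditional
# generator G(z, y) -> image batch, so each generated input's intended
# label is known by construction, sidestepping the oracle problem
# without tying inputs to small pixel perturbations of dataset examples.
import torch

def find_faults(generator, dnn_under_test, num_classes, latent_dim,
                trials_per_class=100):
    faults = []
    for label in range(num_classes):
        # Fresh latent codes: each sample varies in high-level features
        # (object shape, location, texture, colour) rather than being a
        # small pixel-space perturbation of a known example.
        z = torch.randn(trials_per_class, latent_dim)
        y = torch.full((trials_per_class,), label, dtype=torch.long)
        with torch.no_grad():
            images = generator(z, y)                 # intended class: label
            preds = dnn_under_test(images).argmax(dim=1)
        # Any disagreement with the constructed label is a candidate fault.
        for i in torch.nonzero(preds != y).flatten():
            faults.append((z[i], label, int(preds[i])))
    return faults
```

In practice, a misclassified generated input would presumably still be validated (for example, by a human check that the generator really produced the intended class) before being reported as a genuine fault.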
Citation:
Dunn, I., Pouget, H., Kroening, D., & Melham, T. (2021). Exposing previously undetectable faults in deep neural networks. In ISSTA 2021 - Proceedings of the 30th ACM SIGSOFT International Symposium on Software Testing and Analysis (pp. 56–66). Association for Computing Machinery, Inc. https://doi.org/10.1145/3460319.3464801