Distortion and faults in machine learning software

Abstract

Machine learning software, deep neural network (DNN) software in particular, discerns valuable information from a large dataset so as to synthesize approximate input-output relations. The outcomes of such DNN programs depend on the quality of both the learning programs and the datasets. However, quality assurance of DNN software is difficult. The trained machine learning models, which define the functional behavior of the approximate relations, are unknown prior to their development, and validation is conducted indirectly in terms of prediction performance. This paper introduces a hypothesis that faults in DNN programs manifest themselves as distortions in trained machine learning models. Relative distortion degrees, measured with appropriate observer functions, may indicate that the programs have hidden faults. The proposal is demonstrated with cases based on the MNIST dataset.
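To illustrate the hypothesis, here is a minimal, hypothetical sketch (not the paper's actual method): a fault injected into a training program changes the trained model, and an assumed observer function, here mean prediction confidence, quantifies the resulting relative distortion against a reference model. A toy two-blob dataset stands in for MNIST, and `label_noise` stands in for a program fault.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs (a stand-in for a dataset such as MNIST).
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def train(X, y, lr=0.1, epochs=200, label_noise=0.0, seed=1):
    """Logistic regression by gradient descent. label_noise simulates a
    faulty learning program by corrupting a fraction of the labels."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    flip = rng.random(len(y)) < label_noise
    y[flip] = 1 - y[flip]
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def observer(w, X):
    """A hypothetical observer function: mean distance of the predicted
    probabilities from the decision boundary (prediction confidence)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return float(np.mean(np.abs(p - 0.5)))

w_ok = train(X, y)                    # presumed-correct training program
w_bad = train(X, y, label_noise=0.3)  # training program with an injected fault

# Relative distortion degree between the two trained models.
distortion = abs(observer(w_ok, X) - observer(w_bad, X)) / observer(w_ok, X)
print(f"relative distortion: {distortion:.3f}")
```

A nonzero relative distortion between the reference model and the model produced by the faulty program is the kind of signal the paper's hypothesis suggests could flag hidden faults.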

CITATION STYLE

APA

Nakajima, S. (2020). Distortion and faults in machine learning software. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12028 LNCS, pp. 29–41). Springer. https://doi.org/10.1007/978-3-030-41418-4_3
