Abstract
We examine the performance and behavior of a deep neural network (DNN) using contrasting explanations generated from a semantically relevant latent space. We obtain this latent space by training a variational autoencoder (VAE) augmented with a metric learning loss on its latent representation. The properties of the VAE yield a smooth latent space supported by a simple density, while the metric learning term organizes the space in a semantically relevant way with respect to the target classes. In this space we can both linearly separate the classes and generate meaningful interpolations of contrasting data points across decision boundaries. This allows us to examine the DNN model beyond its performance on a test set, probing for potential biases and its sensitivity to perturbations of individual factors disentangled in the latent space.
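The sketch below illustrates the kind of training objective the abstract describes: a standard VAE loss (reconstruction plus KL term) augmented with a metric learning term on the latent space, and a latent interpolation between two contrasting inputs. This is a minimal illustration under assumptions, not the paper's implementation: the triplet margin loss, network sizes, and loss weights (beta, gamma) are placeholders chosen for clarity; the authors' exact metric learning loss and architecture may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetricVAE(nn.Module):
    """Toy VAE whose latent space is additionally shaped by a metric learning term."""
    def __init__(self, in_dim=784, hidden=256, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim), nn.Sigmoid())

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def loss_fn(model, x_anchor, x_pos, x_neg, beta=1.0, gamma=1.0):
    """ELBO terms plus a triplet term that pulls same-class latents together
    and pushes different-class latents apart (assumed stand-in for the
    paper's metric learning loss)."""
    recon, mu, logvar = model(x_anchor)
    recon_loss = F.binary_cross_entropy(recon, x_anchor, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    mu_pos, _ = model.encode(x_pos)   # same class as anchor
    mu_neg, _ = model.encode(x_neg)   # different class
    metric = F.triplet_margin_loss(mu, mu_pos, mu_neg, margin=1.0)
    return recon_loss + beta * kl + gamma * metric

def interpolate(model, x_a, x_b, steps=8):
    """Decode points along the line between the latent codes of two
    contrasting inputs, crossing the classifier's decision boundary."""
    with torch.no_grad():
        mu_a, _ = model.encode(x_a)
        mu_b, _ = model.encode(x_b)
        ts = torch.linspace(0, 1, steps).view(-1, 1, 1)
        z = (1 - ts) * mu_a + ts * mu_b   # (steps, batch, latent)
        return model.dec(z)
```

Decoded interpolations such as these can then be fed to the DNN under evaluation to observe where along the path its prediction flips, which is the sense in which the latent space supports contrasting explanations.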
Citation
van Doorenmalen, J., & Menkovski, V. (2020). Evaluation of CNN Performance in Semantically Relevant Latent Spaces. In Lecture Notes in Computer Science, vol. 12080, pp. 145–157. Springer. https://doi.org/10.1007/978-3-030-44584-3_12