Interpretable Neuron Structuring with Graph Spectral Regularization

Abstract

While neural networks are powerful approximators used to classify or embed data into lower-dimensional spaces, they are often regarded as black boxes with uninterpretable features. Here we propose Graph Spectral Regularization for making hidden layers more interpretable without significantly impacting performance on the primary task. Taking inspiration from the spatial organization and localization of neuron activations in biological networks, we use a graph Laplacian penalty to structure the activations within a layer. This penalty encourages activations to be smooth either on a predetermined graph or on a feature-space graph learned from the data via co-activations of a hidden layer of the neural network. We show numerous uses for this additional structure, including cluster indication and visualization in biological and image data sets.
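The penalty described in the abstract is the standard graph Laplacian quadratic form applied to hidden activations. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the function names, the `alpha` weight, and the use of thresholded batch correlations as the learned co-activation graph are all illustrative assumptions.

```python
import torch

def graph_laplacian(adj: torch.Tensor) -> torch.Tensor:
    """Unnormalized graph Laplacian L = D - W of a symmetric adjacency matrix."""
    return torch.diag(adj.sum(dim=1)) - adj

def spectral_penalty(hidden: torch.Tensor, laplacian: torch.Tensor) -> torch.Tensor:
    """Batch-summed quadratic form sum_b h_b^T L h_b.

    Low values mean each sample's activation pattern varies smoothly
    over the graph, i.e. connected neurons take similar values.
    """
    return torch.einsum("bi,ij,bj->", hidden, laplacian, hidden)

def coactivation_adjacency(hidden: torch.Tensor) -> torch.Tensor:
    """Illustrative feature-space graph from co-activations: positive
    correlations between hidden units across a batch, zero diagonal.
    (The paper's exact graph construction may differ.)"""
    adj = torch.corrcoef(hidden.T).clamp(min=0.0)
    return adj.fill_diagonal_(0.0)

# Usage sketch: `alpha` (hypothetical) trades the penalty off against the task loss.
# L = graph_laplacian(fixed_adjacency)          # predetermined graph, or
# L = graph_laplacian(coactivation_adjacency(hidden))  # learned feature-space graph
# loss = task_loss + alpha * spectral_penalty(hidden, L)
```

In either variant, the regularizer is simply added to the primary objective, so the trade-off between task performance and activation smoothness is controlled by a single scalar weight.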

Cite

APA

Tong, A., van Dijk, D., Stanley, J. S., Amodio, M., Yim, K., Muhle, R., … Krishnaswamy, S. (2020). Interpretable Neuron Structuring with Graph Spectral Regularization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12080 LNCS, pp. 509–521). Springer. https://doi.org/10.1007/978-3-030-44584-3_40
