ShapeShop: Towards understanding deep learning representations via interactive experimentation


Abstract

Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as "black boxes" because their internal complexity is hard to understand. Little research focuses on helping people explore and understand the relationship between a user's data and the representations learned by deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models, helping them explore the robustness of image classifiers.

Citation (APA)

Hohman, F., Hodas, N., & Chau, D. H. (2017). ShapeShop: Towards understanding deep learning representations via interactive experimentation. In Conference on Human Factors in Computing Systems - Proceedings (Vol. Part F127655, pp. 1694–1699). Association for Computing Machinery. https://doi.org/10.1145/3027063.3053103
