On visualizations in the role of universal data representation


Abstract

The deep learning revolution changed the world of machine learning and boosted the AI industry as such. In particular, the most effective models for image retrieval are based on deep convolutional neural networks (DCNN), outperforming the traditional "hand-engineered" models by far. However, this tremendous success came at a high cost: exhaustive gathering of labeled data, followed by the design and training of the DCNN models. In this paper, we outline a vision of a framework for instant transfer learning, where a generic pre-trained DCNN model serves as a universal feature extraction method for visualized unstructured data in many (non-visual) domains. The resulting deep feature descriptors are then usable in similarity search tasks (database queries, joins) and in other parts of the data processing pipeline. The envisioned framework should enable practitioners to instantly use DCNN-based data representations in their new domains without the costly training step. Moreover, the framework could provide the information visualization community with a versatile metric for measuring the quality of data visualizations, which is generally a difficult task.
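The pipeline sketched in the abstract (visualize non-visual data, extract features with a generic pre-trained DCNN, then run similarity queries over the descriptors) can be illustrated with a minimal sketch. The paper does not prescribe concrete components, so everything here is an assumption for illustration: the "visualization" is a simple binary raster of a time series, and the feature extractor is a block-pooling placeholder standing in for a pre-trained DCNN (e.g. an ImageNet model with its classifier head removed).

```python
import numpy as np

def rasterize_series(series, height=32, width=64):
    """Render a 1-D series as a binary raster image.
    A stand-in for a richer data visualization (e.g. a line chart)."""
    series = np.asarray(series, dtype=float)
    # resample the series to the target width
    xs = np.linspace(0, len(series) - 1, width)
    ys = np.interp(xs, np.arange(len(series)), series)
    # map values to row indices
    lo, hi = ys.min(), ys.max()
    if hi == lo:
        rows = np.zeros(width, dtype=int)
    else:
        rows = ((ys - lo) / (hi - lo) * (height - 1)).astype(int)
    img = np.zeros((height, width))
    img[rows, np.arange(width)] = 1.0
    return img

def extract_features(img, pool=4):
    """Placeholder feature extractor: block average pooling.
    In the envisioned framework this would be a generic pre-trained
    DCNN applied to the visualization image."""
    h, w = img.shape
    pooled = (img[:h - h % pool, :w - w % pool]
              .reshape(h // pool, pool, w // pool, pool)
              .mean(axis=(1, 3)))
    return pooled.ravel()

def cosine(a, b):
    """Cosine similarity between two feature descriptors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# similarity query: a shifted sine should match the query sine
# better than an unrelated linear ramp does
t = np.linspace(0, 4 * np.pi, 200)
query = extract_features(rasterize_series(np.sin(t)))
similar = extract_features(rasterize_series(np.sin(t + 0.1)))
different = extract_features(rasterize_series(np.linspace(0, 1, 200)))

print(cosine(query, similar) > cosine(query, different))
```

Swapping the placeholder extractor for an actual pre-trained DCNN would not change the structure of the pipeline; only `extract_features` is replaced, which is precisely the "instant transfer learning" point the abstract makes.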

Citation (APA)

Skopal, T. (2020). On visualizations in the role of universal data representation. In ICMR 2020 - Proceedings of the 2020 International Conference on Multimedia Retrieval (pp. 362–367). Association for Computing Machinery, Inc. https://doi.org/10.1145/3372278.3390743
