ContextNet: representation and exploration for painting classification and retrieval in context

32 Citations · 17 Mendeley Readers
This article is free to access.
Abstract

In automatic art analysis, models that, in addition to the visual elements of an artwork, represent the relationships between its different artistic attributes can be highly informative. Such relationships, however, usually manifest in subtle ways and are extremely difficult to detect with standard convolutional neural networks. In this work, we propose to capture contextual artistic information from fine-art paintings with a dedicated ContextNet network. As context can be obtained from multiple sources, we explore two modalities of ContextNet: one based on multitask learning and another based on knowledge graphs. Once the contextual information is obtained, we use it to enhance visual representations computed with a neural network. In this way, we are able to (1) capture information about content and style with the visual representations and (2) encode relationships between different artistic attributes with the ContextNet. We evaluate our models on both painting classification and retrieval, and by visualising the resulting embeddings on a knowledge graph, we confirm that our models capture specific stylistic aspects present in the data.
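The abstract describes the approach only at a high level; the sketch below is a minimal PyTorch illustration of how the multitask variant could be wired, with a shared CNN backbone producing a visual embedding and per-attribute classification heads supplying the contextual one. The backbone choice, embedding sizes, and attribute names here are hypothetical assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models


class MultitaskContextNetSketch(nn.Module):
    """Illustrative multitask ContextNet-style model (hypothetical sizes and tasks)."""

    def __init__(self, num_classes_per_task, visual_dim=512, context_dim=128):
        super().__init__()
        backbone = models.resnet50(weights=None)  # pretrained weights would normally be used
        backbone.fc = nn.Identity()               # keep the 2048-d pooled features
        self.backbone = backbone
        self.visual_proj = nn.Linear(2048, visual_dim)    # content/style embedding
        self.context_proj = nn.Linear(2048, context_dim)  # attribute-relation embedding
        # one classifier per artistic attribute (multitask supervision)
        self.heads = nn.ModuleDict(
            {task: nn.Linear(context_dim, n) for task, n in num_classes_per_task.items()}
        )

    def forward(self, images):
        feats = self.backbone(images)                 # (B, 2048)
        visual = self.visual_proj(feats)
        context = self.context_proj(feats)
        logits = {task: head(context) for task, head in self.heads.items()}
        joint = torch.cat([visual, context], dim=1)   # fused embedding for classification/retrieval
        return joint, logits


# Example usage with made-up attribute vocabularies:
model = MultitaskContextNetSketch({"artist": 350, "type": 10, "school": 25, "timeframe": 18})
joint, logits = model(torch.randn(4, 3, 224, 224))
```

In this reading, the classification losses on the attribute heads are what inject contextual information into the context projection, while the fused embedding serves retrieval; the knowledge-graph variant mentioned in the abstract would replace the multitask heads with graph-derived node embeddings as the context source.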

Citation (APA)

Garcia, N., Renoust, B., & Nakashima, Y. (2020). ContextNet: representation and exploration for painting classification and retrieval in context. International Journal of Multimedia Information Retrieval, 9(1), 17–30. https://doi.org/10.1007/s13735-019-00189-4
