Visual urban perception with deep semantic-aware network

Abstract

Visual urban perception has received a lot of attention for its importance in many fields. In this paper, we transform it into a ranking task via pairwise comparison of images and use deep neural networks to predict a specific perceptual score for each image. In contrast to existing research, we highlight the important role of object semantic information in visual urban perception through the attribute activation maps of images. Based on this idea, our method combines object semantic information with the generic features of images. In addition, we use visualization techniques to extract the correlations between objects and visual perception attributes from the well-trained neural network, which further supports our conjecture. Experimental results on a large-scale dataset validate the effectiveness of our method.
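The pairwise-comparison idea in the abstract can be illustrated with a generic RankNet-style loss: given the predicted scores of two images, the model is trained so that the image judged more favorably in a pairwise comparison receives the higher score. The sketch below is not the authors' implementation; it is a minimal NumPy illustration, and the function names and scores are hypothetical.

```python
import numpy as np

def pairwise_rank_loss(s_i, s_j, y):
    """RankNet-style pairwise loss for learning perceptual scores.

    s_i, s_j : predicted scores of images i and j
    y        : 1.0 if image i won the pairwise comparison, 0.0 otherwise
    Returns the cross-entropy between the label and the probability
    sigmoid(s_i - s_j) that image i outranks image j.
    """
    p = 1.0 / (1.0 + np.exp(-(s_i - s_j)))  # P(image i ranks above image j)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

# Hypothetical scores: the network rates image A above image B,
# and A indeed won the comparison, so the loss is small.
loss_correct = pairwise_rank_loss(2.0, 0.5, 1.0)
# If the label were reversed, the same scores would incur a larger loss.
loss_wrong = pairwise_rank_loss(2.0, 0.5, 0.0)
```

Averaging this loss over many labeled image pairs yields a network whose scalar output can be read directly as a perceptual score, which is how a pairwise ranking task is typically converted into per-image score prediction.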

Citation (APA)

Xu, Y., Yang, Q., Cui, C., Shi, C., Song, G., Han, X., & Yin, Y. (2019). Visual urban perception with deep semantic-aware network. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11296 LNCS, pp. 28–40). Springer Verlag. https://doi.org/10.1007/978-3-030-05716-9_3