Visual saliency based blind image quality assessment via convolutional neural network

Abstract

Image quality assessment (IQA), one of the fundamental techniques in image processing, is widely used in computer vision and image processing applications. In this paper, we propose a novel visual saliency based blind IQA model, which combines properties of the human visual system (HVS) with features extracted by a deep convolutional neural network (CNN). The proposed model is entirely data-driven and uses no hand-crafted features. Instead of feeding the model patches selected randomly from images, we introduce a salient object detection algorithm to compute regions of interest, which serve as the training data. Experimental results on the LIVE and CSIQ databases demonstrate that our approach outperforms the state-of-the-art methods compared.
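The abstract describes selecting training patches from salient regions rather than at random. A minimal sketch of that idea is shown below, assuming the saliency map is already computed as a 2D array in [0, 1] (the paper's actual salient object detection algorithm is not specified here, and the function name, patch size, and threshold are illustrative assumptions):

```python
import numpy as np

def select_salient_patches(image, saliency, patch=8, thresh=0.5):
    """Collect non-overlapping patches whose mean saliency exceeds `thresh`.

    Hypothetical sketch of saliency-guided patch selection; `saliency`
    is assumed to be a 2D map in [0, 1] of the same size as `image`.
    """
    h, w = saliency.shape
    patches = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            # keep the patch only if its region is, on average, salient
            if saliency[y:y + patch, x:x + patch].mean() > thresh:
                patches.append(image[y:y + patch, x:x + patch])
    return np.stack(patches) if patches else np.empty((0, patch, patch))

# toy example: saliency is high only in the top-left quadrant,
# so only that one 8x8 patch is selected for training
img = np.arange(16 * 16, dtype=float).reshape(16, 16)
sal = np.zeros((16, 16))
sal[:8, :8] = 1.0
train = select_salient_patches(img, sal, patch=8, thresh=0.5)
```

In practice the selected patches would then be fed to the CNN in place of randomly cropped ones.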

Citation (APA)

Li, J., & Zhou, Y. (2017). Visual saliency based blind image quality assessment via convolutional neural network. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10639 LNCS, pp. 550–557). Springer Verlag. https://doi.org/10.1007/978-3-319-70136-3_58
