A Visual Residual Perception Optimized Network for Blind Image Quality Assessment


Abstract

Blind image quality assessment (BIQA) is a fundamental yet challenging problem in image processing. Existing BIQA models suffer from two problems: 1) due to the scarcity of quality-labeled images, most methods generalize poorly across distortion categories; 2) the influence of human visual characteristics on image content is not taken into account. In this paper, we propose a visual residual perception optimized network (VRPON) that effectively addresses these problems. The proposed method separates the training of BIQA into two stages: 1) a distortion degree identification network and 2) an image quality prediction network. In the first stage, the spatial and temporal features of image sequences are extracted by a CNN and an LSTM, respectively, and used to estimate the degree of image distortion. In the second stage, the model learns to predict the quality scores of image patches using the outputs of the first stage. Finally, a pooling strategy guided by human visual saliency aggregates the patch scores into a quality score for the whole image. Experimental results show that the proposed VRPON not only outperforms state-of-the-art methods on synthetically distorted images (LIVE, TID2013, CSIQ), but is also more robust to authentic distortions (LIVE Challenge).
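The final step described above, saliency-guided pooling of patch scores, can be sketched as a weighted average in which each patch's contribution is proportional to its saliency. This is only an illustrative sketch; the paper's exact pooling formula is not given in the abstract, and the `saliency_weighted_pooling` function and its inputs here are hypothetical.

```python
import numpy as np

def saliency_weighted_pooling(patch_scores, saliency):
    """Aggregate patch-level quality scores into one image-level score.

    Illustrative assumption: each patch's weight is its normalized
    visual-saliency value, so salient regions dominate the final score.
    """
    patch_scores = np.asarray(patch_scores, dtype=float)
    weights = np.asarray(saliency, dtype=float)
    weights = weights / weights.sum()  # normalize saliency values to weights
    return float(np.dot(weights, patch_scores))

# A highly salient, high-quality patch outweighs a low-saliency, low-quality one
score = saliency_weighted_pooling([80.0, 40.0], [0.9, 0.1])  # → 76.0
```

A plain (unweighted) mean would treat every patch equally; weighting by saliency reflects the human-visual-characteristics argument the abstract makes, since perceived quality is driven mostly by the regions viewers attend to.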

Citation (APA)
He, L., Zhong, Y., Lu, W., & Gao, X. (2019). A Visual Residual Perception Optimized Network for Blind Image Quality Assessment. IEEE Access, 7, 176087–176098. https://doi.org/10.1109/ACCESS.2019.2957292
