A Deep Blind Image Quality Assessment with Visual Importance Based Patch Score


Abstract

Convolutional neural network (CNN)-based no-reference image quality assessment (NR-IQA) suffers from insufficient training data. The conventional solution is to split each training image into patches and assign every patch the quality score of the whole image, but this assignment is not well aligned with the human visual system (HVS). To address this problem, we propose a patch quality assignment strategy: a weighting map describes the degree of visual importance of each distorted pixel, and the weighting map is integrated with the feature map to pool a quality score for each patch. With these patch-level qualities, a CNN-based NR-IQA model is trained. Experimental results demonstrate that the proposed method, named blind image quality metric with improved patch score (BIQIPS), improves performance on most distortion types, especially local distortions, and achieves state-of-the-art prediction accuracy among NR-IQA metrics.
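The abstract's pooling idea — weighting each pixel's distortion by its visual importance before averaging within a patch — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation; the function name, patch size, and the use of a generic pixel-wise error map are all assumptions.

```python
import numpy as np

def patch_quality_scores(error_map, weight_map, patch_size=32):
    """Pool per-patch quality scores from a pixel-wise error map,
    weighted by a visual-importance map (hypothetical sketch; the
    paper's exact feature/weighting maps may differ)."""
    h, w = error_map.shape
    scores = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            e = error_map[y:y + patch_size, x:x + patch_size]
            m = weight_map[y:y + patch_size, x:x + patch_size]
            # Importance-weighted pooling: pixels the HVS attends to
            # contribute more to the patch's quality score.
            scores.append((e * m).sum() / (m.sum() + 1e-8))
    return np.array(scores)
```

A patch containing a visually salient local distortion thus receives a score dominated by that region, rather than the image-level score used in conventional patch-based training.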

Citation (APA)

Lv, Z., Wang, X., Wang, K., & Liang, X. (2019). A Deep Blind Image Quality Assessment with Visual Importance Based Patch Score. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11362 LNCS, pp. 147–162). Springer Verlag. https://doi.org/10.1007/978-3-030-20890-5_10
