No-Reference Image Quality Assessment Based on Multi-Task Generative Adversarial Network


This article is free to access.

Abstract

Since human observers are the ultimate receivers of an image, most image quality assessment (IQA) methods are based on analyzing the properties and mechanisms of the human visual system. However, because no undistorted image is available as a reference, the accuracy of no-reference IQA (NR-IQA) cannot yet compete with that of full-reference IQA (FR-IQA). To narrow the performance gap between FR-IQA and NR-IQA methods, we propose an NR-IQA method based on a multi-task generative adversarial network, which attempts to restore dependable hallucinated images to compensate for the missing reference images. The generator outputs two tasks, hallucinated images and quality maps, which are combined with a task-specific loss to improve the reliability of the hallucinated images. In addition, two discriminator networks are used to distinguish, respectively, pairs of undistorted and hallucinated images, and pairs of quality maps and structural similarity index measure (SSIM) maps. Finally, the hallucinated and distorted images are fed into the IQA network, and quality scores are predicted from the differences between them. The superiority of the proposed method is verified by experiments on the LIVE, TID2008, and TID2013 datasets.
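The SSIM maps that serve as ground truth for the second discriminator can be computed per pixel from local statistics of the distorted and reference images. The following is a minimal NumPy sketch of that computation; the window size, valid-only windowing, and constants follow the standard SSIM formulation, not details given in this abstract:

```python
import numpy as np

def _local_mean(x, k):
    """Mean over all k x k windows (valid positions only)."""
    windows = np.lib.stride_tricks.sliding_window_view(x, (k, k))
    return windows.mean(axis=(-2, -1))

def ssim_map(x, y, k=7, dynamic_range=255.0):
    """Per-pixel SSIM map between two grayscale images.

    Uses the standard SSIM constants C1 = (0.01 L)^2, C2 = (0.03 L)^2
    and uniform k x k windows (a Gaussian window is also common).
    """
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * dynamic_range) ** 2
    c2 = (0.03 * dynamic_range) ** 2

    mu_x, mu_y = _local_mean(x, k), _local_mean(y, k)
    # Local (co)variances via E[xy] - E[x]E[y].
    var_x = _local_mean(x * x, k) - mu_x ** 2
    var_y = _local_mean(y * y, k) - mu_y ** 2
    cov_xy = _local_mean(x * y, k) - mu_x * mu_y

    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den
```

For identical inputs the map is 1.0 everywhere; distortion lowers the local values, which is what makes such maps usable as pixel-wise quality targets.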

Citation (APA)
Ma, Y., Cai, X., Sun, F., & Hao, S. (2019). No-Reference Image Quality Assessment Based on Multi-Task Generative Adversarial Network. IEEE Access, 7, 146893–146902. https://doi.org/10.1109/ACCESS.2019.2942625
