Non-photorealistic videos are in growing demand with the rise of the metaverse, yet they have received little research attention. This work takes a step toward understanding how humans perceive non-photorealistic videos through eye fixations (i.e., saliency detection), which is critical for media production, artistic design, and game user experience. To address the lack of a suitable dataset for this line of research, we present NPF-200, the first large-scale multi-modal dataset of purely non-photorealistic videos with eye fixations. Our dataset has three characteristics: 1) it contains soundtracks, which vision and psychology studies show are essential; 2) it covers diverse semantic content with high-quality videos; 3) it exhibits rich motion both across and within videos. We conduct a series of analyses to gain deeper insights into this task and compare several state-of-the-art methods to explore the gap between natural images and non-photorealistic data. Additionally, since the human attention system tends to extract visual and audio features at different frequencies, we propose a universal frequency-aware multi-modal non-photorealistic saliency detection model called NPSNet, which achieves state-of-the-art performance on our task. The results uncover strengths and weaknesses of multi-modal network design and multi-domain training, opening up promising directions for future work. Our dataset and code can be found at https://github.com/Yangziyu/NPF200
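The frequency-aware multi-modal idea can be made concrete with a minimal sketch. Assuming a PyTorch setting, the module below decomposes projected visual features into low- and high-frequency bands and lets audio features gate how the bands are fused; the class name `FrequencyAwareFusion`, the dimensions, and the pooling-based band split are illustrative assumptions, not the authors' actual NPSNet architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrequencyAwareFusion(nn.Module):
    """Hypothetical sketch of frequency-aware audio-visual fusion.

    Illustrates the idea of splitting a modality into low- and
    high-frequency bands before fusing; NOT the paper's NPSNet.
    """

    def __init__(self, vis_dim=256, aud_dim=128, out_dim=256):
        super().__init__()
        self.vis_proj = nn.Conv2d(vis_dim, out_dim, kernel_size=1)
        self.aud_proj = nn.Linear(aud_dim, out_dim)
        self.fuse = nn.Conv2d(2 * out_dim, out_dim, kernel_size=3, padding=1)

    def forward(self, vis_feat, aud_feat):
        # vis_feat: (B, C_v, H, W) frame features; aud_feat: (B, C_a) clip-level audio features
        v = self.vis_proj(vis_feat)
        # Low-frequency band: heavily pooled then upsampled map (coarse global structure).
        low = F.interpolate(F.adaptive_avg_pool2d(v, 4), size=v.shape[-2:],
                            mode="bilinear", align_corners=False)
        # High-frequency band: residual detail (edges, motion boundaries).
        high = v - low
        # Broadcast audio over the spatial grid and use it to gate the two bands.
        gate = torch.sigmoid(self.aud_proj(aud_feat))[:, :, None, None]
        fused = torch.cat([gate * low, (1 - gate) * high], dim=1)
        return self.fuse(fused)  # (B, out_dim, H, W) fused saliency features

if __name__ == "__main__":
    m = FrequencyAwareFusion()
    v = torch.randn(2, 256, 28, 28)
    a = torch.randn(2, 128)
    print(m(v, a).shape)  # torch.Size([2, 256, 28, 28])
```

A sigmoid gate is one simple way to let the audio stream decide how much coarse versus fine visual detail survives fusion; the real model may weight bands very differently.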
Yang, Z., Ren, S., Wu, Z., Zhao, N., Wang, J., Qin, J., & He, S. (2023). NPF-200: A Multi-Modal Eye Fixation Dataset and Method for Non-Photorealistic Videos. In MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia (pp. 2294–2304). Association for Computing Machinery, Inc. https://doi.org/10.1145/3581783.3611839