Learning and Evaluating Human Preferences for Conversational Head Generation

Abstract

A reliable and comprehensive evaluation metric that aligns with manual preference assessments is crucial for the development of conversational head video synthesis methods. Existing quantitative evaluations often fail to capture the full complexity of human preference because they consider only a limited set of evaluation dimensions. Qualitative evaluations and user studies offer an alternative but are time-consuming and labor-intensive. This limitation hinders the advancement of conversational head generation algorithms and systems. In this paper, we propose a novel learning-based evaluation metric named Preference Score (PS) that fits human preference based on quantitative evaluations across different dimensions. PS can serve as a quantitative evaluation without the need for human annotation. Experimental results validate the superiority of Preference Score in aligning with human perception and demonstrate robustness and generalizability to unseen data. We expect this metric to facilitate new advances in conversational head generation. Project page: https://github.com/dc3ea9f/PreferenceScore.
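The abstract does not specify PS's architecture, but its core idea (learning a scalar preference score from multi-dimensional quantitative evaluations, supervised by human preference judgments) can be illustrated with a minimal sketch. The sketch below assumes a small MLP over per-dimension metric scores trained with a pairwise ranking loss; the model, dimension count, and loss choice are hypothetical illustrations, not the authors' implementation.

```python
# Minimal sketch of a learned preference metric. NOT the paper's actual
# method: it assumes PS maps a vector of per-dimension quantitative scores
# to one scalar, trained with a Bradley-Terry-style pairwise ranking loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreferenceScoreModel(nn.Module):
    """Maps K per-dimension quantitative scores (e.g., lip sync,
    identity preservation, video quality) to one scalar preference."""
    def __init__(self, num_dims: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_dims, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, scores: torch.Tensor) -> torch.Tensor:
        # scores: (batch, num_dims) -> (batch,) scalar preference score
        return self.net(scores).squeeze(-1)

def pairwise_ranking_loss(ps_preferred: torch.Tensor,
                          ps_other: torch.Tensor) -> torch.Tensor:
    # Push the human-preferred video's score above the other video's score.
    return -F.logsigmoid(ps_preferred - ps_other).mean()

# Toy training step: 5 hypothetical evaluation dimensions; in each pair,
# video `a` was preferred by human annotators over video `b`.
model = PreferenceScoreModel(num_dims=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
a, b = torch.rand(32, 5), torch.rand(32, 5)
opt.zero_grad()
loss = pairwise_ranking_loss(model(a), model(b))
loss.backward()
opt.step()
```

Once trained, such a model scores new videos from their automatic metric vectors alone, which matches the abstract's claim that PS requires no further human annotation at evaluation time.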

Citation (APA)

Zhou, M., Bai, Y., Zhang, W., Yao, T., Zhao, T., & Mei, T. (2023). Learning and Evaluating Human Preferences for Conversational Head Generation. In MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia (pp. 9615–9619). Association for Computing Machinery, Inc. https://doi.org/10.1145/3581783.3612831
