REACT2023: The First Multiple Appropriate Facial Reaction Generation Challenge

Abstract

The Multiple Appropriate Facial Reaction Generation Challenge (REACT2023) is the first competition event focused on evaluating multimedia processing and machine learning techniques for generating human-appropriate facial reactions in various dyadic interaction scenarios, with all participants competing strictly under the same conditions. The goal of the challenge is to provide the first benchmark test set for multi-modal information processing and to foster collaboration among the audio, visual, and audio-visual behaviour analysis and behaviour generation (a.k.a. generative AI) communities, in order to compare the relative merits of approaches to automatic appropriate facial reaction generation under different spontaneous dyadic interaction conditions. This paper presents: (i) the novelties, contributions and guidelines of the REACT2023 challenge; (ii) the dataset utilized in the challenge; and (iii) the performance of the baseline systems on the two proposed sub-challenges: Offline Multiple Appropriate Facial Reaction Generation and Online Multiple Appropriate Facial Reaction Generation. The challenge baseline code is publicly available at https://github.com/reactmultimodalchallenge/baseline-react2023.

Cite (APA)

Song, S., Spitale, M., Luo, C., Barquero, G., Palmero, C., Escalera, S., … Gunes, H. (2023). REACT2023: The First Multiple Appropriate Facial Reaction Generation Challenge. In MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia (pp. 9620–9624). Association for Computing Machinery, Inc. https://doi.org/10.1145/3581783.3612832
