Understanding Naturalistic Facial Expressions with Deep Learning and Multimodal Large Language Models


Abstract

This paper provides a comprehensive overview of affective computing systems for facial expression recognition (FER) research in naturalistic contexts. The first section presents an updated account of user-friendly FER toolboxes incorporating state-of-the-art deep learning models and elaborates on their neural architectures, datasets, and performance across domains. These sophisticated FER toolboxes can robustly address a variety of challenges encountered in the wild, such as variations in illumination and head pose, which may otherwise impact recognition accuracy. The second section discusses multimodal large language models (MLLMs) and their potential applications in affective science. MLLMs exhibit human-level capabilities for FER and enable the quantification of various contextual variables to provide context-aware emotion inferences. These advancements have the potential to revolutionize current methodological approaches for studying the contextual influences on emotions, leading to the development of contextualized emotion models.

Citation (APA)
Bian, Y., Küster, D., Liu, H., & Krumhuber, E. G. (2024). Understanding naturalistic facial expressions with deep learning and multimodal large language models. Sensors, 24(1), 126. https://doi.org/10.3390/s24010126
