Multi-view emotional expressions dataset using 2D pose estimation



Abstract

Human body expressions convey emotional shifts and action intentions and, in some cases, are even more effective than other emotional modalities. Although many body-expression datasets incorporating motion capture are available, widely distributed datasets of naturalistic body expressions derived from 2D video remain scarce. In this paper, we therefore report the multi-view emotional expressions dataset (MEED), built using 2D pose estimation. Twenty-two actors performed six emotional (anger, disgust, fear, happiness, sadness, surprise) and neutral body movements from three viewpoints (left, front, right), yielding a total of 4102 captured videos. MEED consists of the corresponding pose estimation results (i.e., 397,809 PNG files and 397,809 JSON files) and exceeds 150 GB in size. We believe this dataset will benefit research in various fields, including affective computing, human-computer interaction, social neuroscience, and psychiatry.
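Since the pose-estimation results are distributed as paired PNG and JSON files, a consumer of the dataset will typically start by parsing the per-frame JSON keypoints. The exact schema is not given in the abstract; the sketch below assumes an OpenPose-style layout (a top-level `people` list, each entry holding a flat `pose_keypoints_2d` array of x, y, confidence values), which is a common convention for 2D pose output. The field names here are assumptions and may need adjusting to the actual MEED files.

```python
import json

def load_pose_keypoints(json_text):
    """Parse an OpenPose-style JSON string into per-person (x, y, conf) triples.

    Assumed schema (not confirmed by the abstract): a top-level "people" list,
    each entry with a flat "pose_keypoints_2d" array [x0, y0, c0, x1, y1, c1, ...].
    """
    data = json.loads(json_text)
    people = []
    for person in data.get("people", []):
        flat = person.get("pose_keypoints_2d", [])
        # Regroup the flat array into (x, y, confidence) triples.
        keypoints = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
        people.append(keypoints)
    return people

# Hypothetical single-person frame with two keypoints.
sample = json.dumps(
    {"people": [{"pose_keypoints_2d": [12.0, 34.0, 0.9, 56.0, 78.0, 0.8]}]}
)
print(load_pose_keypoints(sample))  # → [[(12.0, 34.0, 0.9), (56.0, 78.0, 0.8)]]
```

In practice, one would iterate this over the JSON file for each video frame and stack the triples into an array for downstream emotion-recognition models.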

Cite

CITATION STYLE

APA

Zhang, M., Zhou, Y., Xu, X., Ren, Z., Zhang, Y., Liu, S., & Luo, W. (2023). Multi-view emotional expressions dataset using 2D pose estimation. Scientific Data, 10(1). https://doi.org/10.1038/s41597-023-02551-y
