DyCoDa: A Multi-modal Data Collection of Multi-user Remote Survival Game Recordings


Abstract

In this paper, we present a novel multi-modal multi-party conversation corpus called DyCoDa (Dynamic Conversational Dataset). It consists of intensive remote conversations among three interaction partners within a collaborative problem-solving scenario. The fundamental aim of building this corpus is to investigate how humans interact with each other online via a video conferencing tool in a cooperative setting, and which audio-visual cues are conveyed and perceived during this interaction. Apart from the high-quality audio and video recordings, depth and infrared data recorded with the Microsoft Azure Kinect are also provided. Furthermore, several self-assessment questionnaires were used to collect socio-demographic information as well as each participant's personality structure and individual team role. In total, 30 native German-speaking participants took part in the experiment, which was carried out at Magdeburg University. Overall, DyCoDa comprises 10 hours of recorded interactions and will be beneficial for researchers in the field of human-computer interaction.
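As an illustration of working with the Kinect streams mentioned in the abstract, the sketch below reads depth and infrared frames from a recording using the open-source pyk4a wrapper for the Azure Kinect SDK. This is an assumption-laden example: the paper does not state the distribution format of the DyCoDa recordings, and the file name dycoda_session01.mkv is hypothetical; the code simply assumes standard Azure Kinect .mkv recordings.

    from pyk4a import PyK4APlayback

    # Hypothetical example: the file name and the .mkv format are
    # assumptions, not details confirmed by the paper.
    playback = PyK4APlayback("dycoda_session01.mkv")
    playback.open()
    try:
        while True:
            capture = playback.get_next_capture()
            if capture.depth is not None:
                print("depth frame:", capture.depth.shape)     # 16-bit depth map (mm)
            if capture.ir is not None:
                print("infrared frame:", capture.ir.shape)     # 16-bit IR intensity image
    except EOFError:
        # pyk4a signals the end of the recording by raising EOFError
        pass
    finally:
        playback.close()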

Cite

APA

Dresvyanskiy, D., Sinha, Y., Busch, M., Siegert, I., Karpov, A., & Minker, W. (2022). DyCoDa: A Multi-modal Data Collection of Multi-user Remote Survival Game Recordings. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13721 LNAI, pp. 163–177). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-20980-2_15
