In this paper, we present a novel multi-modal multi-party conversation corpus called DyCoDa (Dynamic Conversational Dataset). It consists of intensive remote conversations among three interaction partners within a collaborative problem-solving scenario. The fundamental aim of building this corpus is to investigate how humans interact with each other online via a video conferencing tool in a cooperative setting, and which audio-visual cues are conveyed and perceived during this interaction. In addition to high-quality audio and video recordings, the depth and infrared information captured with the Microsoft Azure Kinect is also provided. Furthermore, various self-evaluation questionnaires are used to collect socio-demographic information as well as each participant's personality structure and individual team role. In total, 30 native German-speaking participants took part in the experiment, which was carried out at Magdeburg University. Overall, DyCoDa comprises 10 h of recorded interactions and will benefit researchers in the field of human-computer interaction.
Citation:
Dresvyanskiy, D., Sinha, Y., Busch, M., Siegert, I., Karpov, A., & Minker, W. (2022). DyCoDa: A Multi-modal Data Collection of Multi-user Remote Survival Game Recordings. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13721 LNAI, pp. 163–177). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-20980-2_15