Using the DiCoT framework for integrated multimodal analysis in mixed-reality training environments

Citations: 7 · Mendeley readers: 62

Abstract

Simulation-based training (SBT) programs are widely used by organizations to train individuals and teams in the cognitive and psychomotor skills needed for effective workplace performance across a broad range of applications. Distributed cognition has become a popular cognitive framework for designing and evaluating these SBT environments, with structured methodologies such as Distributed Cognition for Teamwork (DiCoT) used for analysis. However, the analyses and evaluations generated by such distributed cognition frameworks require extensive domain knowledge and manual coding and interpretation, and the results are primarily qualitative. In this work, we propose and develop the application of multimodal learning analytics techniques to SBT scenarios. Using these methods, the rich multimodal data collected in SBT environments can be used to generate more automated interpretations of trainee performance that supplement and extend traditional DiCoT analysis. To demonstrate these methods, we present a case study of nurses training in a mixed-reality manikin-based (MRMB) environment. We show how combined analysis of the video, speech, and eye-tracking data collected as the nurses train in the MRMB environment supports and enhances traditional qualitative DiCoT analysis. Applying such quantitative, data-driven analysis methods allows trainee activities in SBT and MRMB environments to be analyzed online. With continued development, these methods could provide targeted feedback to learners, detailed reviews of training performance to instructors, and data-driven evidence to simulation designers for improving the environment.
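The abstract's core technical step is fusing streams such as speech and eye tracking on a shared timeline before interpretation. The sketch below is illustrative only (not from the paper): the class names, areas of interest, and all data are hypothetical, and it shows just the alignment step — counting where a trainee looks while each utterance is spoken.

```python
# Hypothetical multimodal-alignment sketch: for each timestamped speech
# segment, tally eye-tracking samples by area of interest (AOI).
from dataclasses import dataclass

@dataclass
class SpeechSegment:
    start: float   # seconds from scenario start
    end: float
    speaker: str

@dataclass
class GazeSample:
    t: float       # timestamp in seconds
    target: str    # AOI label, e.g. "manikin", "monitor"

def gaze_during_speech(segments, samples):
    """Return, per speech segment, gaze-sample counts by AOI."""
    result = []
    for seg in segments:
        counts = {}
        for s in samples:
            if seg.start <= s.t < seg.end:
                counts[s.target] = counts.get(s.target, 0) + 1
        result.append((seg.speaker, counts))
    return result

speech = [SpeechSegment(0.0, 2.0, "nurse_1"), SpeechSegment(2.0, 4.0, "nurse_2")]
gaze = [GazeSample(0.5, "manikin"), GazeSample(1.5, "monitor"), GazeSample(2.5, "manikin")]
print(gaze_during_speech(speech, gaze))
# → [('nurse_1', {'manikin': 1, 'monitor': 1}), ('nurse_2', {'manikin': 1})]
```

In a real pipeline the segments would come from speech diarization and the AOI labels from gaze-to-object mapping; this alignment output is the kind of quantitative evidence that can supplement a qualitative DiCoT account.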



Citation (APA)

Vatral, C., Biswas, G., Cohn, C., Davalos, E., & Mohammed, N. (2022). Using the DiCoT framework for integrated multimodal analysis in mixed-reality training environments. Frontiers in Artificial Intelligence, 5. https://doi.org/10.3389/frai.2022.941825

