Do Speech-Based Collaboration Analytics Generalize Across Task Contexts?

39 citations · 50 Mendeley readers

Abstract

We investigated the generalizability of language-based analytics models across two collaborative problem solving (CPS) tasks: an educational physics game and a block programming challenge. We analyzed a dataset of 95 triads (N=285) who used videoconferencing to collaborate on both tasks for an hour. We trained supervised natural language processing classifiers on automatic speech recognition transcripts to predict the human-coded CPS facets (skills) of constructing shared knowledge, negotiation/coordination, and maintaining team function. We tested three methods for representing collaborative discourse: (1) deep transfer learning (using BERT), (2) n-grams (counts of words and phrases), and (3) word categories (using the Linguistic Inquiry and Word Count [LIWC] dictionary). We found that the BERT and LIWC methods generalized across tasks with only a small degradation in performance (Transfer Ratio of .93, where 1 indicates perfect transfer), whereas the n-gram models had limited generalizability (Transfer Ratio of .86), suggesting overfitting to task-specific language. We discuss the implications of our findings for deploying language-based collaboration analytics in authentic educational environments.
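The cross-task evaluation the abstract describes can be sketched as follows: train a classifier on utterances from one task, score it on held-out data from the same task and from the other task, and take the ratio of cross-task to within-task performance. This is a minimal illustrative sketch, not the authors' pipeline: the toy utterances, labels, and the exact definition of the Transfer Ratio here are assumptions, and scikit-learn's bag-of-n-grams pipeline stands in for the paper's n-gram method.

```python
# Illustrative sketch (assumed data and metric definition, not the paper's code):
# an n-gram classifier for one CPS facet, evaluated within-task and cross-task.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

# Toy training utterances from task A (physics game); 1 = facet present
# (e.g., constructing shared knowledge), 0 = facet absent.
train_texts = [
    "so the ball needs more mass here",
    "what if we increase the ramp angle",
    "ok you place it and i will test",
    "sure go ahead you take a turn",
]
train_labels = [1, 1, 0, 0]

# Held-out test sets: same task (A) vs. the other task (B, block programming).
test_a, labels_a = ["maybe the ball needs a steeper ramp",
                    "you go ahead i will watch"], [1, 0]
test_b, labels_b = ["the loop should repeat the move block",
                    "sure you take it and type"], [1, 0]

# Bag of unigrams and bigrams feeding a logistic regression classifier.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

auc_within = roc_auc_score(labels_a, clf.predict_proba(test_a)[:, 1])
auc_cross = roc_auc_score(labels_b, clf.predict_proba(test_b)[:, 1])

# Assumed definition: cross-task performance relative to within-task performance;
# 1 would indicate perfect transfer.
transfer_ratio = auc_cross / auc_within
print(f"within-task AUC={auc_within:.2f}, cross-task AUC={auc_cross:.2f}, "
      f"transfer ratio={transfer_ratio:.2f}")
```

Under this framing, a transfer ratio near 1 (as the paper reports for BERT and LIWC) means little is lost when moving to an unseen task, while a lower ratio (as for n-grams) signals reliance on task-specific vocabulary.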

Citation (APA)

Pugh, S. L., Rao, A., Stewart, A. E. B., & D’Mello, S. K. (2022). Do Speech-Based Collaboration Analytics Generalize Across Task Contexts? In ACM International Conference Proceeding Series (pp. 208–218). Association for Computing Machinery. https://doi.org/10.1145/3506860.3506894
