Augmenting Multi-Party Face-to-Face Interactions Amongst Strangers with User Generated Content

Abstract

We present the results of an investigation into the role of curated representations of self, which we term Digital Selfs, in augmented multi-party face-to-face interactions. Advances in wearable technologies (such as Head-Mounted Displays) have renewed interest in augmenting face-to-face interaction with digital content. However, existing work focuses on algorithmic matching between users, based on data-mining shared interests from individuals’ social media accounts, which can cause information that is inappropriate or irrelevant to be disclosed to others. An alternative approach is to let users manually curate the digital augmentation they present to others, allowing them to present those aspects of self that are most important to them and to avoid undesired disclosure. Through interviews, video analysis, questionnaires, and device logging of 23 participants in 6 multi-party gatherings where individuals could mix freely, we identified how users created Digital Selfs from media largely outside their existing social media accounts, and how Digital Selfs presented through HMDs were employed in multi-party interactions, playing a key role in helping strangers to interact with each other. We present guidance for the design of future multi-party digital augmentations in collaborative scenarios.

Citation (APA)
Kytö, M., & McGookin, D. (2017). Augmenting Multi-Party Face-to-Face Interactions Amongst Strangers with User Generated Content. Computer Supported Cooperative Work: CSCW: An International Journal, 26(4–6), 527–562. https://doi.org/10.1007/s10606-017-9281-1
