Towards Leveraging AI-based Moderation to Address Emergent Harassment in Social Virtual Reality

Abstract

Extensive HCI research has investigated how to prevent and mitigate harassment in virtual spaces, particularly by leveraging human-based and Artificial Intelligence (AI)-based moderation. However, social Virtual Reality (VR) constitutes a novel social space that faces both intensified harassment challenges and a lack of consensus on how moderation should be approached to address such harassment. Drawing on 39 interviews with social VR users with diverse backgrounds, we investigate the perceived opportunities and limitations for leveraging AI-based moderation to address emergent harassment in social VR, and how future AI moderators can be designed to enhance such opportunities and address limitations. We provide the first empirical investigation into re-envisioning AI's new roles in innovating content moderation approaches to better combat harassment in social VR. We also highlight important principles for designing future AI-based moderation incorporating user-human-AI collaboration to achieve safer and more nuanced online spaces.

Citation (APA)

Schulenberg, K., Li, L., Freeman, G., Zamanifard, S., & McNeese, N. J. (2023). Towards Leveraging AI-based Moderation to Address Emergent Harassment in Social Virtual Reality. In Conference on Human Factors in Computing Systems - Proceedings. Association for Computing Machinery. https://doi.org/10.1145/3544548.3581090
