Major open platforms, such as Facebook, Twitter, Instagram, and TikTok, are bombarded with postings that violate platform community standards, offend societal norms, and cause harm to individuals and groups. Managing such sites requires identifying anti-social content and behavior, removing offending content, and sanctioning posters. This process is not as straightforward as it seems: what is offensive, and to whom, varies by individual, group, and community; what action to take depends on stated standards, community expectations, and the extent of the offense; conversations can create and sustain anti-social behavior (ASB); networks of individuals can launch coordinated attacks; and fake accounts can side-step sanctions. In meeting the challenges of moderating extreme content, two guiding questions stand out: how do we define and identify ASB online? And, given the quantity and nuances of offensive content, how do we make the best use of automation and human judgment in managing offending content and ASB? To address these questions, this article reviews existing studies of ASB online and examines in detail the moderation practices of major social media platforms. The pros and cons of automated and human review are discussed within a framework of three layers: environment, community, and crowd. Throughout, the article draws attention to the network dimension of ASB, emphasizing how ASB builds a relation between perpetrator(s) and victim(s), and how that relation can make the behavior more or less offensive.
Haythornthwaite, C. (2023). Moderation, networks, and anti-social behavior online. Social Media + Society, 9(3). https://doi.org/10.1177/20563051231196874