This article reports two studies conducted in the United States, Germany, South Korea, and China that examine how online content providers (OCPs) exercise their responsibility for dealing with harmful online communication (HOC) by moderating user-generated content. The first study employed a content analysis of 547 HOC policy documents. In the second study, 41 representatives of OCPs were interviewed about how these policies are implemented. We show that HOC policies are most often communicated through user-unfriendly terms of service; only South Korean OCPs present their policies in a vivid, user-friendly manner. Few organizations, mainly from the United States and Germany, encourage counter-speech. The organizational actions against HOC most commonly mentioned in the policies are deleting posts and blocking accounts. The interviews reveal, however, that organizations, apart from those in China, are cautious about implementing such reactive actions: they fear accusations of censorship and acknowledge the tension between free speech and their content moderation practices. Manual inspection emerged as the "gold standard" for identifying HOC, although organizations operating large platforms widely apply machine-learning technology or artificial intelligence. In sum, our research suggests that OCPs are not proactive enough in communicating to prevent HOC and, when handling HOC, often focus more on avoiding legal ramifications than on educating users.
Einwiller, S. A., & Kim, S. (2020). How Online Content Providers Moderate User-Generated Content to Prevent Harmful Online Communication: An Analysis of Policies and Their Implementation. Policy & Internet, 12(2), 184–206. https://doi.org/10.1002/poi3.239