One of the key challenges facing online communities, from social networks to the comment sections of news sites, is low-quality discussion. In the print era, when space carried a high cost, publications exercised strong editorial control over which letters from readers were published. As news moved online, comment sections were often open and unmoderated, and sometimes outsourced to other companies. One notable exception is the New York Times, which has, since 2007, employed a staff of full-time moderators to review all comments submitted to its Web site (Etim, 2017). Exemplary comments representing a range of views are highlighted and tagged as NYT Picks. Many publishers did not anticipate how important moderators would be, and some removed their comment sections altogether because they could not handle comments adequately. With vast numbers of online comments, and with social networks facing growing challenges in managing toxic language, the role of moderators is becoming much more demanding (Gillespie, 2018; Roberts, 2019). There is thus growing interest in developing automation to help filter and organize online comments for both moderators and readers (Park, et al., 2016).

Comment moderation is often a task of filtering out, i.e., deleting, toxic and abusive comments, the 'nasty' part of the Internet (Chen, 2017). Research presented at the Abusive Language Workshops [1] often explores methods to automate the detection and filtering of abusive and toxic comments, what Seering, et al. (2019) define as reactive interventions. We propose the flipside of that task, the promotion of constructive comments, a form of proactive intervention (see also Jurgens, et al., 2019). While filtering will always be necessary, we like to think that, if the comments we define as constructive are promoted and highlighted, a positive contagion effect will emerge (Meltzer, 2015; West and Stone, 2014).

There is, in fact, evidence that nudges and interventions have an impact on the civility of online conversations, and that a critical mass effect takes place with enough polite contributors. Stroud (2011) showed that a 'respect' button (instead of 'like' and 'dislike') encouraged commenters to engage with political views they disagreed with. Experiments indicate that highlighting more polite posts leads to an increased perception of civility (Grevet, 2016), and that commenters exposed to thoughtful comments produce, in turn, higher-quality thoughtful comments (Sukumaran, et al., 2011). Evolutionary game models also support the hypothesis that a critical mass of civil users results in the spread of politeness in online interactions (Antoci, et al., 2016).

Shanahan (2017) argued that news organizations ought to be engaged in collecting and amplifying news comments, including seeking out diverse participants and varied perspectives. The identification of constructive comments can become another tool to help news outlets foster better conversations online. The dataset and experiments that we present in this paper contribute to that effort.

To illustrate the various choices one can make when considering the quality of a comment, we show a potential spectrum of comments and their quality in Table 1. At the bottom of the table, we see both negative and positive comments that are non-constructive, because they do not seem to contribute to the conversation. The middle comment is not necessarily constructive; it provides only an opinion, with no rationale for that opinion.
Such comments, non-constructive but also not toxic, are little more than back channels and do not contribute much to the conversation (Gautam and Taboada, 2019).
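To make the task framing concrete, the sketch below illustrates how constructiveness promotion could be cast as binary text classification, ranking incoming comments by predicted constructiveness rather than only filtering toxic ones out. This is a minimal illustration under our own assumptions, not the model or features used in this paper; the pipeline choices and toy data are hypothetical.

# Illustrative sketch only: constructiveness ranking framed as binary text
# classification. The features, model, and toy data are assumptions for
# exposition, not the method presented in this paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy examples standing in for an annotated corpus of comments labelled
# constructive (1) vs. non-constructive (0).
comments = [
    "The article overlooks the funding data; here is why that matters.",
    "Good point on transit costs, though rural areas differ because...",
    "Garbage article.",
    "I agree.",
]
labels = [1, 1, 0, 0]

# A simple bag-of-words baseline: TF-IDF features feeding a logistic
# regression classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(comments, labels)

# Rank unseen comments by predicted probability of being constructive, so
# that a moderation interface could surface the top-ranked ones (a
# proactive intervention) instead of only deleting toxic ones (reactive).
new_comments = ["This misses the point entirely.", "One counterexample: ..."]
scores = clf.predict_proba(new_comments)[:, 1]
for score, comment in sorted(zip(scores, new_comments), reverse=True):
    print(f"{score:.2f}  {comment}")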