Reducing Exposure to Hateful Speech Online

Abstract

Regular exposure to hateful content online has been observed to reduce empathy in individuals and to affect the mental health of targeted groups. Research shows that a significant number of young people fall victim to hateful speech online. Unfortunately, such content is often poorly moderated by online platforms, leaving users to mitigate the problem themselves. Machine Learning and browser extensions could potentially be used to identify hateful content and assist users in reducing their exposure to hate speech online. A proof-of-concept extension for the Google Chrome web browser, combining a local word blocker with a cloud-based model, was developed to explore how effective browser extensions could be in identifying and managing exposure to hateful speech online. Its usability and functionality were evaluated by 124 participants to gauge the feasibility of the approach. Participants responded positively regarding the usability of the extension and gave feedback on where the proof-of-concept could be improved. The research demonstrates the potential for a browser extension aimed at average users, using both word blocking and cloud-based Machine Learning techniques, to reduce individuals' exposure to hateful speech online.
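
The abstract does not include implementation details, but the two-stage design it describes (a local word blocker plus a cloud-based classifier inside a Chrome extension) could be sketched as a content script along the following lines. This is a minimal illustrative sketch, not the authors' implementation; the blocklist, the classifier endpoint URL, and the request/response shape are all assumptions made for the example.

// content-script.ts -- illustrative sketch only, not the authors' code.
// Assumes it runs as a Chrome extension content script with DOM access.

// Hypothetical local blocklist; in practice this would be user-configurable.
const BLOCKLIST = ["examplehateword1", "examplehateword2"];

// Hypothetical cloud classifier endpoint (placeholder URL, not a real service).
const CLASSIFIER_URL = "https://example.com/api/classify";

// Stage 1: local word blocking -- mask any text node containing a blocked word.
function maskBlockedWords(root: Node): void {
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  let node: Node | null;
  while ((node = walker.nextNode())) {
    const lower = (node.textContent ?? "").toLowerCase();
    if (BLOCKLIST.some((word) => lower.includes(word))) {
      // Replace the offending text with a neutral placeholder.
      node.textContent = "[content hidden]";
    }
  }
}

// Stage 2: cloud-based classification -- send remaining paragraphs to a
// remote model and hide those flagged as hateful. The JSON shape shown
// here ({ text } in, { hateful } out) is assumed for illustration.
async function classifyParagraphs(): Promise<void> {
  const paragraphs = Array.from(document.querySelectorAll("p"));
  for (const p of paragraphs) {
    const text = p.textContent?.trim();
    if (!text) continue;
    try {
      const response = await fetch(CLASSIFIER_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ text }),
      });
      const result: { hateful: boolean } = await response.json();
      if (result.hateful) {
        p.textContent = "[content hidden by classifier]";
      }
    } catch {
      // On network failure, fall back to local word blocking only.
    }
  }
}

// Run the local blocker immediately, then the cloud pass.
maskBlockedWords(document.body);
void classifyParagraphs();

In such a design the local blocklist gives an instant, offline first pass, while the cloud model catches hateful content that simple word matching would miss; the trade-off is the latency and privacy cost of sending page text to a remote service.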

Citation (APA)

Bowker, J., & Ophoff, J. (2022). Reducing Exposure to Hateful Speech Online. In Lecture Notes in Networks and Systems (Vol. 508 LNNS, pp. 630–645). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-10467-1_38
