Toxic Comment Detection using LSTM


Abstract

While online communication media act as platforms for people to connect, collaborate and discuss, overcoming barriers to communication, some use them to direct hateful and abusive comments that can harm an individual's emotional and mental well-being. The explosion of online communication makes it virtually impossible to filter out hateful tweets manually, so a method is needed to filter out hate speech and make social media cleaner and safer to use. This paper aims to achieve this through text mining and deep learning models built on LSTM neural networks that can near-accurately identify, classify and filter out hate speech. The model we have developed classifies comments as toxic or non-toxic with 94.49% precision, 92.79% recall and a 94.94% accuracy score.
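The paper itself provides no code, but the architecture the abstract describes (an LSTM reading a sequence of embedded tokens, followed by a sigmoid head that outputs the probability a comment is toxic) can be sketched as a minimal forward pass. The sketch below is illustrative only: the weight shapes, gate ordering, hidden size and all variable names are assumptions, not the authors' model, and the random weights are untrained.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) bias.
    Assumed gate order: input, forget, cell candidate, output."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2*H])        # forget gate
    g = np.tanh(z[2*H:3*H])      # candidate cell state
    o = sigmoid(z[3*H:4*H])      # output gate
    c = f * c_prev + i * g       # new cell state
    h = o * np.tanh(c)           # new hidden state
    return h, c

def classify(sequence, W, U, b, w_out, b_out):
    """Run the LSTM over an embedded comment; sigmoid head gives P(toxic)."""
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in sequence:
        h, c = lstm_step(x, h, c, W, U, b)
    return sigmoid(w_out @ h + b_out)

# Toy run with random, untrained weights: D=8-dim embeddings, H=16 units.
rng = np.random.default_rng(0)
D, H = 8, 16
W = rng.normal(size=(4 * H, D)) * 0.1
U = rng.normal(size=(4 * H, H)) * 0.1
b = np.zeros(4 * H)
w_out, b_out = rng.normal(size=H) * 0.1, 0.0
seq = rng.normal(size=(5, D))     # a 5-token comment, already embedded
p = classify(seq, W, U, b, w_out, b_out)
print(float(p))                   # a probability strictly between 0 and 1
```

In practice such a model would be trained with a binary cross-entropy loss on labelled comments, and the reported precision/recall/accuracy would come from a held-out test split; none of that is shown here.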

Citation (APA)

Dubey, K., Nair, R., Khan, M. U., & Shaikh, P. S. (2020). Toxic Comment Detection using LSTM. In Proceedings of 2020 3rd International Conference on Advances in Electronics, Computers and Communications, ICAECC 2020. Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICAECC50550.2020.9339521
