For school teachers and Designated Safeguarding Leads (DSLs), computers and other school-owned communication devices are both indispensable and deeply worrisome. Children require internet access for their education, along with a standard institutional ICT infrastructure including e-mail and other forms of online communication. Given the sheer volume of data generated and shared within schools each day, most teachers and DSLs can no longer monitor the safety and wellbeing of their students without specialist safeguarding software. In this paper, we experiment with state-of-the-art neural network models to model a dataset of almost 9,000 anonymised child-generated chat messages from the Microsoft Teams platform. The dataset was manually annotated into the two classes of interest to a monitoring program: true positives (real safeguarding concerns) and false positives (false alarms). These classes were then further annotated into eight fine-grained classes of safeguarding concerns (or false alarms). For the binary classification, we achieved a macro F1 score of 87.32, while for the fine-grained classification our models achieved a macro F1 score of 73.56. This first experiment into the use of deep learning for detecting safeguarding concerns represents an important step towards providing accurate and reliable monitoring information to busy teachers and safeguarding leads.
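The abstract does not specify the model architecture or training configuration; as a rough illustration of the binary task it describes (classifying messages as real concerns vs. false alarms and reporting macro F1), the following is a minimal sketch of fine-tuning a generic transformer classifier. The model name, hyperparameters, column names, and placeholder messages are all assumptions for illustration, not the authors' actual setup or data.

```python
# Sketch of the binary safeguarding-concern classification task described above.
# The model choice (bert-base-uncased), hyperparameters, and the dummy rows below
# are illustrative assumptions, not the paper's actual configuration or dataset.
import numpy as np
from datasets import Dataset
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-uncased"  # assumed; the paper only says "state-of-the-art neural network models"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Placeholder rows standing in for the anonymised chat messages
# (label 1 = true positive / real concern, label 0 = false positive / false alarm).
train_rows = [{"text": "placeholder flagged message", "label": 1},
              {"text": "placeholder harmless message", "label": 0}]
dev_rows = [{"text": "another placeholder message", "label": 0}]

def tokenize(batch):
    # Chat messages are short, so a modest max_length keeps training cheap.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

def macro_f1(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"macro_f1": f1_score(labels, preds, average="macro")}

train_ds = Dataset.from_list(train_rows).map(tokenize, batched=True)
dev_ds = Dataset.from_list(dev_rows).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="safeguarding-binary",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
    eval_dataset=dev_ds,
    compute_metrics=macro_f1,
)
trainer.train()
print(trainer.evaluate())  # reports macro F1 on the held-out messages
```

The same pattern extends to the eight-class fine-grained setting by setting `num_labels=8` and supplying the fine-grained labels; macro F1 remains a sensible metric there because it weights rare concern categories equally with common ones.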
Franklin, E., & Ranasinghe, T. (2023). Deep Learning Approaches to Detecting Safeguarding Concerns in Schoolchildren’s Online Conversations. In International Conference Recent Advances in Natural Language Processing, RANLP (pp. 364–372). Incoma Ltd. https://doi.org/10.26615/978-954-452-092-2_041