Towards explainability in using deep learning for the detection of anorexia in social media


Abstract

Explainability of deep learning models has become increasingly important as neural-based approaches are now prevalent in natural language processing. Explainability is particularly important in sensitive domain applications such as clinical psychology. This paper focuses on the quantitative assessment of a user-level attention mechanism in the task of detecting signs of anorexia in social media users from their posts. The assessment is done by monitoring the performance measures of a neural classifier, with and without user-level attention, when only a limited number of highly-weighted posts are provided. Results show that the weights assigned by the user-level attention strongly correlate with the amount of information posts provide in indicating whether their author is at risk of anorexia, and hence can be used to explain the decision of the neural classifier.
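The evaluation described above relies on two steps: computing a normalized attention weight for each of a user's posts, and then retaining only the top-k most highly-weighted posts for classification. The following is a minimal NumPy sketch of that idea; the function names, the fixed context vector, and the use of a plain dot-product scorer are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def user_attention(post_vectors, context):
    """Score each post embedding against a context vector, softmax-normalize
    the scores, and return the attention weights together with the attended
    user representation (a hypothetical simplification of user-level attention)."""
    scores = post_vectors @ context            # one scalar score per post
    weights = np.exp(scores - scores.max())    # numerically stable softmax
    weights /= weights.sum()
    user_repr = weights @ post_vectors         # attention-weighted average of posts
    return weights, user_repr

def top_k_posts(weights, k):
    """Indices of the k most highly-weighted posts, mirroring the evaluation
    setup in which the classifier sees only a limited number of top posts."""
    return np.argsort(weights)[::-1][:k]

# Toy usage: 5 posts embedded in 8 dimensions.
rng = np.random.default_rng(0)
posts = rng.normal(size=(5, 8))
ctx = rng.normal(size=8)
w, rep = user_attention(posts, ctx)
selected = top_k_posts(w, 2)   # posts most indicative of the classifier's decision
```

If the attention weights are informative, classifying from the `selected` posts alone should degrade performance only slightly compared to using all posts, which is the correlation the paper measures.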

Citation (APA)

Amini, H., & Kosseim, L. (2020). Towards explainability in using deep learning for the detection of anorexia in social media. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12089 LNCS, pp. 225–235). Springer. https://doi.org/10.1007/978-3-030-51310-8_21
