Towards Identifying Fine-Grained Depression Symptoms from Memes

Abstract

The past decade has seen significant attention devoted to developing computational methods for classifying social media data based on the presence or absence of mental health conditions. In the context of mental health, for clinicians to make an accurate diagnosis or provide personalized intervention, it is crucial to identify fine-grained mental health symptoms. To this end, we conduct a focused study on depression disorder and introduce a new task of identifying fine-grained depressive symptoms from memes. Toward this, we create a high-quality dataset (RESTORE) annotated with 8 fine-grained depression symptoms based on the clinically adopted PHQ-9 questionnaire. We benchmark RESTORE on 20 strong monomodal and multimodal methods. Additionally, we show how imposing orthogonal constraints on textual and visual feature representations in a multimodal setting forces the model to learn non-redundant and de-correlated features, leading to better prediction of fine-grained depression symptoms. Further, we conduct an extensive human analysis and elaborate on the limitations of existing multimodal models that often overlook the implicit connection between the visual and textual elements of a meme.
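To make the orthogonal-constraint idea concrete, the sketch below shows one common way such a constraint can be implemented: an auxiliary loss that penalizes the cross-correlation between the batch of textual and visual feature vectors, pushing the two representations toward de-correlated subspaces. This is a minimal, hypothetical PyTorch sketch, not the authors' exact formulation; the function name, normalization choice, and weighting term `lambda_ortho` are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def orthogonality_loss(text_feats: torch.Tensor, img_feats: torch.Tensor) -> torch.Tensor:
    """Soft orthogonality penalty between two batches of features.

    text_feats, img_feats: (batch, dim) representations from the text and
    image encoders. Minimizing the squared Frobenius norm of their
    cross-correlation matrix pushes the two feature spaces toward
    orthogonality, i.e. non-redundant, de-correlated representations.
    """
    t = F.normalize(text_feats, dim=-1)   # unit-normalize each text vector
    v = F.normalize(img_feats, dim=-1)    # unit-normalize each image vector
    cross = t.transpose(0, 1) @ v         # (dim, dim) cross-correlation matrix
    return (cross ** 2).sum()             # squared Frobenius norm

# Hypothetical usage inside a training step:
# total_loss = classification_loss + lambda_ortho * orthogonality_loss(h_text, h_image)
```

In practice the penalty is typically added to the main symptom-classification loss with a small weight so that it regularizes the encoders without dominating training.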

Citation (APA)
Yadav, S., Caragea, C., Zhao, C., Kumari, N., Solberg, M., & Sharma, T. (2023). Towards Identifying Fine-Grained Depression Symptoms from Memes. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 8890–8905). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.495
