Training Models to Generate, Recognize, and Reframe Unhelpful Thoughts

12 citations · 32 Mendeley readers

Abstract

Many cognitive approaches to well-being, such as recognizing and reframing unhelpful thoughts, have received considerable empirical support over the past decades, yet still lack widespread adoption in self-help formats. One barrier to that adoption is the lack of adequately specific and diverse dedicated practice material. This work examines whether current language models can be leveraged both to produce a virtually unlimited quantity of practice material illustrating standard unhelpful thought patterns matched to specific given contexts, and to generate suitable positive reframing proposals. We propose PATTERNREFRAME, a novel dataset of about 10k examples of thoughts containing unhelpful thought patterns conditioned on a given persona, accompanied by about 27k positive reframes. By using this dataset to train and/or evaluate current models, we show that existing models can already be powerful tools for generating an abundance of tailored practice material and hypotheses, with no or minimal additional model training required.
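The three tasks the abstract describes (generating a persona-conditioned unhelpful thought, recognizing its pattern, and producing a positive reframe) can be illustrated as prompt construction for a language model. The sketch below is an illustrative assumption only: the prompt wording, the pattern list, and the function names are hypothetical and are not the paper's actual templates or the PATTERNREFRAME annotation scheme.

```python
# Hypothetical prompt templates for the three tasks described in the abstract.
# The pattern names below are common examples from the cognitive-behavioral
# literature, not necessarily the exact set used in PATTERNREFRAME.

UNHELPFUL_PATTERNS = [
    "catastrophizing",
    "black-and-white thinking",
    "mind reading",
    "overgeneralizing",
]

def generation_prompt(persona: str, pattern: str) -> str:
    """Prompt a model to write a thought exhibiting `pattern` for `persona`."""
    return (
        f"Persona: {persona}\n"
        f"Write a thought this person might have that shows the unhelpful "
        f"pattern '{pattern}'.\nThought:"
    )

def recognition_prompt(thought: str) -> str:
    """Prompt a model to classify which unhelpful pattern a thought shows."""
    options = ", ".join(UNHELPFUL_PATTERNS)
    return (
        f"Thought: {thought}\n"
        f"Which of these unhelpful thought patterns does it show? "
        f"Options: {options}.\nAnswer:"
    )

def reframing_prompt(persona: str, thought: str) -> str:
    """Prompt a model for a positive reframe of the thought, in persona context."""
    return (
        f"Persona: {persona}\n"
        f"Unhelpful thought: {thought}\n"
        f"Rewrite the thought in a more balanced, helpful way.\nReframe:"
    )

prompt = generation_prompt("I am a nurse who works night shifts.", "catastrophizing")
```

Each prompt string would be passed to a text-generation model of choice; conditioning on the persona is what yields context-specific, rather than generic, practice material.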

Citation (APA)

Maddela, M., Ung, M., Xu, J., Madotto, A., Foran, H., & Boureau, Y. L. (2023). Training Models to Generate, Recognize, and Reframe Unhelpful Thoughts. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 13641–13660). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.763
