In many nations, demand for mental health services currently outstrips supply, especially in the area of talk-based psychological interventions. Within this context, chatbots (software applications designed to simulate conversations with human users) are increasingly being explored as potential adjuncts to traditional mental healthcare delivery, with a view to improving accessibility and reducing waiting times. However, the effectiveness and acceptability of such chatbots remain under-researched. This study evaluates mental health professionals’ perceptions of Pi, a relational Artificial Intelligence (AI) chatbot, in the early stages of the psychotherapeutic process (problem exploration). We asked 63 therapists to assess therapy transcripts of exchanges between a human client and Pi (human-AI) alongside traditional therapy transcripts of exchanges between therapists and clients (human-human). Therapists were unable to reliably discriminate between the human-AI and human-human transcripts, identifying them correctly only 53.9% of the time (no better than chance), and on average they rated the human-AI transcripts as higher in quality. These findings have potentially profound implications for the treatment of mental health problems, adding tentative support for the use of relational AI chatbots to provide initial assistance for mild to moderate psychological issues, especially when access to human therapists is constrained.
Kuhail, M. A., Alturki, N., Thomas, J., Alkhalifa, A. K., & Alshardan, A. (2024). Human-Human vs Human-AI Therapy: An Empirical Study. International Journal of Human-Computer Interaction. https://doi.org/10.1080/10447318.2024.2385001