Ethical Dilemmas, Mental Health, Artificial Intelligence, and LLM-Based Chatbots

Abstract

The present study analyzes the bioethical dilemmas related to the use of chatbots in the field of mental health. A rapid review of the scientific literature and media coverage was conducted, followed by systematization and analysis of the collected information. A total of 24 moral dilemmas were identified, cutting across the four bioethical principles and shaped by the contexts and populations that create, use, and regulate these systems. The dilemmas were classified according to the specific populations involved and the chatbots' functions in mental health. In conclusion, bioethical dilemmas in mental health can be grouped into four areas: quality of care, access and exclusion, responsibility and human supervision, and regulations and policies for LLM-based chatbot use. It is recommended that chatbots be developed specifically for mental health purposes, with tasks complementary to the therapeutic care provided by human professionals, and that their implementation be properly regulated and supported by a strong ethical framework in the field at both national and international levels.

Citation (APA)
Cabrera, J., Loyola, M. S., Magaña, I., & Rojas, R. (2023). Ethical Dilemmas, Mental Health, Artificial Intelligence, and LLM-Based Chatbots. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13920 LNBI, pp. 313–326). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-34960-7_22
