Abstract
Background: Computer perception (CP) technologies, including digital phenotyping, affective computing, and related passive sensing approaches, offer unprecedented opportunities to personalize health care, especially mental health care, yet they also provoke concerns about privacy, bias, and the erosion of empathic, relationship-centered practice. At present, little is known about what stakeholders who design, deploy, and experience these tools in real-world settings perceive as the risks and benefits of CP technologies.

Objective: This study aims to explore key stakeholder perspectives on the potential benefits, risks, and concerns associated with integrating CP technologies into patient care. A better understanding of these concerns is crucial for responding to and mitigating them through design and implementation strategies that augment, rather than compromise, patient-centered and humanistic care and associated outcomes.

Methods: We conducted in-depth, semistructured interviews with 102 stakeholders involved at key points in CP's development and implementation: adolescent patients (n=20) and their caregivers (n=20); frontline clinicians (n=20); technology developers (n=21); and ethics, legal, policy, or philosophy scholars (n=21). Interviews (∼45 minutes each) explored perceived benefits, risks, and implementation challenges of CP in clinical care. Transcripts underwent thematic analysis by a multidisciplinary team; reliability was enhanced through double coding and consensus adjudication.

Results: Stakeholders raised concerns across 7 themes: (1) Data Privacy and Protection (88/102, 86.3%); (2) Trustworthiness and Integrity of CP Technologies (72/102, 70.6%); (3) Direct and Indirect Patient Harms (65/102, 63.7%); (4) Utility and Implementation Challenges (60/102, 58.8%); (5) Patient-Specific Relevance (24/102, 23.5%); (6) Regulation and Governance (17/102, 16.7%); and (7) Philosophical Critiques of Reductionism (13/102, 12.7%).
A cross-cutting insight was the primacy of context and subjective meaning in determining whether CP outputs are clinically valid and actionable. Participants warned that without attention to these factors, algorithms risk misclassification and the dehumanization of care.

Conclusions: To operationalize humanistic safeguards, we propose "personalized road maps": co-designed plans that predetermine which metrics will be monitored, how and when feedback is shared, thresholds for clinical action, and procedures for reconciling discrepancies between algorithmic inferences and lived experience. Road maps embed patient education, dynamic consent, and tailored feedback, thereby aligning CP deployment with patient autonomy, therapeutic alliance, and ethical transparency. This multistakeholder study provides the first comprehensive, evidence-based account of the relational, technical, and governance challenges raised by CP tools in clinical care. By translating these insights into personalized road maps, we offer a practical framework for developers, clinicians, and policy makers seeking to harness continuous behavioral data while preserving the humanistic core of care.
Kostick-Quenet, K. M., Hurley, M. E., Ayaz, S., Herrington, J. D., Zampella, C. J., Parish-Morris, J., … Storch, E. A. (2026). Stakeholder Perspectives on Humanistic Implementation of Computer Perception in Health Care: Qualitative Study. JMIR Mental Health, 13. https://doi.org/10.2196/79182