How to Train a (Bad) Algorithmic Caseworker: A Quantitative Deconstruction of Risk Assessments in Child Welfare


Abstract

Child welfare (CW) agencies use risk assessment tools as a means to achieve evidence-based, consistent, and unbiased decision-making. These risk assessments act as data collection mechanisms and have been further developed into algorithmic systems in recent years. Moreover, several of these algorithms have reinforced biased theoretical constructs and predictors because of the easy availability of structured assessment data. In this study, we critically examine the Washington Assessment of Risk Model (WARM), a prominent risk assessment tool that has been adopted by over 30 states in the United States and has been repurposed into more complex algorithmic systems. We compared WARM against the narrative coding of casenotes written by caseworkers who used WARM. We found significant discrepancies between the casenotes and WARM data, where WARM scores did not mirror caseworkers' notes about family risk. We provide the SIGCHI community with some initial findings from the quantitative deconstruction of a child-welfare risk assessment algorithm.

Citation (APA)

Saxena, D., Repaci, C., Sage, M. D., & Guha, S. (2022). How to Train a (Bad) Algorithmic Caseworker: A Quantitative Deconstruction of Risk Assessments in Child Welfare. In Conference on Human Factors in Computing Systems - Proceedings. Association for Computing Machinery. https://doi.org/10.1145/3491101.3519771
