Evaluating algorithmic bias on biomarker classification of breast cancer pathology reports

Abstract

Objectives: This work evaluated algorithmic bias in biomarker classification using electronic pathology reports from female breast cancer cases. Bias was assessed across 5 subgroups: cancer registry, race, Hispanic ethnicity, age at diagnosis, and socioeconomic status.

Materials and Methods: We utilized 594 875 electronic pathology reports from 178 121 tumors diagnosed in Kentucky, Louisiana, New Jersey, New Mexico, Seattle, and Utah to train 2 deep-learning algorithms to classify breast cancer patients using their biomarker test results. We used balanced error rate (BER), demographic parity (DP), equalized odds (EOD), and equal opportunity (EOP) to assess bias.

Results: We found differences in predictive accuracy between registries, with the highest accuracy in the registry that contributed the most data (Seattle registry; BER ratios for all registries >1.25). BER showed no significant algorithmic bias in extracting biomarkers (estrogen receptor, progesterone receptor, human epidermal growth factor receptor 2) for race, Hispanic ethnicity, age at diagnosis, or socioeconomic subgroups (BER ratio <1.25). DP, EOD, and EOP likewise indicated no significant bias.

Discussion: We observed significant differences in BER by registry, but no significant bias using the DP, EOD, and EOP metrics for sociodemographic or racial categories. This highlights the importance of employing a diverse set of metrics for a comprehensive evaluation of model fairness.

Conclusion: A thorough evaluation of algorithmic biases that may affect equality in clinical care is a critical step before deploying algorithms in the real world. We found little evidence of algorithmic bias in our biomarker classification tool. Artificial intelligence tools that expedite information extraction from clinical records could accelerate clinical trial matching and improve care.
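The abstract names the fairness metrics used but not their formulas. As a minimal illustrative sketch (not the authors' implementation), binary-classification versions of BER, a per-subgroup BER ratio, and a demographic parity gap could be computed as below; the choice of the best-performing subgroup as the ratio's denominator is an assumption for illustration.

```python
def balanced_error_rate(y_true, y_pred):
    """Mean of the false-negative and false-positive rates (binary labels)."""
    pos = [p for t, p in zip(y_true, y_pred) if t == 1]
    neg = [p for t, p in zip(y_true, y_pred) if t == 0]
    fnr = sum(1 for p in pos if p == 0) / len(pos)
    fpr = sum(1 for p in neg if p == 1) / len(neg)
    return 0.5 * (fnr + fpr)

def subgroup_ber_ratios(y_true, y_pred, groups):
    """BER per subgroup divided by the smallest subgroup BER.

    A ratio above a chosen threshold (the abstract uses 1.25)
    flags a disparity between subgroups.
    """
    bers = {}
    for g in set(groups):
        yt = [t for t, gg in zip(y_true, groups) if gg == g]
        yp = [p for p, gg in zip(y_pred, groups) if gg == g]
        bers[g] = balanced_error_rate(yt, yp)
    ref = min(bers.values())
    return {g: b / ref for g, b in bers.items()}

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rate across subgroups."""
    rates = []
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates.append(sum(preds) / len(preds))
    return max(rates) - min(rates)
```

For example, with toy labels `y_true = [1,1,0,0,1,0,1,0]`, predictions `y_pred = [1,0,0,1,1,0,1,1]`, and subgroups `['A','A','A','A','B','B','B','B']`, subgroup A's BER is 0.5, subgroup B's is 0.25, so A's BER ratio is 2.0 and would exceed a 1.25 threshold.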

APA

Tschida, J., Chandrashekar, M., Peluso, A., Fox, Z., Krawczuk, P., Murdock, D., … Hanson, H. A. (2025). Evaluating algorithmic bias on biomarker classification of breast cancer pathology reports. JAMIA Open, 8(3). https://doi.org/10.1093/jamiaopen/ooaf033
