MLHCBugs: A Framework to Reproduce Real Faults in Healthcare Machine Learning Applications


Abstract

Machine Learning (ML) is the field of study that allows computers to learn from experience without being explicitly programmed [1]. ML models are currently used in many safety-critical healthcare applications [2]-[4] and survival analyses [5], so faults in this software can directly impact the quality of human life. In an ML application, the program logic is typically derived by an ML algorithm from the currently available data (i.e., training data) rather than being explicitly programmed [6]. The program's behavior therefore evolves as it is exposed to new data. Further, healthcare ML applications are inherently complex and typically constructed from several interconnected components: the data used to derive the logic, the ML framework that provides the algorithms, and the program itself, written by the programmer for a specific healthcare task [7]. A fault in any of these components may produce an observably incorrect output, or the statistical nature of these programs may mask the incorrect output altogether, making it more challenging to understand the root causes of such failures.
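To illustrate how the statistical nature of an ML program can mask a fault, the following is a minimal, hypothetical sketch (not taken from the paper): a nearest-centroid classifier on synthetic data, where a caller simulating an off-by-one slicing bug silently drops one informative feature. All names (`make_data`, `train`, `predict`) are illustrative assumptions.

```python
import random

random.seed(0)

def make_data(n=200, n_features=4):
    """Two Gaussian classes; class y has mean y on every feature."""
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        x = [random.gauss(float(y), 1.0) for _ in range(n_features)]
        data.append((x, y))
    return data

def centroid(points):
    dim = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dim)]

def train(data, use_features=4):
    # A buggy caller passing use_features=3 models an off-by-one slicing
    # fault: one informative feature is silently dropped.
    xs0 = [x[:use_features] for x, y in data if y == 0]
    xs1 = [x[:use_features] for x, y in data if y == 1]
    return centroid(xs0), centroid(xs1)

def predict(model, x):
    # zip() silently truncates x to the centroid length, so the
    # dimensional mismatch never raises an error -- the fault is masked.
    c0, c1 = model
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return 0 if d0 <= d1 else 1

def accuracy(model, data):
    return sum(predict(model, x) == y for x, y in data) / len(data)

data = make_data()
acc_ok = accuracy(train(data, use_features=4), data)   # correct program
acc_bug = accuracy(train(data, use_features=3), data)  # faulty program
print(f"correct: {acc_ok:.2f}  faulty: {acc_bug:.2f}")
```

Both runs complete without errors and report accuracies in a similar range, so the output alone does not reveal the fault — the kind of failure that makes root-cause analysis in healthcare ML applications challenging.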

Citation (APA)

Jaganathan, G. S., Kazi, N., Kahanda, I., & Kanewala, U. (2024). MLHCBugs: A Framework to Reproduce Real Faults in Healthcare Machine Learning Applications. In Proceedings - 2024 IEEE Conference on Software Testing, Verification and Validation, ICST 2024 (pp. 445–447). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICST60714.2024.00050
