Hepatitis B is a potentially deadly liver infection caused by the hepatitis B virus and a serious global public health problem. Substantial efforts have been made to apply machine learning to detecting the virus, but model interpretability has received limited attention in the existing literature. Model interpretability makes it easier for humans to understand and trust a machine-learning model. Therefore, in this study, we used SHapley Additive exPlanations (SHAP), a game-theoretic approach, to explain and visualize the predictions of machine learning models applied to hepatitis B diagnosis. The models were built with decision tree, logistic regression, support vector machine, random forest, adaptive boosting (AdaBoost), and extreme gradient boosting (XGBoost) algorithms, which achieved balanced accuracies of 75%, 82%, 75%, 86%, 92%, and 90%, respectively. The SHAP values showed that bilirubin is the feature contributing most strongly to a higher mortality rate, and that older patients with elevated bilirubin levels are more likely to die. The outcome of this study can aid health practitioners and health policymakers in explaining the results of machine learning models for health-related problems.
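The following is a minimal sketch, not the authors' actual pipeline, of how a gradient-boosted classifier can be paired with SHAP to rank feature contributions as described above. The dataset is synthetic and the feature names (age, bilirubin, etc.) are illustrative placeholders assumed for the example.

```python
# Sketch only: synthetic data standing in for the clinical features used in the study.
import numpy as np
import shap
import xgboost
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
feature_names = ["age", "bilirubin", "albumin", "alk_phosphate", "sgot"]  # illustrative

X = rng.normal(size=(500, len(feature_names)))
# Outcome driven mainly by age and bilirubin, mimicking the reported pattern.
y = (X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient-boosted trees, one of the model families compared in the study.
model = xgboost.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X_train, y_train)
print("balanced accuracy:", balanced_accuracy_score(y_test, model.predict(X_test)))

# TreeExplainer yields per-feature SHAP values for each prediction;
# summary_plot ranks features by mean |SHAP value| (e.g. bilirubin, age).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, feature_names=feature_names)
```

The summary plot is the kind of visualization that supports the study's interpretation: features are ordered by their average absolute SHAP value, so a dominant feature such as bilirubin would appear at the top.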
CITATION STYLE
Obaido, G., Ogbuokiri, B., Swart, T. G., Ayawei, N., Kasongo, S. M., Aruleba, K., … Esenogho, E. (2022). An Interpretable Machine Learning Approach for Hepatitis B Diagnosis. Applied Sciences (Switzerland), 12(21). https://doi.org/10.3390/app122111127