Deep Learning (DL) based classification algorithms have been shown to achieve top results in clinical diagnosis, notably on lung cancer datasets. However, the complexity and opacity of these models, together with the still scarce training data, call for explainable modeling methods that enable interpretation of the results. To this end, in this paper we propose a novel interpretability approach and demonstrate how it can be used on a DL lung cancer malignancy classifier to assess its stability and congruence, even when fed a small number of image samples. Additionally, by disclosing the regions of the medical images most relevant to the resulting classification, the approach provides important insights into the corresponding clinical meaning captured by the algorithm. Explanations produced by ten different models for the same test sample are compared; they attest to the stability of the approach and show that the models consistently focus on the same image regions.
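To make the stability check concrete, below is a minimal sketch (not the authors' pipeline) of how explanations from several independently trained models could be compared on a single test sample. It assumes hypothetical pre-trained PyTorch models listed in `model_paths` and a preprocessed image tensor `x` of shape (1, 1, H, W); plain input-gradient saliency stands in for whichever explainability method the paper actually employs, and the overlap metric (IoU of thresholded heatmaps) is likewise an illustrative choice.

```python
import torch

def saliency_map(model, x):
    """Absolute input-gradient attribution for the model's predicted class."""
    model.eval()
    x = x.clone().requires_grad_(True)
    logits = model(x)
    cls = logits.argmax(dim=1).item()   # explain the predicted class
    logits[0, cls].backward()
    return x.grad.abs().squeeze()       # (H, W) relevance map

def top_region_iou(map_a, map_b, quantile=0.9):
    """IoU of the most relevant pixels (above the given quantile)."""
    a = map_a > torch.quantile(map_a, quantile)
    b = map_b > torch.quantile(map_b, quantile)
    inter = (a & b).sum().float()
    union = (a | b).sum().float()
    return (inter / union).item()

# Hypothetical usage: explain the same sample with ten trained models,
# then average the pairwise overlap of their high-relevance regions.
# models = [torch.load(p) for p in model_paths]   # ten trained models
# maps = [saliency_map(m, x) for m in models]
# ious = [top_region_iou(maps[i], maps[j])
#         for i in range(len(maps)) for j in range(i + 1, len(maps))]
# print(f"mean top-region IoU: {sum(ious) / len(ious):.3f}")
```

A high mean overlap across models would indicate that the explanations, and hence the regions driving the classification, are stable rather than artifacts of any single training run.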
Citation:
Malafaia, M., Silva, F., Neves, I., Pereira, T., & Oliveira, H. P. (2022). Robustness Analysis of Deep Learning-Based Lung Cancer Classification Using Explainable Methods. IEEE Access, 10, 112731–112741. https://doi.org/10.1109/ACCESS.2022.3214824