Machine-learned classifiers are important components of many data mining and knowledge discovery systems. In several application domains, an explanation of the classifier's reasoning is critical for the classifier's acceptance by the end-user. We describe a framework, ExplainD, for explaining decisions made by classifiers that use additive evidence. ExplainD applies to many widely used classifiers, including linear discriminants and many additive models. We demonstrate our ExplainD framework using implementations of naïve Bayes, linear support vector machine, and logistic regression classifiers on example applications. ExplainD uses a simple graphical explanation of the classification process to provide visualizations of the classifier decisions, visualization of the evidence for those decisions, the capability to speculate on the effect of changes to the data, and the capability, wherever possible, to drill down and audit the source of the evidence. We demonstrate the effectiveness of ExplainD in the context of a deployed web-based system (Proteome Analyst) and using a downloadable Python-based implementation.
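The abstract's central idea is that classifiers built on additive evidence can be explained by decomposing the decision score into per-feature contributions. The following is a minimal sketch of that decomposition for a logistic regression classifier; it is an illustration under assumed weights, not the ExplainD implementation, and the feature names and values are hypothetical.

```python
# Sketch: decomposing an additive-evidence classifier (here, logistic
# regression) into per-feature contributions that can be visualized as
# evidence for or against the predicted class. Weights are hypothetical.
import math

weights = {"feature_a": 1.2, "feature_b": -0.7, "feature_c": 0.3}
bias = -0.1

def explain(instance):
    """Return each feature's additive contribution, the total score,
    and the predicted probability of the positive class."""
    contributions = {name: weights[name] * instance[name] for name in weights}
    score = bias + sum(contributions.values())
    prob = 1.0 / (1.0 + math.exp(-score))  # logistic link
    return contributions, score, prob

x = {"feature_a": 2.0, "feature_b": 1.0, "feature_c": 4.0}
contribs, score, prob = explain(x)
# Rank features by the magnitude of their evidence, as an explanation
# interface might display them.
for name, c in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")
print(f"total score {score:+.2f} -> P(class=1) = {prob:.3f}")
```

Because the score is a plain sum, the same decomposition also supports the speculation capability the abstract mentions: changing one feature value changes exactly one contribution term, so the effect on the decision is directly visible.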