Interpretation of black-box predictive models

Abstract

Many machine learning applications involve predictive data-analytic modeling using black-box techniques. A common problem in such studies is the understanding and interpretation of estimated nonlinear, high-dimensional models. Whereas human users naturally favor simple, interpretable models, such models may not be practically feasible with modern adaptive methods such as Support Vector Machines (SVMs), Multilayer Perceptron Networks (MLPs), AdaBoost, etc. This chapter provides a brief survey of current techniques for the visualization and interpretation of SVM-based classification models, and then highlights potential problems with such methods. We argue that, under the VC-theoretical framework, model interpretation cannot be achieved via technical analysis of predictive data-analytic models alone. That is, any meaningful interpretation should incorporate application domain knowledge from outside the data analysis. We also describe a simple graphical technique for the visualization of SVM classification models.
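
The abstract does not spell out the graphical technique itself. As a hedged illustration of what such a visualization can look like, the sketch below uses one common approach: projecting training samples onto the scalar SVM decision-function axis and plotting per-class histograms, with the margin borders at -1 and +1. The synthetic dataset and all parameter choices are illustrative assumptions, not the chapter's actual method or experimental setup.

```python
# Hedged sketch: "histogram of projections" visualization for a linear SVM.
# Assumptions: synthetic data via make_classification, linear kernel, C=1.0.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic two-class data (a stand-in for a real application dataset).
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# Signed distance of each sample to the separating hyperplane,
# scaled so that the margin borders fall at -1 and +1.
proj = clf.decision_function(X)

plt.hist(proj[y == 0], bins=30, alpha=0.5, label="class 0")
plt.hist(proj[y == 1], bins=30, alpha=0.5, label="class 1")
for border in (-1.0, 0.0, 1.0):
    plt.axvline(border, linestyle="--", color="k", linewidth=1)
plt.xlabel("SVM decision function value")
plt.ylabel("number of samples")
plt.legend()
plt.title("Histograms of projections onto the SVM decision axis")
plt.show()
```

A plot like this shows, in one dimension, how well the two classes separate, how many samples fall inside the margin, and whether the classifier's confidence is well calibrated across the data, without requiring any inspection of the high-dimensional model itself.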

Citation (APA)

Cherkassky, V., & Dhar, S. (2015). Interpretation of black-box predictive models. In Measures of Complexity: Festschrift for Alexey Chervonenkis (pp. 267–286). Springer International Publishing. https://doi.org/10.1007/978-3-319-21852-6_19
