Abstract
Machine Learning (ML) interpretability is a growing field of computational research whose goal is to shine a light on black-box predictive models. We present an evolutionary framework that improves upon existing post-hoc interpretability metrics by quantifying feature synergy, i.e., the strength of feature interactions in high-dimensional prediction problems. On two problem instances from bioinformatics and climate science, we validate our results against existing domain research, showing that feature synergy is a valuable metric for post-hoc interpretability.
Citation
Robertson, J., & Hu, T. (2021). An evolutionary approach to interpretable learning. In GECCO 2021 Companion - Proceedings of the 2021 Genetic and Evolutionary Computation Conference Companion (pp. 167–168). Association for Computing Machinery, Inc. https://doi.org/10.1145/3449726.3459460