An evolutionary approach to interpretable learning


Abstract

Machine Learning (ML) interpretability is a growing field of computational research whose goal is to shine light on black-box predictive models. We present an evolutionary framework that improves upon existing post-hoc interpretability metrics by quantifying feature synergy, the strength of feature interactions in high-dimensional prediction problems. In two problem instances from bioinformatics and climate science, we validate our results against existing domain research, showing that feature synergy is a valuable metric for post-hoc interpretability.
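The abstract does not spell out how feature synergy is computed, so as an illustration only, here is one common way to quantify pairwise feature interaction in a black-box model: a second-order finite difference of the prediction function, which is exactly zero when the model is additive in the two features. The function names (`pairwise_synergy`) and the choice of baseline are assumptions for this sketch, not the authors' evolutionary method.

```python
import numpy as np

def pairwise_synergy(f, X, i, j, ref=None):
    """Estimate the interaction strength between features i and j of a
    black-box prediction function f via the second-order finite difference
        f(x) - f(x | x_i=ref_i) - f(x | x_j=ref_j) + f(x | x_i=ref_i, x_j=ref_j),
    averaged in absolute value over the rows of X. This is zero whenever
    f is additive in features i and j, and positive when they interact."""
    if ref is None:
        ref = X.mean(axis=0)  # baseline point used for the differences
    Xi, Xj, Xij = X.copy(), X.copy(), X.copy()
    Xi[:, i] = ref[i]
    Xj[:, j] = ref[j]
    Xij[:, i], Xij[:, j] = ref[i], ref[j]
    delta = f(X) - f(Xi) - f(Xj) + f(Xij)
    return float(np.mean(np.abs(delta)))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))

additive = lambda Z: Z[:, 0] + 2 * Z[:, 2]        # no interaction between 0 and 1
multiplicative = lambda Z: Z[:, 0] * Z[:, 1]      # strong 0-1 interaction

print(pairwise_synergy(additive, X, 0, 1))        # ~0: additive model
print(pairwise_synergy(multiplicative, X, 0, 1))  # clearly positive
```

In practice `f` would be a trained model's prediction function; an evolutionary search, as in the paper, could then be used to find the feature subsets with the highest synergy scores rather than enumerating all pairs.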

Citation (APA)

Robertson, J., & Hu, T. (2021). An evolutionary approach to interpretable learning. In GECCO 2021 Companion - Proceedings of the 2021 Genetic and Evolutionary Computation Conference Companion (pp. 167–168). Association for Computing Machinery, Inc. https://doi.org/10.1145/3449726.3459460
