Finding Interpretable Class-Specific Patterns through Efficient Neural Search

Abstract

Discovering patterns in data that best describe the differences between classes allows us to hypothesize and reason about class-specific mechanisms. In molecular biology, for example, this holds promise for advancing the understanding of cellular processes that differ between tissues or diseases, which could lead to novel treatments. To be useful in practice, methods that tackle the problem of finding such differential patterns have to be readily interpretable by domain experts and scalable to extremely high-dimensional data. In this work, we propose DIFFNAPS, a novel, inherently interpretable binary neural network architecture that extracts differential patterns from data. DIFFNAPS is scalable to hundreds of thousands of features and robust to noise, thus overcoming the limitations of current state-of-the-art methods in large-scale applications such as in biology. We show on synthetic and real-world data, including three biological applications, that, unlike its competitors, DIFFNAPS consistently yields accurate, succinct, and interpretable class descriptions.
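The abstract only describes the approach at a high level; the architectural details are in the paper itself. As a rough, hypothetical illustration of the general idea of a binarized, pattern-extracting neural network (not the authors' actual DIFFNAPS implementation), consider the following minimal PyTorch sketch. All names here, such as BinarizedPatternAE, are made up for this example, and the class-specific component of DIFFNAPS is deliberately omitted.

import torch
import torch.nn as nn


class Binarize(torch.autograd.Function):
    """Threshold weights to {0, 1}; pass gradients straight through."""
    @staticmethod
    def forward(ctx, w):
        return (w > 0.5).float()

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out  # straight-through estimator


class BinarizedPatternAE(nn.Module):
    """Toy autoencoder whose binarized encoder rows can be read as feature patterns."""
    def __init__(self, n_features: int, n_patterns: int):
        super().__init__()
        self.enc = nn.Parameter(torch.rand(n_patterns, n_features))
        self.dec = nn.Parameter(torch.rand(n_features, n_patterns))

    def forward(self, x):
        w_enc = Binarize.apply(self.enc)         # (patterns, features) in {0, 1}
        w_dec = Binarize.apply(self.dec)         # (features, patterns) in {0, 1}
        hidden = (x @ w_enc.t()).clamp(max=1.0)  # which patterns fire per sample
        return (hidden @ w_dec.t()).clamp(max=1.0)

    def patterns(self):
        """Each hidden neuron's pattern = the set of feature indices it selects."""
        w = (self.enc.detach() > 0.5)
        return [torch.nonzero(row).flatten().tolist() for row in w]


if __name__ == "__main__":
    torch.manual_seed(0)
    x = (torch.rand(256, 50) > 0.8).float()      # toy binary data
    model = BinarizedPatternAE(n_features=50, n_patterns=10)
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.binary_cross_entropy(model(x), x)
        loss.backward()
        opt.step()
    print(model.patterns())

In such a sketch, each hidden neuron's binarized encoder row selects a subset of features, which can be read off directly as an interpretable pattern; DIFFNAPS additionally ties patterns to classes to obtain differential, class-specific descriptions, which this toy example does not attempt.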

Citation (APA)

Walter, N. P., Fischer, J., & Vreeken, J. (2024). Finding Interpretable Class-Specific Patterns through Efficient Neural Search. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 9071–9079). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i8.28757
