TabNet: Attentive Interpretable Tabular Learning

Abstract

We propose a novel high-performance and interpretable canonical deep tabular data learning architecture, TabNet. TabNet uses sequential attention to choose which features to reason from at each decision step, enabling interpretability and more efficient learning as the learning capacity is used for the most salient features. We demonstrate that TabNet outperforms other variants on a wide range of non-performance-saturated tabular datasets and yields interpretable feature attributions plus insights into its global behavior. Finally, we demonstrate self-supervised learning for tabular data, significantly improving performance when unlabeled data is abundant.
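The sequential attention mechanism described in the abstract selects a subset of features at each decision step, with a relaxation factor discouraging the same features from being reselected. As a rough illustration only (not the paper's implementation, which uses learned attentive transformers with sparsemax rather than softmax), a toy numpy sketch of the per-step masking idea might look like this; `step_logits`, `gamma`, and the function name are illustrative assumptions:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sequential_feature_masks(features, step_logits, gamma=1.3):
    """Toy sketch of TabNet-style sequential attentive masking.

    features:    1-D array of input features.
    step_logits: list of per-step attention logits (stand-ins for the
                 learned attentive transformer outputs in the real model).
    gamma:       relaxation factor; values near 1 force each feature to
                 be used in at most one step, larger values allow reuse.
    """
    n_features = features.shape[-1]
    prior = np.ones(n_features)          # prior scale: how "available" each feature is
    masks, masked_features = [], []
    for logits in step_logits:
        mask = softmax(logits * prior)   # attentive mask over features (paper uses sparsemax)
        prior = prior * (gamma - mask)   # down-weight features already attended to
        masks.append(mask)
        masked_features.append(features * mask)
    return masks, masked_features
```

The aggregated per-step masks are what give the feature attributions mentioned in the abstract: summing them across steps indicates how much each feature contributed overall.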

Citation (APA)

Arık, S. Ö., & Pfister, T. (2021). TabNet: Attentive Interpretable Tabular Learning. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 8A, pp. 6679–6687). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i8.16826
