Adversarial training of gradient-boosted decision trees


Abstract

Adversarial training is a prominent approach for making machine learning (ML) models resilient to adversarial examples. Unfortunately, this approach assumes differentiable learning models, so it cannot be applied to important ML techniques such as ensembles of decision trees. In this paper, we generalize adversarial training to gradient-boosted decision trees (GBDTs). Our experiments show that classifiers based on existing learning techniques either lose performance sharply under attack or are unsatisfactory in the absence of attacks, whereas adversarial training provides a very good trade-off between resilience to attacks and accuracy in the unattacked setting.
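To make the idea concrete, the sketch below illustrates adversarial training by data augmentation for a GBDT: at each round, adversarial examples are crafted against the current model, added to the training set, and the model is refit. This is a generic illustration, not the algorithm from the paper; the attack here is a simple random search within an L-infinity ball, the helper names (`perturb`, `adversarial_train`) are hypothetical, and scikit-learn's `GradientBoostingClassifier` stands in for a production GBDT library.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical sketch (NOT the paper's algorithm): adversarial training via
# iterative data augmentation for a gradient-boosted decision tree ensemble.

def perturb(model, X, y, eps=0.3, n_trials=20, rng=None):
    """Random-search attack: for each point, try small L-inf-bounded
    perturbations and keep the first one that flips the predicted label."""
    rng = rng if rng is not None else np.random.default_rng(0)
    X_adv = X.copy()
    for i in range(len(X)):
        for _ in range(n_trials):
            delta = rng.uniform(-eps, eps, size=X.shape[1])
            candidate = X[i] + delta
            if model.predict(candidate[None, :])[0] != y[i]:
                X_adv[i] = candidate  # successful evasion found
                break
    return X_adv

def adversarial_train(X, y, rounds=3, eps=0.3):
    """Fit a GBDT, then repeatedly augment the training set with
    adversarial examples crafted against the current model and refit."""
    model = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, y)
    X_aug, y_aug = X, y
    for _ in range(rounds):
        X_adv = perturb(model, X, y, eps=eps)
        X_aug = np.vstack([X_aug, X_adv])        # keep clean + adversarial data
        y_aug = np.concatenate([y_aug, y])       # adversarial points keep true labels
        model = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X_aug, y_aug)
    return model

# Usage on toy, well-separated data:
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (40, 2)), rng.normal(2, 0.5, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
robust_model = adversarial_train(X, y, rounds=2)
```

Note that, unlike gradient-based adversarial training for neural networks, the attack step here cannot rely on input gradients (the tree ensemble is non-differentiable), which is precisely why a gradient-free search is used in this sketch.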

Citation (APA)

Calzavara, S., Lucchese, C., & Tolomei, G. (2019). Adversarial training of gradient-boosted decision trees. In International Conference on Information and Knowledge Management, Proceedings (pp. 2429–2432). Association for Computing Machinery. https://doi.org/10.1145/3357384.3358149
