Adversarial Training of Gradient-Boosted Decision Trees

Stefano Calzavara and Claudio Lucchese, Ca’ Foscari University of Venice
Gabriele Tolomei, University of Padua

Aug. 09 2019

Accepted as a short paper at CIKM ’19: the 28th ACM International Conference on Information and Knowledge Management [1].

Abstract. Adversarial training is a prominent approach to make machine learning (ML) models resilient to adversarial examples. Unfortunately, such an approach assumes the use of differentiable learning models, hence it cannot be applied to relevant ML techniques, such as ensembles of decision trees. In this paper, we generalize adversarial training to gradient-boosted decision trees (GBDTs). Our experiments show that the performance of classifiers based on existing learning techniques either sharply decreases upon attack or is unsatisfactory in the absence of attacks, while adversarial training provides a very good trade-off between resilience to attacks and accuracy in the unattacked setting.
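To give a flavour of the idea, the sketch below shows the generic data-augmentation form of adversarial training applied to a GBDT: fit the ensemble, craft evading perturbations of the training points, add them back with their original labels, and refit. This is not the algorithm proposed in the paper; the attacker model (a per-feature L-infinity budget explored by greedy random coordinate search, since gradients are unavailable for tree ensembles) and all parameters are illustrative assumptions.

# Minimal sketch of augmentation-style adversarial training for a GBDT.
# NOTE: this is NOT the algorithm of the paper; the attacker model and
# every parameter below are illustrative assumptions.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def attack(model, X, y, budget=0.1, steps=5):
    """Greedy coordinate-wise search for evading perturbations within an
    L-infinity budget (a crude surrogate for gradient-based attacks,
    which do not apply to non-differentiable tree ensembles)."""
    X_adv = X.copy()
    for i in range(X.shape[0]):
        for _ in range(steps):
            if model.predict(X_adv[i:i + 1])[0] != y[i]:
                break  # perturbation already evades the classifier
            j = np.random.randint(X.shape[1])           # pick a feature
            delta = np.random.uniform(-budget, budget)  # bounded change
            X_adv[i, j] = np.clip(X_adv[i, j] + delta,
                                  X[i, j] - budget, X[i, j] + budget)
    return X_adv

def adversarial_training(X, y, rounds=3):
    """Alternate between fitting a GBDT and augmenting the training set
    with adversarial examples labelled as their original class."""
    X_train, y_train = X.copy(), y.copy()
    model = GradientBoostingClassifier(n_estimators=100)
    for _ in range(rounds):
        model.fit(X_train, y_train)
        X_adv = attack(model, X, y)
        X_train = np.vstack([X_train, X_adv])
        y_train = np.concatenate([y_train, y])
    return model

The paper instead integrates the attacker directly into the boosting process rather than relying on such post-hoc data augmentation; see [1] for the actual method and its evaluation.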

References

[1]   Stefano Calzavara, Claudio Lucchese, and Gabriele Tolomei. Adversarial training of gradient-boosted decision trees. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management (CIKM ’19), 2019.