A new formulation of gradient boosting


This article is free to access.

Abstract

In the setting of regression, the standard formulation of gradient boosting generates a sequence of improvements to a constant model. In this paper, we reformulate gradient boosting so that it can generate a sequence of improvements to a nonconstant model, which may encode prior knowledge or physical insight about the data-generating process. Moreover, we introduce a simple variant of multi-target stacking that extends our approach to the setting of multi-target regression. An experiment on a real-world superconducting quantum device calibration dataset demonstrates that our approach outperforms the state-of-the-art calibration model despite receiving only a small number of training examples. Further, it significantly outperforms LightGBM, a well-known gradient boosting algorithm, as well as an entirely data-driven reimplementation of the calibration model, which suggests the viability of our approach.
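The central idea admits a short illustration. The sketch below is not the paper's implementation; it assumes squared-error loss and scikit-learn regression trees, and the names boost_from_prior and prior_predict are ours. It replaces the usual constant initialization of gradient boosting with the predictions of an arbitrary prior model, then fits each tree to the current residuals exactly as in the standard formulation.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost_from_prior(X, y, prior_predict, n_stages=100, learning_rate=0.1):
    """Gradient boosting under squared-error loss, initialized from a
    (possibly nonconstant) prior model instead of the usual constant."""
    F = prior_predict(X)                    # start from the prior model, not the target mean
    trees = []
    for _ in range(n_stages):
        residuals = y - F                   # negative gradient of squared error
        tree = DecisionTreeRegressor(max_depth=3)
        tree.fit(X, residuals)              # fit the next additive improvement
        F = F + learning_rate * tree.predict(X)
        trees.append(tree)

    def predict(X_new):
        out = prior_predict(X_new)
        for tree in trees:
            out = out + learning_rate * tree.predict(X_new)
        return out

    return predict

# Toy usage: boost on top of a hand-built "physical" model y ~ 2x.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * X[:, 0] + 0.1 * np.sin(10 * X[:, 0]) + 0.01 * rng.normal(size=200)
model = boost_from_prior(X, y, prior_predict=lambda X: 2.0 * X[:, 0])

With learning_rate and n_stages playing their usual roles, setting prior_predict to a constant function recovers standard gradient boosting, so the reformulation strictly generalizes the usual setup.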

Cite (APA)

Wozniakowski, A., Thompson, J., Gu, M., & Binder, F. C. (2021). A new formulation of gradient boosting. Machine Learning: Science and Technology, 2(4). https://doi.org/10.1088/2632-2153/ac1ee9
