Most boosting regression algorithms use the weighted average of base regressors as their final regressor. In this paper we analyze the choice of the weighted median. We propose a general boosting algorithm based on this approach, prove boosting-type convergence of the algorithm, and give clear conditions for the convergence of the robust training error. The algorithm recovers ADABOOST and ADABOOST_ϱ as special cases. For boosting confidence-rated predictions, it leads to a new approach that outputs a different decision and interprets robustness in a different manner than the approach based on the weighted average. In the general, non-binary case, we suggest practical strategies based on the analysis of the algorithm and on experiments.
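To illustrate the central idea of the abstract, the sketch below contrasts aggregating base-regressor predictions by weighted median versus weighted average. This is a generic illustration of why the median is robust to an outlying base regressor, not a reproduction of the paper's algorithm; the function name and the toy prediction/weight values are assumptions for the example.

```python
import numpy as np

def weighted_median(values, weights):
    """Smallest value v such that the total weight of values <= v
    is at least half of the total weight."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cdf = np.cumsum(w)
    idx = np.searchsorted(cdf, 0.5 * cdf[-1])
    return v[idx]

# Hypothetical predictions of three base regressors at one input point,
# one of which is a gross outlier, with illustrative confidence weights.
predictions = [2.0, 3.0, 100.0]
weights = [0.4, 0.4, 0.2]

print(weighted_median(predictions, weights))       # median ignores the outlier: 3.0
print(np.average(predictions, weights=weights))    # average is pulled away: 22.0
```

With the same weights, the weighted average is dragged toward the outlying prediction while the weighted median stays with the majority of the base regressors, which is the robustness property the paper builds on.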
Kégl, B. (2003). Robust regression by boosting the median. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 2777, pp. 258–272). Springer Verlag. https://doi.org/10.1007/978-3-540-45167-9_20