Additive Models, Trees, and Related Methods

  • Hastie T
  • Tibshirani R
  • Friedman J

Abstract

In this chapter we begin our discussion of some specific methods for supervised learning. These techniques each assume a (different) structured form for the unknown regression function, and by doing so they finesse the curse of dimensionality. Of course, they pay the possible price of misspecifying the model, and so in each case there is a tradeoff that has to be made. They take off where Chapters 3–6 left off. We describe five related techniques: generalized additive models, trees, multivariate adaptive regression splines, the patient rule induction method, and hierarchical mixtures of experts.

9.1 Generalized Additive Models

Regression models play an important role in many data analyses, providing prediction and classification rules, and data analytic tools for understanding the importance of different inputs. Although attractively simple, the traditional linear model often fails in these situations: in real life, effects are often not linear. In earlier chapters we described techniques that used predefined basis functions to achieve nonlinearities. This section describes more automatic flexible statistical methods that may be used to identify and characterize nonlinear regression effects. These methods are called "generalized additive models." In the regression setting, a generalized additive model has the form

    E(Y | X_1, X_2, ..., X_p) = α + f_1(X_1) + f_2(X_2) + ... + f_p(X_p).    (9.1)
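The additive form (9.1) is typically fit by backfitting: cycle over the predictors, smoothing the partial residuals against each X_j in turn until the component functions f_j stabilize. The following is a minimal sketch of that idea, not the book's implementation; the running-mean `smooth` function is a hypothetical stand-in for the cubic smoothing splines the chapter uses, and the function names and the `frac` parameter are assumptions for illustration.

```python
import numpy as np

def smooth(x, r, frac=0.2):
    """Crude univariate smoother: running mean of r over the nearest
    neighbours in x (a stand-in for a smoothing spline or loess)."""
    k = max(2, int(frac * len(x)))
    order = np.argsort(x)
    xs, rs = x[order], r[order]
    fitted = np.array([rs[max(0, i - k // 2): i + k // 2 + 1].mean()
                       for i in range(len(xs))])
    out = np.empty_like(fitted)
    out[order] = fitted          # restore original row order
    return out

def backfit(X, y, n_iter=20):
    """Fit E(Y|X) = alpha + sum_j f_j(X_j) by the backfitting algorithm."""
    n, p = X.shape
    alpha = y.mean()             # intercept: overall mean of the response
    f = np.zeros((n, p))         # fitted values of each component f_j
    for _ in range(n_iter):
        for j in range(p):
            # partial residual: remove alpha and all components except f_j
            partial = y - alpha - f.sum(axis=1) + f[:, j]
            f[:, j] = smooth(X[:, j], partial)
            f[:, j] -= f[:, j].mean()   # identifiability: each f_j has mean 0
    return alpha, f

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(300, 2))
    y = 1 + np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, 300)
    alpha, f = backfit(X, y)
    yhat = alpha + f.sum(axis=1)
    print("residual MSE:", np.mean((y - yhat) ** 2))
```

Centering each f_j to mean zero is what makes the decomposition identifiable: otherwise constants could be shifted freely between α and the component functions.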

Citation (APA):
Hastie, T., Tibshirani, R., & Friedman, J. (2009). Additive models, trees, and related methods. In The Elements of Statistical Learning (pp. 295–336). Springer. https://doi.org/10.1007/978-0-387-84858-7_9
