Hierarchical generalized linear models allow non-Normal data to be modelled in situations where there are several sources of error variation. They extend the familiar generalized linear models to include additional random terms in the linear predictor. However, they constrain these terms neither to follow a Normal distribution nor to have an identity link, as is the case in the more usual generalized linear mixed model. They thus provide a much richer set of models that may seem more intuitively appealing. Another extension to generalized linear models allows nonlinear parameters to be included in the linear predictor. The fitting algorithm for these generalized nonlinear models operates by performing a nested optimization, in which a generalized linear model is fitted for each evaluation in an optimization over the nonlinear parameters. The optimization search thus operates only over the (usually relatively few) nonlinear parameters, and this should be much more efficient than a global optimization over the whole parameter space. This paper reviews the generalized nonlinear model algorithm, and explains how similar principles can be used to include nonlinear fixed parameters in the mean model of a hierarchical generalized linear model, thus defining a hierarchical generalized nonlinear model.
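The nested optimization described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a hypothetical Poisson model with log link whose linear predictor contains one nonlinear parameter, log mu = b0 + b1*exp(-rho*x). For each trial value of rho, the inner step fits an ordinary GLM by iteratively reweighted least squares (IRLS) for the linear coefficients (b0, b1); the outer step searches over rho alone, minimizing the resulting deviance.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def poisson_irls(X, y, n_iter=25, tol=1e-8):
    """Inner step: fit a Poisson GLM (log link) by IRLS; return (beta, deviance)."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean() + 1e-9)          # start from the intercept-only fit
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        W = mu                                  # Poisson working weights
        z = eta + (y - mu) / mu                 # working response
        beta_new = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    mu = np.exp(X @ beta)
    # Poisson deviance, with the y*log(y/mu) term taken as 0 when y == 0
    dev = 2.0 * np.sum(np.where(y > 0, y * np.log(y / mu), 0.0) - (y - mu))
    return beta, dev

def profile_deviance(rho, x, y):
    """Deviance profiled over the linear coefficients, for a given nonlinear rho."""
    X = np.column_stack([np.ones_like(x), np.exp(-rho * x)])
    _, dev = poisson_irls(X, y)
    return dev

# Synthetic data with known nonlinear parameter rho = 0.8 (illustrative values)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 200)
mu_true = np.exp(1.0 + 2.0 * np.exp(-0.8 * x))
y = rng.poisson(mu_true).astype(float)

# Outer step: one-dimensional search over the single nonlinear parameter
res = minimize_scalar(profile_deviance, bounds=(0.05, 3.0), args=(x, y),
                      method="bounded")
rho_hat = res.x
```

Because the outer search runs over only the nonlinear parameter(s), each evaluation is cheap (one GLM fit), which is the efficiency advantage the abstract notes over a global search across all parameters at once.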
Payne, R. W. (2014). Hierarchical Generalized Nonlinear Models (pp. 111–124). https://doi.org/10.1007/978-3-319-04579-5_9