The Generalized Linear Regression Model

Abstract

Consider the linear regression model that represents the statistical dependence of the study variable $y$ on $K$ explanatory variables $X_1, \ldots, X_K$ and a random error $\epsilon$,

$$y = X\beta + \epsilon \qquad (4.1)$$

with the following assumptions: (i) $E(\epsilon) = 0$; (ii) $E(\epsilon\epsilon') = \sigma^2 W$, where $W$ is positive definite; (iii) $X$ is a nonstochastic matrix; and (iv) $\operatorname{rank}(X) = K$. This is termed the generalized linear regression model or generalized linear model. Note that in the classical regression model $E(\epsilon\epsilon') = \sigma^2 I$. If $E(\epsilon\epsilon') = \sigma^2 W$ with $W$ a known positive definite matrix, the generalized linear model can be reduced to the classical model: because $W$ is positive definite, it has a positive definite inverse $W^{-1}$. By standard results (cf. Theorem A.41), product representations exist for $W$ and $W^{-1}$, namely $W = MM'$ and $W^{-1} = N'N$, where $M$ and $N$ are square regular matrices. Thus $N'N = (MM')^{-1}$, which implies $NMM'N' = NWN' = I$. If the generalized linear model $y = X\beta + \epsilon$ is transformed by multiplication from the left with $N$, the transformed model $Ny = NX\beta + N\epsilon$ satisfies $E(N\epsilon) = 0$ and $E(N\epsilon\epsilon'N') = \sigma^2 NWN' = \sigma^2 I$, i.e., the assumptions of the classical model.
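
A minimal numerical sketch of this reduction, assuming a known diagonal $W$ and simulated data (none of which comes from the chapter; all variable names are illustrative), might look as follows: factor $W^{-1} = N'N$ via a Cholesky decomposition and run ordinary least squares on the transformed data $(Ny, NX)$, which reproduces the generalized least squares estimator.

```python
# Sketch of reducing the generalized linear model to the classical one
# by whitening with N, where W^{-1} = N'N. Data and W are simulated.
import numpy as np

rng = np.random.default_rng(0)
n, K = 50, 3

X = rng.normal(size=(n, K))                      # design matrix, rank K
beta_true = np.array([1.0, -2.0, 0.5])

# Known positive definite W (here a heteroscedastic diagonal example)
W = np.diag(np.linspace(0.5, 4.0, n))
eps = rng.multivariate_normal(np.zeros(n), W)    # E(eps) = 0, Cov(eps) = W
y = X @ beta_true + eps

# Factor W^{-1} = N'N: if L is the lower Cholesky factor of W^{-1}
# (L L' = W^{-1}), then N = L' satisfies N'N = W^{-1} and N W N' = I.
W_inv = np.linalg.inv(W)
N = np.linalg.cholesky(W_inv).T

# Transformed (classical) model: Ny = NX beta + N eps, with Cov(N eps) = I
y_t, X_t = N @ y, N @ X
beta_via_transform, *_ = np.linalg.lstsq(X_t, y_t, rcond=None)

# Closed-form generalized least squares estimator for comparison
beta_gls = np.linalg.solve(X.T @ W_inv @ X, X.T @ W_inv @ y)

print(beta_via_transform)   # the two estimates agree up to rounding error
print(beta_gls)
```

The Cholesky factor is only one convenient choice of $N$; any regular matrix with $N'N = W^{-1}$ yields the same estimator.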

Citation (APA)

The Generalized Linear Regression Model. (2007). In Linear Models and Generalizations (pp. 143–221). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-540-74227-2_4
