15  Classic least squares

References: Angela Montanari, Chapter 3; Gardini A.

15.1 Assumptions

Let’s start from the classic assumptions for a linear model. The working hypotheses are:

  1. The linear model approximates the conditional expectation, i.e. $E\{y_i \mid x_1,\dots,x_n\} = E\{y_i \mid X\} = x_i'b$ for $i = 1,\dots,n$.
  2. The conditional variance of the response variable $y$ is constant, i.e. $V\{y_i \mid x_1,\dots,x_n\} = V\{y_i \mid X\} = \sigma_e^2$ with $0 < \sigma_e^2 < \infty$.
  3. The conditional covariance of the response variable $y$ is zero, i.e. $Cv\{y_i, y_j \mid x_1,\dots,x_n\} = Cv\{y_i, y_j \mid X\} = 0$ with $i \neq j$ and $i,j \in \{1,\dots,n\}$.

Equivalently, the formulation in terms of the stochastic component reads:

  1. $y_i = x_i'b + e_i$ for $i = 1,\dots,n$.
  2. The residuals and the regressors are uncorrelated, i.e. $E\{e_i \mid x_1,\dots,x_n\} = E\{e_i \mid X\} = 0$.
  3. The conditional variance of the residuals is constant, i.e. $V\{e_i \mid x_1,\dots,x_n\} = V\{e_i \mid X\} = \sigma_e^2$ with $0 < \sigma_e^2 < \infty$.
  4. The conditional covariance of the residuals is zero, i.e. $Cv\{e_i, e_j \mid x_1,\dots,x_n\} = Cv\{e_i, e_j \mid X\} = 0$ with $i \neq j$ and $i,j \in \{1,\dots,n\}$.

Hence, in this setup the error terms $e$ are assumed to be independent and identically distributed with common variance $\sigma_e^2$. Thus, the general expression of the covariance matrix reduces to $\Sigma = \sigma_e^2 I_n$.
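As a quick illustration of this setup, here is a minimal Python sketch (assuming NumPy is available; the dimensions, coefficients and seed are chosen arbitrarily for the example) that simulates a sample satisfying the classical assumptions: a fixed design matrix $X$ and uncorrelated, homoskedastic errors with variance $\sigma_e^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 200, 3                         # observations and regressors
sigma_e = 1.5                         # common error standard deviation
b = np.array([2.0, -1.0, 0.5, 3.0])   # intercept + k slopes (true parameters)

# Design matrix with a leading column of ones for the intercept
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])

# Errors: uncorrelated, zero mean, constant variance sigma_e^2
e = rng.normal(scale=sigma_e, size=n)

# Linear model: y_i = x_i' b + e_i
y = X @ b + e

# Empirical check that the error variance is (approximately) sigma_e^2
print("sample error variance:", e.var(ddof=1), "vs sigma_e^2 =", sigma_e**2)
```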

15.2 Estimator of b

Proposition 15.1 (Ordinary Least Squares (OLS) estimator)
The ordinary least squares (OLS) estimator minimizes the sum of the squared residuals
$$Q_{OLS}(b) = e(b)'e(b), \tag{15.1}$$
and returns an estimate of the true parameter $b$. Formally, the OLS estimator is the solution of the following minimization problem:
$$b_{OLS} = \arg\min_{b \in \Theta_b}\ Q_{OLS}(b). \tag{15.2}$$
Notably, if $X'X$ is non-singular one obtains an analytic expression,
$$b_{OLS} = (X'X)^{-1}X'y. \tag{15.3}$$
Equivalently, it is possible to express (15.3) in terms of the covariance matrices of $X$ and $y$, i.e.
$$b_{OLS} = Cv\{X\}^{-1}Cv\{X,y\} = \frac{Cv\{X,y\}}{V\{X\}}, \tag{15.4}$$
where the last expression applies in the single-regressor case.

Singularity of X

Note that the solution (15.3) is available if and only if $X'X$ is non-singular. Hence, the columns of $X$ must not be linearly dependent: if one of the $k$ variables can be written as a linear combination of the others, then the determinant of the matrix $X'X$ is zero and the inversion is not possible. Moreover, for $\mathrm{rank}(X'X) = k$ it is necessary that the number of observations be greater than or equal to the number of regressors, i.e. $n \geq k$.

Proof. Developing the product of the residuals in (15.1):
$$Q_{OLS}(b) = e(b)'e(b) = (y - Xb)'(y - Xb) = y'y - b'X'y - y'Xb + b'X'Xb = y'y - 2b'X'y + b'X'Xb. \tag{15.5}$$
To find the minimum, let’s compute the derivative of $Q_{OLS}$ with respect to $b$, set it equal to zero and solve for $b = b_{OLS}$:
$$\frac{dQ_{OLS}(b)}{db} = -2X'y + 2X'Xb = 0 \;\Longrightarrow\; X'y = X'Xb \;\Longrightarrow\; b_{OLS} = (X'X)^{-1}X'y.$$
To establish whether the above solution also corresponds to a global minimum, one must check the sign of the second derivative,
$$\frac{d^2 Q_{OLS}(b)}{db\,db'} = 2X'X > 0,$$
which, being positive definite, denotes a global minimum. An alternative derivation of this estimator is obtained by substituting
$$\frac{1}{n}\sum_{i=1}^n x_i x_i' = V\{X\}, \qquad \frac{1}{n}\sum_{i=1}^n x_i y_i = Cv\{X,y\}$$
into the normal equations $X'y = X'Xb$, which yields (15.4).

Intercept estimate

If the data matrix $X$ includes a column of ones, then the intercept parameter is obtained directly from (15.3) or (15.4). However, if it is not included, the intercept is computed as
$$\alpha_{OLS} = E\{y\} - b_{OLS}'\,E\{X\}.$$
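The closed form (15.3) and the intercept formula above can be checked numerically. The following sketch (NumPy assumed; the data are simulated only for illustration) computes $b_{OLS}$ by solving the normal equations, compares it with a least-squares solver, and recovers the intercept from centred data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 2
X_raw = rng.normal(size=(n, k))                  # regressors without the constant
y = 1.0 + X_raw @ np.array([2.0, -0.5]) + rng.normal(scale=0.3, size=n)

# (a) With an explicit column of ones the intercept is part of b_OLS = (X'X)^{-1} X'y
X = np.column_stack([np.ones(n), X_raw])
b_ols = np.linalg.solve(X.T @ X, X.T @ y)        # solve the normal equations X'X b = X'y
b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)  # numerically preferable equivalent
print(b_ols, b_lstsq)

# (b) Without the constant column: slopes from centred data, then
#     alpha_OLS = mean(y) - b_OLS' mean(X)
Xc = X_raw - X_raw.mean(axis=0)
yc = y - y.mean()
slopes = np.linalg.solve(Xc.T @ Xc, Xc.T @ yc)
alpha = y.mean() - slopes @ X_raw.mean(axis=0)
print(alpha, slopes)
```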

15.2.1 Projection matrices

Substituting the OLS solution (15.3) into the fitted values $\hat y = Xb_{OLS}$, we obtain the matrix $H$, which projects the vector $y$ onto the subspace of $\mathbb{R}^n$ generated by the columns of the regressor matrix $X$:
$$H = X(X'X)^{-1}X'. \tag{15.6}$$

The projection matrix H satisfies the following three properties, i.e. 

  1. $H$ is an $n \times n$ symmetric matrix.
  2. $H$ is idempotent, i.e. $HH = H$.
  3. $HX = X$.

Substituting the OLS solution (15.3) into the residuals $\hat e = y - Xb_{OLS}$, we obtain another projection matrix $M$, which projects the vector $y$ onto the subspace orthogonal to the one generated by the regressor matrix $X$:
$$M = I_n - H. \tag{15.7}$$

The projection matrix M satisfies the following 3 properties, i.e. 

  1. $M$ is an $n \times n$ symmetric matrix.
  2. $M$ is idempotent, i.e. $MM = M$.
  3. $MX = 0$.

By definition $M$ and $H$ are orthogonal, i.e. $HM = 0$. Hence, the fitted values, defined as $\hat y = Hy$, are the projection of the observed values onto the subspace generated by $X$. Symmetrically, the fitted residuals $\hat e = My$ are the projection of the observed values onto the subspace orthogonal to the one generated by $X$.

Proof. Let’s consider property 2 of $H$:
$$HH = \big(X(X'X)^{-1}X'\big)\big(X(X'X)^{-1}X'\big) = X(X'X)^{-1}(X'X)(X'X)^{-1}X' = X(X'X)^{-1}X' = H.$$
Let’s consider property 3 of $H$:
$$HX = \big(X(X'X)^{-1}X'\big)X = X(X'X)^{-1}(X'X) = X.$$
Let’s consider property 2 of $M$:
$$MM = (I_n - H)(I_n - H) = I_n - H - H + HH = I_n - H = M.$$
Let’s consider property 3 of $M$:
$$MX = (I_n - H)X = \big(I_n - X(X'X)^{-1}X'\big)X = X - X = 0.$$
Finally, let’s prove the orthogonality between $H$ and $M$:
$$HM = H(I_n - H) = H - HH = H - H = 0.$$
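These properties are easy to verify numerically. A small sketch, assuming NumPy, with simulated data and tolerances handled by `np.allclose`:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = rng.normal(size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix H = X (X'X)^{-1} X'
M = np.eye(n) - H                      # residual-maker matrix M = I_n - H

checks = {
    "H symmetric":  np.allclose(H, H.T),
    "H idempotent": np.allclose(H @ H, H),
    "H X = X":      np.allclose(H @ X, X),
    "M symmetric":  np.allclose(M, M.T),
    "M idempotent": np.allclose(M @ M, M),
    "M X = 0":      np.allclose(M @ X, 0),
    "H M = 0":      np.allclose(H @ M, 0),
}
print(checks)

# Fitted values and residuals as projections of y
y_hat = H @ y
e_hat = M @ y
print(np.allclose(y_hat + e_hat, y))   # the two projections recompose y
```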

15.3 Properties of the OLS estimator

Theorem 15.1 (Gauss-Markov theorem)
Under the Gauss-Markov hypotheses the Ordinary Least Squares (OLS) estimator is BLUE (Best Linear Unbiased Estimator), where “best” stands for the estimator with minimum variance in the class of linear unbiased estimators of the unknown true population parameter $b$. More precisely, the Gauss-Markov hypotheses are:

  1. $y = Xb + e$.
  2. $E\{e\} = 0$.
  3. $E\{ee'\} = \sigma_e^2 I_n$, i.e. homoskedasticity.
  4. $X$ is non-stochastic and independent of the errors for all $n$.

Proposition 15.2 (Properties of the OLS estimator)

1. Unbiased: $b_{OLS}$ is correct and its conditional expectation equals the true parameter in the population, i.e.
$$E\{b_{OLS} \mid X\} = b. \tag{15.8}$$
2. Linear, in the sense that it can be written as a linear function of $y$, $b_{OLS} = A_X y$, where $A_X$ does not depend on $y$:
$$b_{OLS} = A_X y, \qquad A_X = (X'X)^{-1}X'. \tag{15.9}$$
3. Under the Gauss-Markov hypotheses (Theorem 15.1), $b_{OLS}$ is the estimator with minimum variance in the class of linear unbiased estimators of $b$, and its variance reads
$$V\{b_{OLS} \mid X\} = \sigma_e^2 (X'X)^{-1}. \tag{15.10}$$

Proof.

  1. The OLS estimator is correct: its expected value, computed from (15.3) after substituting $y = Xb + e$, equals the true parameter in the population:
$$E\{b_{OLS}\mid X\} = E\{(X'X)^{-1}X'y \mid X\} = E\{(X'X)^{-1}X'(Xb+e)\mid X\} = (X'X)^{-1}X'Xb + (X'X)^{-1}X'E\{e\mid X\} = b.$$

  2. In general, applying the properties of the variance operator, the variance of $b_{OLS}$ is computed as
$$V\{b_{OLS}\mid X\} = V\{(X'X)^{-1}X'y\mid X\} = V\{(X'X)^{-1}X'(Xb+e)\mid X\} = V\{b + (X'X)^{-1}X'e\mid X\} = V\{(X'X)^{-1}X'e\mid X\}.$$
Then, since $X$ is non-stochastic, one can bring it outside the variance, obtaining
$$V\{b_{OLS}\mid X\} = (X'X)^{-1}X'\,V\{e\mid X\}\,X(X'X)^{-1} = (X'X)^{-1}X'\,E\{ee'\mid X\}\,X(X'X)^{-1}. \tag{15.11}$$
Under the Gauss-Markov hypotheses (Theorem 15.1) the conditional variance is $V\{e\mid X\} = \sigma_e^2 I_n$, and therefore (15.11) reduces to
$$V\{b_{OLS}\mid X\} = \sigma_e^2 (X'X)^{-1}X'X(X'X)^{-1} = \sigma_e^2 (X'X)^{-1}.$$
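A Monte Carlo sketch of unbiasedness and of the variance formula (15.10), assuming NumPy and a fixed design held constant across replications (all names and settings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 60, 2
sigma_e = 0.8
b = np.array([1.0, 2.0, -1.5])                              # intercept + slopes
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # fixed (non-stochastic) design

XtX_inv = np.linalg.inv(X.T @ X)
V_theory = sigma_e**2 * XtX_inv                             # sigma_e^2 (X'X)^{-1}

reps = 5000
estimates = np.empty((reps, k + 1))
for r in range(reps):
    y = X @ b + rng.normal(scale=sigma_e, size=n)           # new error draw, same X
    estimates[r] = np.linalg.solve(X.T @ X, X.T @ y)

print("mean of b_OLS over replications:", estimates.mean(axis=0))  # approx b (unbiasedness)
print("empirical covariance:\n", np.cov(estimates.T))
print("theoretical covariance:\n", V_theory)
```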

15.4 Variance decomposition

In a linear model, the deviance (or total variance) of the dependent variable y can be decomposed into the sum of the regression variance and the dispersion variance. This decomposition helps us understand how much of the total variability in the data is explained by the model and how much is due to unexplained variability (residuals).

  • Total Deviance ($Dev\{y\}$): represents the total variability of the dependent variable $y$. It is calculated as the sum of the squared differences of the $y_i$ from their mean $\bar y$.

  • Regression Deviance ($Dev_{Reg}\{y\}$): represents the portion of variability that is explained by the regression model. It is computed as the sum of the squared differences between the fitted values $\hat y_i$ and $\bar y$.

  • Dispersion Deviance ($Dev_{Disp}\{y\}$): represents the portion of variability that is not explained by the model. It is computed as the sum of the squared differences between the observed values $y_i$ and the fitted values $\hat y_i$.

Hence, the total deviance of $y$ can be decomposed as follows:
$$Dev\{y\} = Dev_{Reg}\{y\} + Dev_{Disp}\{y\}$$
$$\sum_{i=1}^n (y_i - \bar y)^2 = \sum_{i=1}^n (\hat y_i - \bar y)^2 + \sum_{i=1}^n (\hat y_i - y_i)^2$$
$$y'y - n\bar y^2 = b'X'Xb - n\bar y^2 + e'e. \tag{15.12}$$

Proof. Let’s prove the expression for the regression deviance $Dev_{Reg}\{y\}$:
$$Dev_{Reg}\{y\} = Dev\{y\} - Dev_{Disp}\{y\} = y'y - n\bar y^2 - e'e = y'y - n\bar y^2 - (y - Xb)'(y - Xb) = y'y - n\bar y^2 - y'y + y'Xb + b'X'y - b'X'Xb = 2b'X'y - n\bar y^2 - b'X'Xb.$$
Using the normal equations $X'y = X'Xb$, the term $2b'X'y$ equals $2b'X'Xb$, so that
$$Dev_{Reg}\{y\} = b'X'Xb - n\bar y^2.$$
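A numerical check of the decomposition (15.12), both in terms of sums of squares and in the matrix form $b'X'Xb - n\bar y^2$ (NumPy assumed, simulated data):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 80, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = X @ np.array([1.0, 0.5, -2.0, 1.5]) + rng.normal(scale=1.0, size=n)

b_ols = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ b_ols
e_hat = y - y_hat
y_bar = y.mean()

dev_tot  = np.sum((y - y_bar) ** 2)
dev_reg  = np.sum((y_hat - y_bar) ** 2)
dev_disp = np.sum(e_hat ** 2)

print(np.isclose(dev_tot, dev_reg + dev_disp))                       # Dev = DevReg + DevDisp
print(np.isclose(dev_reg, b_ols @ X.T @ X @ b_ols - n * y_bar**2))   # matrix form b'X'Xb - n*ybar^2
```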

15.4.1 Estimator of $\sigma_e^2$

The OLS estimator does not depend on the variance of the residuals $\sigma_e^2$, and it is not possible to obtain both estimators in a single step. The only available information about the errors is the vector of fitted residuals $\hat e = \{\hat e_1, \hat e_2, \dots, \hat e_n\}$. Hence, an unbiased estimator of the population variance $\sigma_e^2$ is defined as
$$\hat s_e^2 = \frac{\hat e'\hat e}{n - k - 1}.$$
In general the regression deviance overestimates $k\sigma_e^2$, i.e.
$$E\{Dev_{Reg}\{y\}\} = k\sigma_e^2 + g(b,X), \qquad g(b,X) \geq 0.$$
Only in the special case where $b_1 = b_2 = \dots = b_k = 0$ in the population is $g(b,X) = 0$, so that the regression variance $\hat s_r^2 = Dev_{Reg}\{y\}/k$ also produces a correct estimate of $\sigma_e^2$.

Proof. By definition, the residuals can be computed by pre-multiplying $y$ by the matrix $M$:
$$\hat e = y - \hat y = y - Xb_{OLS} = y - X(X'X)^{-1}X'y = (I_n - H)y = My.$$
Substituting $y = Xb + e$ and using $MX = 0$, one obtains
$$\hat e = M(Xb + e) = MXb + Me = Me.$$
Then, $M$ being symmetric and idempotent,
$$\hat e'\hat e = (Me)'(Me) = e'M'Me = e'Me.$$
Thus, since $e'Me$ is a scalar, the expected value of the deviance of dispersion reads
$$E\{\hat e'\hat e\} = E\{e'Me\} = E\{\mathrm{trace}(e'Me)\} = E\{\mathrm{trace}(Mee')\} = \mathrm{trace}\big(M\,E\{ee'\}\big) = \mathrm{trace}\big(M\,\sigma_e^2 I_n\big) = \sigma_e^2\,\mathrm{trace}(M),$$
where the trace of the matrix $M$ is
$$\mathrm{trace}(M) = \mathrm{trace}(I_n - H) = \mathrm{trace}(I_n) - \mathrm{trace}\big(X(X'X)^{-1}X'\big) = \mathrm{trace}(I_n) - \mathrm{trace}\big((X'X)^{-1}X'X\big) = \mathrm{trace}(I_n) - \mathrm{trace}(I_{k+1}) = n - k - 1.$$
Hence the expectation of the deviance of dispersion is
$$E\{Dev_{Disp}\{y\}\} = \sigma_e^2 (n - k - 1),$$
so that $\hat s_e^2 = \dfrac{\hat e'\hat e}{n - k - 1}$ is an unbiased estimator of $\sigma_e^2$.
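A short simulation consistent with this result, averaging $\hat e'\hat e/(n-k-1)$ over many replications to approximate $\sigma_e^2$ (NumPy assumed; settings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 40, 3
sigma_e = 2.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])   # n x (k+1), intercept included
b = np.array([0.5, 1.0, -1.0, 2.0])

reps = 20000
ssr = np.empty(reps)
for r in range(reps):
    y = X @ b + rng.normal(scale=sigma_e, size=n)
    e_hat = y - X @ np.linalg.solve(X.T @ X, X.T @ y)
    ssr[r] = e_hat @ e_hat                                   # deviance of dispersion

print("mean of e'e / (n-k-1):", ssr.mean() / (n - k - 1))    # approx sigma_e^2
print("sigma_e^2:", sigma_e**2)
```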

The decomposition of the deviance of $y$ also holds with respect to the corresponding degrees of freedom, as summarized in Table 15.1.

Table 15.1: Deviance and variance decomposition in a multivariate linear model

| Deviance | Degrees of freedom | Variance |
|---|---|---|
| $Dev\{y\} = \sum_{i=1}^n (y_i - \bar y)^2$ | $n-1$ | $\hat s_y^2 = \dfrac{Dev\{y\}}{n-1}$ |
| $Dev_{Reg}\{y\} = \sum_{i=1}^n (\hat y_i - \bar y)^2$ | $k$ | $\hat s_r^2 = \dfrac{Dev_{Reg}\{y\}}{k}$ |
| $Dev_{Disp}\{y\} = \sum_{i=1}^n (\hat y_i - y_i)^2$ | $n-k-1$ | $\hat s_e^2 = \dfrac{Dev_{Disp}\{y\}}{n-k-1}$ |

15.5 $R^2$

The R2 statistic, also known as the coefficient of determination, is a measure used to assess the goodness of fit of a regression model. In a multivariate context, it evaluates how well the independent variables explain the variability of the dependent variable.

Definition 15.1 (Multivariate $R^2$)
The $R^2$ represents the proportion of the variation in the dependent variable that is explained or predicted by the independent variables. Formally, it is defined as the ratio of the deviance explained by the model ($Dev_{Reg}\{y\}$) to the total deviance ($Dev\{y\}$). It can also be expressed as one minus the ratio of the residual deviance ($Dev_{Disp}\{y\}$) to the total deviance, i.e.
$$R^2 = \frac{Dev_{Reg}\{y\}}{Dev\{y\}} = 1 - \frac{Dev_{Disp}\{y\}}{Dev\{y\}}. \tag{15.13}$$
Using the variance decomposition (15.12), it is possible to write a multivariate version of the $R^2$ as
$$R^2 = \frac{b'X'Xb - n\bar y^2}{y'y - n\bar y^2} = 1 - \frac{e'e}{y'y - n\bar y^2}.$$

The numerator represents the variability explained by the regression model, while the denominator is the total variability of the dependent variable. The term $e'e$ in the second expression is the residual deviance, i.e. the variability not explained by the model. A value of $R^2$ close to 1 denotes that a large proportion of the variability of the dependent variable is explained by the regression model, while a value close to 0 indicates that the model explains very little of it.
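A minimal sketch computing $R^2$ both as $1 - Dev_{Disp}\{y\}/Dev\{y\}$ and in the matrix form of Definition 15.1 (NumPy assumed, simulated data):

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 120, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = X @ np.array([1.0, 3.0, -2.0]) + rng.normal(scale=2.0, size=n)

b_ols = np.linalg.solve(X.T @ X, X.T @ y)
e_hat = y - X @ b_ols
y_bar = y.mean()

dev_tot  = np.sum((y - y_bar) ** 2)
dev_disp = e_hat @ e_hat

r2_from_disp = 1 - dev_disp / dev_tot
r2_matrix    = (b_ols @ X.T @ X @ b_ols - n * y_bar**2) / (y @ y - n * y_bar**2)
print(r2_from_disp, r2_matrix)   # the two expressions coincide
```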

Variance Inflation Factor (VIF)

The elements on the diagonal of the matrix $(X'X)^{-1}$ determine the variances, while the off-diagonal elements determine the covariances. In general the variance of the estimated coefficient of the $j$-th regressor is $V\{b_j\} = \sigma_e^2 c_{jj}$, where $c_{jj}$ is the $j$-th element on the diagonal of $(X'X)^{-1}$. An alternative expression for the variance is
$$V\{b_j\} = \frac{\sigma_e^2}{Dev\{X_j\}} \cdot \frac{1}{1 - R_{j0}^2},$$
where $R_{j0}^2$ is the multivariate coefficient of determination of the regression of $X_j$ on the other regressors. The term $\frac{1}{1 - R_{j0}^2}$ is also called the Variance Inflation Factor and denoted $\mathrm{VIF}_j$.
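A possible implementation of the VIF as described above, regressing each column on the remaining ones; the helper `vif` and the simulated collinear design are purely illustrative (NumPy assumed):

```python
import numpy as np

def vif(X_raw):
    """VIF_j = 1 / (1 - R_j0^2), regressing each column on all the others (plus a constant)."""
    n, k = X_raw.shape
    out = np.empty(k)
    for j in range(k):
        target = X_raw[:, j]
        others = np.column_stack([np.ones(n), np.delete(X_raw, j, axis=1)])
        coef, *_ = np.linalg.lstsq(others, target, rcond=None)
        resid = target - others @ coef
        r2_j = 1 - resid @ resid / np.sum((target - target.mean()) ** 2)
        out[j] = 1 / (1 - r2_j)
    return out

rng = np.random.default_rng(7)
z = rng.normal(size=(200, 2))
# First two columns are highly collinear, the third is independent noise
X_raw = np.column_stack([z[:, 0], z[:, 0] + 0.1 * z[:, 1], rng.normal(size=200)])
print(vif(X_raw))   # large VIF for the collinear pair, close to 1 for the independent column
```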

Limitations of $R^2$

The $R^2$ statistic has some limitations. Firstly, it can be close to 1 even if the relationship between the variables is not linear. Additionally, $R^2$ increases whenever a new regressor is added to the model, making it unsuitable for comparing models with different numbers of regressors.

Definition 15.2 (Adjusted $R^2$) A more robust indicator, which does not always increase with the addition of a new regressor, is the adjusted $R^2$, computed as
$$\bar R^2 = 1 - \frac{n-1}{n-k-1}\,\frac{Dev_{Disp}\{y\}}{Dev\{y\}} = 1 - \frac{\hat s_e^2}{\hat s_y^2}.$$
The $\bar R^2$ can be negative, and its value is always less than or equal to that of $R^2$. Unlike $R^2$, the adjusted version increases only when the new explanatory variable improves the model more than would be expected simply by adding another variable.

Proof. To arrive at the formulation of the adjusted $R^2$, consider that under the null hypothesis $H_0: b_1 = b_2 = \dots = b_k = 0$ the regression variance $\hat s_r^2$ is a correct estimate of the variance of the residuals $\sigma_e^2$. Hence, under $H_0$,
$$\frac{n-1}{k}\,E\left\{\frac{Dev_{Reg}\{y\}}{Dev\{y\}}\right\} = 1.$$
This implies that the expectation of the $R^2$ is not zero (as it should be under $H_0$) but
$$E\{R^2\} = \frac{k}{n-1}.$$
Let’s rescale the $R^2$ so that it equals zero when $H_0$ holds true:
$$R_c^2 = R^2 - \frac{k}{n-1}.$$
However, this specification of $R_c^2$ implies that when $R^2 = 1$ (perfect linear relation between $X$ and $y$) the value of $R_c^2$ is smaller than 1, i.e. $R_c^2 = \frac{n-k-1}{n-1} < 1$. Hence, let’s correct the indicator again so that it takes values in $[0,1]$:
$$\bar R^2 = \left(R^2 - \frac{k}{n-1}\right)\frac{n-1}{n-k-1} = \frac{R^2(n-1) - k}{n-1}\cdot\frac{n-1}{n-k-1} = \frac{n-1}{n-k-1}R^2 - \frac{k}{n-k-1}.$$
Remembering that $R^2$ can be rewritten as in (15.13), one obtains
$$\bar R^2 = \frac{n-1}{n-k-1}\left(1 - \frac{Dev_{Disp}\{y\}}{Dev\{y\}}\right) - \frac{k}{n-k-1} = \frac{n-1}{n-k-1} - \frac{n-1}{n-k-1}\frac{Dev_{Disp}\{y\}}{Dev\{y\}} - \frac{k}{n-k-1} = 1 - \frac{n-1}{n-k-1}\frac{Dev_{Disp}\{y\}}{Dev\{y\}} = 1 - \frac{\hat s_e^2}{\hat s_y^2}.$$
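A sketch contrasting $R^2$ and $\bar R^2$ when a pure-noise regressor is added (NumPy assumed; the helper `r2_and_adjusted` is illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100
X_useful = rng.normal(size=(n, 2))
y = 1.0 + X_useful @ np.array([2.0, -1.0]) + rng.normal(size=n)

def r2_and_adjusted(X_raw, y):
    n, k = X_raw.shape
    X = np.column_stack([np.ones(n), X_raw])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    dev_disp = e @ e
    dev_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - dev_disp / dev_tot
    r2_adj = 1 - (n - 1) / (n - k - 1) * dev_disp / dev_tot
    return r2, r2_adj

print(r2_and_adjusted(X_useful, y))
# Adding a pure-noise regressor: R^2 can only increase, the adjusted version need not
X_noisy = np.column_stack([X_useful, rng.normal(size=n)])
print(r2_and_adjusted(X_noisy, y))
```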

15.6 Diagnostics

Let’s consider a linear model where the residuals $e$ are IID normally distributed random variables. Hence, the working hypotheses of the Gauss-Markov theorem hold true.

15.6.1 t-test for $b_j$

A t-test evaluates the significance of the parameter of a regressor, given the effect of the other $k-1$ regressors, by testing the null hypothesis of linear independence between $y$ and $X_j$, i.e.
$$H_0: b_j = 0.$$
Under the normality assumption on the distribution of the residuals, the vector of parameters $\hat b$ is distributed as a multivariate normal random vector, so the marginal distribution of $\hat b_j$ is also normal. Therefore, given the expectation and variance of $\hat b_j$, one can standardize it to obtain
$$\frac{\hat b_j - E\{\hat b_j\}}{\sqrt{V\{\hat b_j\}}} = \frac{\hat b_j - b_j}{\sqrt{\sigma_e^2 c_{jj}}} \sim N(0,1).$$
Substituting the unknown $\sigma_e^2$ with its correct estimate $\hat\sigma_e^2$, one obtains the statistic
$$t_j = \frac{\hat b_j - b_j}{\sqrt{\hat\sigma_e^2 c_{jj}}} \sim t_{n-k-1}, \tag{15.14}$$
which is Student-t distributed with $\nu = n-k-1$ degrees of freedom. Under the null hypothesis $H_0: b_j = 0$ one obtains the t-test statistic
$$t_j \overset{H_0}{=} \frac{\hat b_j}{\sqrt{\hat\sigma_e^2 c_{jj}}} \sim t_{n-k-1}. \tag{15.15}$$
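A sketch of the t-test (15.15) on simulated data, assuming NumPy and SciPy (`scipy.stats.t` supplies the Student-t tail probabilities); names and settings are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n, k = 80, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(size=n)    # last slope is truly zero

b_hat = np.linalg.solve(X.T @ X, X.T @ y)
e_hat = y - X @ b_hat
s2_e = e_hat @ e_hat / (n - k - 1)                        # unbiased estimate of sigma_e^2
c_jj = np.diag(np.linalg.inv(X.T @ X))                    # diagonal of (X'X)^{-1}

t_stats = b_hat / np.sqrt(s2_e * c_jj)                    # t_j under H0: b_j = 0
p_values = 2 * stats.t.sf(np.abs(t_stats), df=n - k - 1)  # two-sided p-values
print(np.round(t_stats, 3), np.round(p_values, 3))
```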

15.6.2 Confidence intervals for b

Under the assumption of normality, from (15.14), one can build a confidence interval for $b_j$:
$$b_j = \hat b_j \pm t_{n-k-1}^{\alpha/2}\sqrt{\hat\sigma_e^2 c_{jj}},$$
where $\alpha$ is the significance level, $t_{n-k-1}^{\alpha/2}$ is the quantile at level $\alpha/2$ of a Student-t distribution with $n-k-1$ degrees of freedom, and $c_{jj}$ is the $j$-th element on the diagonal of $(X'X)^{-1}$.
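The corresponding confidence intervals can be computed as follows (NumPy and SciPy assumed; `scipy.stats.t.ppf` supplies the quantile $t_{n-k-1}^{\alpha/2}$; data simulated for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
n, k = 80, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

b_hat = np.linalg.solve(X.T @ X, X.T @ y)
e_hat = y - X @ b_hat
s2_e = e_hat @ e_hat / (n - k - 1)
se = np.sqrt(s2_e * np.diag(np.linalg.inv(X.T @ X)))      # standard errors of b_hat

alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - k - 1)         # quantile of the t_{n-k-1} distribution
lower, upper = b_hat - t_crit * se, b_hat + t_crit * se
for j, (lo, hi) in enumerate(zip(lower, upper)):
    print(f"b_{j}: [{lo:.3f}, {hi:.3f}]")
```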

15.6.3 F-test for the regression

The F-test evaluates the significance of the entire regression model by testing the null hypothesis of linear independence between $y$ and $X$, i.e.
$$H_0: b_1 = b_2 = \dots = b_k = 0,$$
where the only coefficient possibly different from zero is the intercept. In this case, the test statistic reads
$$F_{test} = \frac{\hat s_r^2}{\hat s_e^2} = \frac{Dev_{Reg}\{y\}\,(n-k-1)}{k\,Dev_{Disp}\{y\}} \sim F_{k,\,n-k-1},$$
which is distributed as a Fisher F with $\nu_1 = k$ and $\nu_2 = n-k-1$ degrees of freedom; $\hat s_r^2$ is the regression variance and $\hat s_e^2$ is the dispersion variance. By fixing a significance level $\alpha$, the null hypothesis $H_0$ is rejected if $F_{test} > F_{k,\,n-k-1}^{\alpha}$. Remembering the relation between the deviances and the $R^2$, i.e. $Dev_{Reg}\{y\} = R^2\,Dev\{y\}$ and $Dev_{Disp}\{y\} = (1-R^2)\,Dev\{y\}$, it is possible to express the F-test in terms of the multivariate $R^2$ as
$$F_{test} = \frac{R^2}{1-R^2}\cdot\frac{n-k-1}{k} \sim F_{k,\,n-k-1}.$$
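A sketch of the F-test and of its $R^2$ form on simulated data (NumPy and SciPy assumed; `scipy.stats.f` supplies the F tail probability):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, k = 80, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = X @ np.array([1.0, 0.8, 0.0, -0.6]) + rng.normal(size=n)

b_hat = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ b_hat
e_hat = y - y_hat

dev_reg  = np.sum((y_hat - y.mean()) ** 2)
dev_disp = e_hat @ e_hat

F = (dev_reg / k) / (dev_disp / (n - k - 1))              # ratio of regression to dispersion variance
p_value = stats.f.sf(F, dfn=k, dfd=n - k - 1)
print(F, p_value)

# Equivalent expression through R^2
r2 = dev_reg / (dev_reg + dev_disp)
print(r2 / (1 - r2) * (n - k - 1) / k)                    # same F statistic
```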

Interpretation of the F-test

If the null hypothesis $H_0$ is rejected, then:

  • The variability of $y$ explained by the model is significantly greater than the residual variability.
  • At least one of the $k$ regressors has a coefficient $b_j$ that is significantly different from zero in the population.

On the contrary, if $H_0$ is not rejected, the model is not adequate and there is no evidence of a linear relation between $y$ and $X$.