15.1 Assumptions
Let's start from the classic assumptions for a linear model. The working hypotheses are:
The linear model approximates the conditional expectation, i.e. $\mathbb{E}(Y_i \mid X_i) = \alpha + X_i^{\top}\beta$.
The conditional variance of the response variable is constant, i.e. $\mathbb{V}(Y_i \mid X_i) = \sigma^2$ with $i = 1, \dots, n$.
The response variables are uncorrelated, i.e. $\mathrm{Cov}(Y_i, Y_j \mid X) = 0$ with $i \neq j$ and $i, j = 1, \dots, n$.
Equivalently, the formulation in terms of the stochastic component $\varepsilon_i$ reads $Y_i = \alpha + X_i^{\top}\beta + \varepsilon_i$, where:
The residuals have mean zero, i.e. $\mathbb{E}(\varepsilon_i) = 0$ for all $i = 1, \dots, n$.
The conditional variance of the residuals is constant, i.e. $\mathbb{V}(\varepsilon_i \mid X_i) = \sigma^2$ with $i = 1, \dots, n$.
The residuals and the regressors are uncorrelated, i.e. $\mathrm{Cov}(\varepsilon_i, X_{ij}) = 0$ for all $i = 1, \dots, n$ and $j = 1, \dots, k$.
Hence, in this setup the error terms are assumed to be independent and identically distributed with mean zero and equal variance $\sigma^2$ for all $i$. Thus, the general expression of the covariance matrix in Equation 14.15 reduces to $\Sigma = \sigma^2 I_n$.
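The code snippets in this chapter assume that a response vector y, a matrix of regressors X (without a column of ones), the sample size n and the number of regressors k are already defined in the workspace. A minimal sketch of one possible simulated setup, with arbitrarily chosen true parameters, is the following.
Simulated data (sketch)
# Simulate a small data set satisfying the assumptions above
set.seed(123)
n <- 100                                       # number of observations
k <- 3                                         # number of regressors
X <- matrix(rnorm(n * k), nrow = n, ncol = k)  # regressors (no column of ones)
true_a <- 0.5                                  # true intercept
true_b <- c(0.2, 0.4, -0.5)                    # true slopes
sigma <- 0.3                                   # residual standard deviation
y <- true_a + X %*% true_b + rnorm(n, sd = sigma)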
15.2 OLS estimator
Proposition 15.1 (OLS estimator)
The ordinary least squares (OLS) estimator is obtained by minimizing the sum of the squared residuals, expressed by the function $S(\beta) = \sum_{i=1}^{n} u_i^2 = (y - X\beta)^{\top}(y - X\beta)$, which returns an estimate of the parameter $\beta$. In this context, the fitted residuals are seen as a function of the unknown $\beta$ (Equation 14.14).
Formally, the OLS estimator is the solution of the following minimization problem, i.e. $\hat{\beta} = \arg\min_{\beta \in \Theta} S(\beta)$, where $\Theta$ is the parameter space. Notably, if $X^{\top}X$ is non-singular one obtains an analytic expression, i.e. $\hat{\beta} = (X^{\top}X)^{-1}X^{\top}y$. Equivalently, it is possible to express Equation 15.3 in terms of the covariance matrix of the regressors and the covariances between the regressors and $y$, i.e. $\hat{\beta} = \widehat{\mathbb{V}}(X)^{-1}\,\widehat{\mathrm{Cov}}(X, y)$.
Proof. Developing the product of the residuals in Equation 15.2: $S(\beta) = y^{\top}y - 2\beta^{\top}X^{\top}y + \beta^{\top}X^{\top}X\beta$. To find the minimum, let's compute the derivative of $S(\beta)$ with respect to $\beta$, set it equal to zero and solve for $\beta$, i.e. $\frac{\partial S(\beta)}{\partial \beta} = -2X^{\top}y + 2X^{\top}X\beta = 0 \Rightarrow \hat{\beta} = (X^{\top}X)^{-1}X^{\top}y$. To establish if the above solution also corresponds to a global minimum, one must check the sign of the second derivative, i.e. $\frac{\partial^2 S(\beta)}{\partial \beta\,\partial \beta^{\top}} = 2X^{\top}X$, which in this case is always positive definite and hence denotes a global minimum. An alternative derivation of this estimator, as in Equation 15.4, is obtained by estimating the variance-covariance matrix as in Equation 13.4 and the variance matrix as in Equation 13.5.
OLS estimator of beta
b <- solve(t(X) %*% X) %*% t(X) %*% y
Singularity of $X^{\top}X$
Note that the solution is available if and only if $X^{\top}X$ is non-singular. Hence, the columns of $X$ should not be linearly dependent. In fact, if one of the $X$-variables can be written as a linear combination of the others, then the determinant of the matrix $X^{\top}X$ is zero and the inversion is not possible. Moreover, for $X^{\top}X$ to be non-singular it is necessary that the number of observations is greater than or equal to the number of regressors, i.e. $n \geq k$.
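As an illustrative check, one can inspect the rank of X and the determinant of $X^{\top}X$ before attempting the inversion; a possible sketch:
Checking the singularity of $X^{\top}X$ (sketch)
# X'X is invertible only if X has full column rank
XtX <- t(X) %*% X
qr(X)$rank == ncol(X) # TRUE if the columns of X are linearly independent
det(XtX)              # far from zero if X'X is invertible
# solve(XtX) raises an error when X'X is (numerically) singular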
Intercept estimate
If a column of ones was included in the data matrix $X$, then the intercept parameter is obtained directly from Equation 15.3 or Equation 15.4. However, if it was not included, it is computed as $\hat{\alpha} = \bar{y} - \bar{x}^{\top}\hat{\beta}$, where $\bar{y}$ is the sample mean of the response and $\bar{x}$ is the vector of sample means of the regressors:
OLS estimator of the intercept
# Vector of ones
J_1n <- matrix(1, nrow = 1, ncol = n)
# Vector of means
x_bar <- t((J_1n %*% X) / n)
# OLS estimator of the intercept
a <- mean(y - X %*% b)
15.2.1 Projection matrices
Substituting the OLS solution (Equation 15.3) in Equation 14.11 we obtain the matrix $P_X$, which projects the vector $y$ onto the subspace of $\mathbb{R}^n$ generated by the matrix of the regressors $X$, i.e. $\hat{y} = X\hat{\beta} = X(X^{\top}X)^{-1}X^{\top}y = P_X y$, with $P_X = X(X^{\top}X)^{-1}X^{\top}$.
Projection matrix $P_X$
P_x <- X %*% solve(t(X) %*% X) %*% t(X)
Proposition 15.2 (Properties of $P_X$)
The projection matrix $P_X$ satisfies the following three properties, i.e.
$P_X$ is an $n \times n$ symmetric matrix, i.e. $P_X = P_X^{\top}$.
$P_X$ is idempotent, i.e. $P_X P_X = P_X$.
$P_X X = X$.
Properties of matrix $P_X$
# Property 2.
# sum((P_x %*% P_x - P_x)^2) # close to zero
# Property 3.
# sum((P_x %*% X - X)^2) # close to zero
Proof. Let's consider property 2. of $P_X$, i.e. $P_X P_X = X(X^{\top}X)^{-1}X^{\top}X(X^{\top}X)^{-1}X^{\top} = X(X^{\top}X)^{-1}X^{\top} = P_X$. Let's consider property 3. of $P_X$, i.e. $P_X X = X(X^{\top}X)^{-1}X^{\top}X = X$.
Substituting the OLS solution (Equation 15.3) in the residuals (Equation 14.14) we obtain another projection matrix $M_X$, which projects the vector $y$ onto the subspace orthogonal to the subspace generated by the matrix of the regressors $X$, i.e. $\hat{u} = y - X\hat{\beta} = (I_n - P_X)y = M_X y$, where $I_n$ is the $n \times n$ identity matrix (Equation 31.3) and $M_X = I_n - P_X$.
Projection matrix $M_X$
M_x <- diag(1, n, n) - P_x
Proposition 15.3 (Properties of $M_X$)
The projection matrix $M_X$ satisfies the following three properties, i.e.
$M_X$ is an $n \times n$ symmetric matrix, i.e. $M_X = M_X^{\top}$.
$M_X$ is idempotent, i.e. $M_X M_X = M_X$.
$M_X X = 0$.
Properties of matrix $M_X$
# Property 2.
# sum((M_x %*% M_x - M_x)^2) # close to zero
# Property 3.
# sum((M_x %*% X)^2) # close to zero
Proof. Let's consider property 2. of $M_X$, i.e. $M_X M_X = (I_n - P_X)(I_n - P_X) = I_n - 2P_X + P_X P_X = I_n - P_X = M_X$. Let's consider property 3. of $M_X$, i.e. $M_X X = (I_n - P_X)X = X - P_X X = X - X = 0$.
Remark 15.1. By definition $P_X$ and $M_X$ are orthogonal, i.e. $M_X P_X = 0$. Hence, the fitted values defined as $\hat{y} = P_X y$ are the projection of the empirical values $y$ onto the subspace generated by $X$. Symmetrically, the fitted residuals $\hat{u} = M_X y$ are the projection of the empirical values $y$ onto the subspace orthogonal to the subspace generated by $X$.
Proof. Let's prove the orthogonality between $M_X$ and $P_X$, i.e. $M_X P_X = (I_n - P_X)P_X = P_X - P_X P_X = P_X - P_X = 0$.
Orthogonality of matrices $M_X$ and $P_X$
# sum(M_x %*% P_x) # close to zero
15.2.2 Properties of the OLS estimator
Theorem 15.1 (Gauss-Markov)
Under the Gauss-Markov hypotheses the Ordinary Least Squares (OLS) estimator is BLUE (Best Linear Unbiased Estimator), where "best" stands for the estimator with minimum variance in the class of linear unbiased estimators of the unknown true population parameter $\beta$. More precisely, the Gauss-Markov hypotheses are:
$\mathbb{E}(\varepsilon_i) = 0$ for all $i$.
$\mathrm{Cov}(\varepsilon_i, \varepsilon_j) = 0$ for all $i \neq j$.
$\mathbb{V}(\varepsilon_i) = \sigma^2$ for all $i$, i.e. homoskedasticity.
$X$ is non-stochastic and independent from the errors $\varepsilon_i$ for all $i$'s.
Proposition 15.4 (Properties of $\hat{\beta}$)
1. Unbiased: $\hat{\beta}$ is correct and its conditional expectation is equal to the true parameter in the population, i.e. $\mathbb{E}(\hat{\beta} \mid X) = \beta$.
2. Linear: it can be written as a linear combination of $X$ and $y$, i.e. $\hat{\beta} = A y$, where $A = (X^{\top}X)^{-1}X^{\top}$ does not depend on $y$.
3. Under the Gauss-Markov hypotheses (Theorem 15.1), $\hat{\beta}$ is the estimator that has the minimum variance in the class of the unbiased linear estimators of $\beta$, and its variance reads $\mathbb{V}(\hat{\beta} \mid X) = \sigma^2(X^{\top}X)^{-1}$. Denoting with $[(X^{\top}X)^{-1}]_{jj}$ the $j$-th element on the diagonal of $(X^{\top}X)^{-1}$, the variance of the coefficient of the $j$-th regressor reads $\mathbb{V}(\hat{\beta}_j \mid X) = \sigma^2\,[(X^{\top}X)^{-1}]_{jj}$, where $[(X^{\top}X)^{-1}]_{jj}$ denotes the element on the diagonal in position $(j, j)$.
The OLS estimator is correct: its expected value, computed from Equation 15.3 and substituting Equation 14.11, is equal to the true parameter in the population, i.e. $\mathbb{E}(\hat{\beta} \mid X) = \mathbb{E}\big((X^{\top}X)^{-1}X^{\top}(X\beta + \varepsilon) \mid X\big) = \beta + (X^{\top}X)^{-1}X^{\top}\mathbb{E}(\varepsilon \mid X) = \beta$.
In general, applying the properties of the variance operator, the variance of $\hat{\beta}$ is computed as $\mathbb{V}(\hat{\beta} \mid X) = \mathbb{V}\big((X^{\top}X)^{-1}X^{\top}y \mid X\big)$. Then, since $X$ is non-stochastic, one can bring $(X^{\top}X)^{-1}X^{\top}$ outside the variance, thus obtaining $\mathbb{V}(\hat{\beta} \mid X) = (X^{\top}X)^{-1}X^{\top}\,\mathbb{V}(y \mid X)\,X(X^{\top}X)^{-1}$. Under the Gauss-Markov hypotheses (Theorem 15.1) the conditional variance is $\mathbb{V}(y \mid X) = \sigma^2 I_n$ and therefore Equation 15.12 reduces to $\mathbb{V}(\hat{\beta} \mid X) = \sigma^2(X^{\top}X)^{-1}$.
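As an additional numerical illustration, a small Monte Carlo experiment can be used to check that the OLS estimator is unbiased and that its sampling variance is close to $\sigma^2(X^{\top}X)^{-1}$; all names and parameter values below are arbitrary choices for the sketch.
Monte Carlo check of unbiasedness and variance (sketch)
set.seed(2)
n_mc <- 2000                               # number of Monte Carlo replications
n_obs <- 50                                # sample size of each replication
X_sim <- cbind(rnorm(n_obs), runif(n_obs)) # fixed design with two regressors
b_true <- c(0.4, -0.7)                     # true slopes
sigma2 <- 0.25                             # true residual variance
B_mc <- t(replicate(n_mc, {
  y_sim <- X_sim %*% b_true + rnorm(n_obs, sd = sqrt(sigma2))
  as.vector(solve(t(X_sim) %*% X_sim) %*% t(X_sim) %*% y_sim)
}))
colMeans(B_mc)                             # close to b_true (unbiasedness)
var(B_mc)                                  # close to the theoretical variance below
sigma2 * solve(t(X_sim) %*% X_sim)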
15.3 Variance decomposition
In a linear model, the deviance (or total variability) of the dependent variable can be decomposed into the sum of the regression deviance and the dispersion deviance. This decomposition helps us understand how much of the total variability in the data is explained by the model and how much is due to unexplained variability (residuals).
Total deviance ($D_y$): represents the total variability of the dependent variable $y$. It is calculated as the sum of the squared differences of $y_i$ from its mean $\bar{y}$, i.e. $D_y = \sum_{i=1}^{n}(y_i - \bar{y})^2$.
Regression deviance ($D_{reg}$): represents the portion of variability that is explained by the regression model. It is computed as the sum of the squared differences between the fitted values $\hat{y}_i$ and $\bar{y}$, i.e. $D_{reg} = \sum_{i=1}^{n}(\hat{y}_i - \bar{y})^2$.
Dispersion deviance ($D_{disp}$): represents the portion of variability that is not explained by the model. It is computed as the sum of the squared differences between the observed values $y_i$ and the fitted values $\hat{y}_i$ (Equation 14.13), i.e. $D_{disp} = \sum_{i=1}^{n}(y_i - \hat{y}_i)^2 = \sum_{i=1}^{n}\hat{u}_i^2$.
Hence, the total deviance of $y$ can be decomposed as follows: $D_y = D_{reg} + D_{disp}$, i.e. $\sum_{i=1}^{n}(y_i - \bar{y})^2 = \sum_{i=1}^{n}(\hat{y}_i - \bar{y})^2 + \sum_{i=1}^{n}(y_i - \hat{y}_i)^2$, where $\bar{y}$ is the sample mean (Equation 10.1).
Variance decomposition
y_bar <- mean(y)
# Fitted values
y_hat <- a + X %*% b
# Residuals
u <- y - y_hat
# Deviance of y
dev_y <- sum(y^2) - n * y_bar^2 # 6.642875
# Deviance of regression
dev_reg <- sum((y_hat - y_bar)^2) # 6.162379
# Deviance of dispersion
dev_disp <- sum(u^2) # 0.4804956
# equal to zero
# dev_y - (dev_reg + dev_disp)
Proof: regression deviance
Proof. Let's prove the expression for the regression deviance $D_{reg}$, i.e. that $D_y = D_{reg} + D_{disp}$. Writing $y_i - \bar{y} = (\hat{y}_i - \bar{y}) + (y_i - \hat{y}_i)$ and squaring, $D_y = \sum_{i=1}^{n}(\hat{y}_i - \bar{y})^2 + \sum_{i=1}^{n}(y_i - \hat{y}_i)^2 + 2\sum_{i=1}^{n}(\hat{y}_i - \bar{y})(y_i - \hat{y}_i)$, where the cross-product term is zero since the residuals are orthogonal to the fitted values and have zero mean. Hence $D_{reg} = D_y - D_{disp}$.
The decomposition of the deviance of $y$ also holds with respect to the corresponding degrees of freedom.
Source	Deviance	Degrees of freedom	Variance
Regression	$D_{reg}$	$k$	$D_{reg}/k$
Dispersion	$D_{disp}$	$n-k-1$	$D_{disp}/(n-k-1)$
Total	$D_y$	$n-1$	$D_y/(n-1)$
Table 15.1: Deviance and variance decomposition in a multivariate linear model
15.3.1 Estimator of $\sigma^2$
The OLS estimator does not depend on the variance of the residuals $\sigma^2$, and it is not possible to obtain both estimators in one step.
Proposition 15.5 (Unbiased estimator of $\sigma^2$)
Let's define an unbiased estimator of the population variance $\sigma^2$ as $s^2 = \frac{D_{disp}}{n-k} = \frac{\sum_{i=1}^{n}\hat{u}_i^2}{n-k}$. Note that if also an intercept is included the denominator becomes $n-k-1$. In general the regression variance overestimates the true variance $\sigma^2$, i.e. $\mathbb{E}\big(D_{reg}/k\big) \geq \sigma^2$. Only in the special case where $\beta = 0$ in the population, then $\mathbb{E}\big(D_{reg}/k\big) = \sigma^2$ and also the regression variance produces a correct estimate of $\sigma^2$.
Proof. By definition, the residuals can be computed by pre-multiplying the matrix $M_X$ (Equation 15.7) to $y$, i.e. $\hat{u} = M_X y$. Substituting the true relation in the population, i.e. $y = X\beta + \varepsilon$, one obtains $\hat{u} = M_X(X\beta + \varepsilon) = M_X\varepsilon$, since $M_X X = 0$. Being the matrix $M_X$ symmetric and idempotent (Proposition 15.3): $\hat{u}^{\top}\hat{u} = \varepsilon^{\top}M_X^{\top}M_X\varepsilon = \varepsilon^{\top}M_X\varepsilon$. Thus, since $\varepsilon^{\top}M_X\varepsilon$ is a scalar, the expected value of the deviance of dispersion reads $\mathbb{E}(\hat{u}^{\top}\hat{u}) = \mathbb{E}\big(\mathrm{tr}(\varepsilon^{\top}M_X\varepsilon)\big) = \mathrm{tr}\big(M_X\,\mathbb{E}(\varepsilon\varepsilon^{\top})\big) = \sigma^2\,\mathrm{tr}(M_X)$. The trace (Equation 31.8) of the matrix $M_X$ reads $\mathrm{tr}(M_X) = \mathrm{tr}(I_n) - \mathrm{tr}(P_X) = n - (k+1)$, where implicitly we consider a column of 1 for the intercept. Hence, $\mathbb{E}(D_{disp}) = \sigma^2(n-k-1)$. Equivalently, the expectation of the deviance of dispersion divided by its degrees of freedom is equal to $\sigma^2$, so that $s^2$ is unbiased.
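In code, assuming the objects X, dev_disp, n and k defined above, the estimate of $\sigma^2$ and the resulting estimated variances of the OLS coefficients (the vector v_b_ols used in the confidence-interval snippet below) can be obtained as in the following sketch.
Estimate of $\sigma^2$ and of the coefficient variances (sketch)
# Estimate of the residual variance
# (n - k as in the text; n - k - 1 when an intercept is also estimated)
s2 <- dev_disp / (n - k)
# Estimated variances of the OLS coefficients: diagonal of s2 * (X'X)^(-1)
v_b_ols <- s2 * diag(solve(t(X) %*% X))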
15.3.2 $R^2$
The $R^2$ statistic, also known as the coefficient of determination, is a measure used to assess the goodness of fit of a regression model. In a multivariate context, it evaluates how well the independent variables explain the variability of the dependent variable.
Definition 15.1 ($R^2$)
The $R^2$ represents the proportion of the variation in the dependent variable that is explained or predicted by the independent variables. Formally, it is defined as the ratio of the deviance explained by the model ($D_{reg}$) to the total deviance ($D_y$). It can also be expressed as one minus the ratio of the residual deviance ($D_{disp}$) to the total deviance, i.e. $R^2 = \frac{D_{reg}}{D_y} = 1 - \frac{D_{disp}}{D_y}$. Using the variance decomposition (Equation 15.13), it is possible to write a multivariate version of the $R^2$ as $R^2 = \frac{\sum_{i=1}^{n}(\hat{y}_i - \bar{y})^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} = 1 - \frac{\sum_{i=1}^{n}\hat{u}_i^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}$.
R2 <- 1 - dev_disp / dev_y # 0.9276675
The numerator in the first expression represents the variance explained by the regression model, while the denominator represents the total variance of the dependent variable. The numerator in the second expression represents the variance of the residuals, i.e. the variance not explained by the model. A value of the $R^2$ close to 1 denotes that a large proportion of the variability of the dependent variable has been explained by the regression model, while a value close to 0 indicates that the model explains very little of the variability.
Variance Inflation Factor (VIF)
An alternative expression for the variance of the coefficient of the $j$-th regressor (Equation 15.11) reads $\mathbb{V}(\hat{\beta}_j) = \frac{\sigma^2}{D_{x_j}}\cdot\frac{1}{1 - R_j^2}$, where $D_{x_j} = \sum_{i=1}^{n}(x_{ij} - \bar{x}_j)^2$ is the deviance of the $j$-th regressor and $R_j^2$ is the multivariate coefficient of determination of the regression of $x_j$ on the other regressors. The factor $1/(1 - R_j^2)$ is the variance inflation factor (VIF): the more $x_j$ is linearly related to the other regressors, the larger the variance of $\hat{\beta}_j$.
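A possible way to compute the VIF of each regressor in code, assuming the matrix X defined above, is to regress each column of X on the remaining ones and apply the formula above; a sketch:
Variance inflation factors (sketch)
# VIF of each regressor: 1 / (1 - R_j^2), where R_j^2 is the R^2
# of the regression of x_j on the other regressors
vif <- sapply(seq_len(ncol(X)), function(j) {
  fit_j <- lm(X[, j] ~ X[, -j])
  1 / (1 - summary(fit_j)$r.squared)
})
vif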
Proposition 15.6 (Adjusted $R^2$)
A more robust indicator that does not always increase with the addition of a new regressor is the adjusted $R^2$, which is computed as $\bar{R}^2 = 1 - \frac{D_{disp}/(n-k-1)}{D_y/(n-1)} = 1 - (1 - R^2)\frac{n-1}{n-k-1}$. The adjusted $R^2$ can be negative, and its value will always be less than or equal to that of $R^2$. Unlike $R^2$, the adjusted version increases only when the new explanatory variable improves the model more than would be expected simply by adding another variable.
Proof. To arrive at the formulation of the adjusted $R^2$, let's consider that under the null hypothesis $H_0: \beta = 0$ the variance of regression (Table 15.1) is a correct estimate of the variance of the residuals $\sigma^2$. Hence, under $H_0$: $\mathbb{E}\big(D_{reg}/k\big) = \sigma^2$ and $\mathbb{E}\big(D_y/(n-1)\big) = \sigma^2$. This implies that the expectation of the $R^2$ is not zero (as it should be under $H_0$) but approximately $\mathbb{E}(R^2) \approx \frac{k}{n-1}$. Let's rescale the $R^2$ such that when $H_0$ holds true it is equal to zero, i.e. $R^2_{\ast} = R^2 - \frac{k}{n-1}$. However, the specification of $R^2_{\ast}$ implies that when $R^2 = 1$ (perfect linear relation between $X$ and $y$) the value of $R^2_{\ast}$ is smaller than one, i.e. $R^2_{\ast} = \frac{n-1-k}{n-1} < 1$. Hence, let's correct again the indicator such that its maximum value is one, i.e. $\bar{R}^2 = \frac{n-1}{n-k-1}\left(R^2 - \frac{k}{n-1}\right) = \frac{(n-1)R^2 - k}{n-k-1}$. Remembering that $R^2$ can be rewritten as in Equation 15.14, one obtains $\bar{R}^2 = 1 - (1 - R^2)\frac{n-1}{n-k-1} = 1 - \frac{D_{disp}/(n-k-1)}{D_y/(n-1)}$.
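In code, assuming R2, n and k defined above, the adjusted $R^2$ is (a sketch):
Adjusted $R^2$ (sketch)
# Adjusted R^2: penalizes the number of regressors
R2_adj <- 1 - (1 - R2) * (n - 1) / (n - k - 1)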
Limitations of $R^2$
The $R^2$ statistic has some limitations. Firstly, it can be close to 1 even if the relationship between the variables is not linear. Additionally, $R^2$ increases whenever a new regressor is added to the model, making it unsuitable for comparing models with different numbers of regressors.
15.4 Diagnostics
Let's consider a linear model where the residuals are IID normally distributed random variables, i.e. $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$. Hence, the working hypotheses of the Gauss-Markov theorem hold true.
15.4.1 t-test for $\beta_j$
A t-test evaluates whether a parameter in a regression is statistically different from zero, given the effect of the other regressors. The test is built under the null hypothesis of linear independence in the population between $x_j$ and $y$, i.e. $H_0: \beta_j = 0$ against $H_1: \beta_j \neq 0$. If the residuals are normally distributed, then the vector of parameters $\hat{\beta}$ is distributed as a multivariate normal, thus also the marginal distribution of each $\hat{\beta}_j$ will be normal.
Using the expectation (Equation 15.8) and variance (Equation 15.10) of $\hat{\beta}_j$, we can standardize the estimated parameter to obtain a Student-t distributed statistic (Equation 32.2), i.e. $t_{\hat{\beta}_j} = \frac{\hat{\beta}_j - \beta_j}{\sqrt{s^2\,[(X^{\top}X)^{-1}]_{jj}}} \sim t_{n-k}$, where the unknown $\sigma^2$ is replaced with its correct estimator $s^2$. Under $H_0$, where $\beta_j = 0$, one obtains $t_{\hat{\beta}_j} = \hat{\beta}_j\big/\sqrt{s^2\,[(X^{\top}X)^{-1}]_{jj}}$. Given a confidence level $1-\alpha$, the null hypothesis is rejected if the test statistic falls in the rejection area, i.e. $|t_{\hat{\beta}_j}| > t_{n-k}(1-\alpha/2)$, where $t_{n-k}(\cdot)$ is the quantile function of a Student-t with $n-k$ degrees of freedom.
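A sketch of the t statistics and the corresponding two-sided p-values, assuming b, v_b_ols, n and k defined above:
t statistics (sketch)
# t statistics for H0: beta_j = 0 and two-sided p-values
t_stat <- b / sqrt(v_b_ols)
p_val <- 2 * pt(abs(t_stat), df = n - k, lower.tail = FALSE)
cbind(t_stat, p_val)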
15.4.2 Confidence intervals
Under the assumption of normality, from Equation 15.16 one can build a confidence interval for $\beta_j$, i.e. $\hat{\beta}_j \pm t_{n-k}(1-\alpha/2)\,\sqrt{s^2\,[(X^{\top}X)^{-1}]_{jj}}$, where $1-\alpha$ is the confidence level, $t_{n-k}(1-\alpha/2)$ is the quantile at level $1-\alpha/2$ of a Student-t distribution with $n-k$ degrees of freedom and $s^2\,[(X^{\top}X)^{-1}]_{jj}$ reads as in Equation 15.11.
Confidence intervals
conf.int <- cbind(b + qt(0.05, n - k) * sqrt(v_b_ols),
                  b + qt(0.95, n - k) * sqrt(v_b_ols))
# With 90% probability the true b is inside the bounds
# [b1] 0.05049223 < b < 0.4159749
# [b2] 0.11779346 < b < 0.6129894
# [b3] -0.62083599 < b < -0.3705442
15.4.3 F-test for the regression
The F-test evaluates the significance of the entire regression model by testing the null hypothesis of linear independence between $X$ and $y$, i.e. $H_0: \beta_1 = \beta_2 = \dots = \beta_k = 0$, where the only coefficient different from zero is the intercept. In this case, the test statistic reads $F = \frac{D_{reg}/k}{D_{disp}/(n-k-1)}$, which is distributed as an F-Fisher (Equation 32.3) with $k$ and $n-k-1$ degrees of freedom. $D_{reg}/k$ is the regression variance and $D_{disp}/(n-k-1)$ is the dispersion variance. By fixing a significance level $\alpha$, the null hypothesis is rejected if $F > F_{k,\,n-k-1}(1-\alpha)$. Remembering the relation between the deviances and the $R^2$, i.e. $D_{reg} = R^2 D_y$ and $D_{disp} = (1-R^2)D_y$, it is possible to express the F-test in terms of the multivariate $R^2$ as $F = \frac{R^2/k}{(1-R^2)/(n-k-1)}$. If $H_0$ is rejected, then:
The variability of $y$ explained by the model is significantly greater than the residual variability.
At least one of the regressors has a coefficient that is significantly different from zero in the population.
On the contrary, if $H_0$ is not rejected, then the model is not adequate and there is no evidence of a linear relation between $X$ and $y$.
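A sketch of the F statistic and its p-value, assuming dev_reg, dev_disp, n and k defined above:
F-test (sketch)
# F statistic for the overall significance of the regression
F_stat <- (dev_reg / k) / (dev_disp / (n - k - 1))
# p-value from the F distribution with k and n - k - 1 degrees of freedom
p_val_F <- pf(F_stat, df1 = k, df2 = n - k - 1, lower.tail = FALSE)
c(F_stat, p_val_F)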
15.5 Multi-equations OLS
Proposition 15.7 (Multi-equations OLS)
Let's consider a multivariate linear model with $p$ response variables, i.e. with $Y$ of dimension $n \times p$ (Equation 14.3); then the model in matrix notation reads $Y = \mathbf{1}_n\alpha^{\top} + XB^{\top} + E$, where $X$ is $n \times k$, $B$ is $p \times k$, $\alpha$ is $p \times 1$ and $E$ is $n \times p$. Then the OLS estimate of $B$ is obtained from Equation 15.4 applied equation by equation, i.e. $\hat{B} = \widehat{\mathrm{Cov}}(Y, X)\,\widehat{\mathbb{V}}(X)^{-1}$, and similarly for the intercept $\hat{\alpha} = \bar{y} - \hat{B}\bar{x}$, where $\bar{y}$ and $\bar{x}$ are the vectors of sample means of the responses and of the regressors. The variance-covariance matrix of the residuals is computed as $\hat{\Sigma}_E = \frac{\hat{E}^{\top}\hat{E}}{n-k-1}$, with $\hat{E} = Y - \mathbf{1}_n\hat{\alpha}^{\top} - X\hat{B}^{\top}$.
Example: fit a multi-equations model with OLS
Example 15.1 Let's simulate $n = 500$ observations for the $k = 3$ regressors from a multivariate normal distribution with parameters $\mu_X = (0.5, 0.5, 0.5)^{\top}$ and $\Sigma_X = \begin{pmatrix} 0.5 & 0.2 & 0.1 \\ 0.2 & 1.2 & 0.1 \\ 0.1 & 0.1 & 0.3 \end{pmatrix}$. Then, to construct two dependent variables, we simulate a matrix for the parameters from a standard normal, i.e. $\beta_{ij} \sim \mathcal{N}(0, 1)$ for $i = 1, 2$ and $j = 1, 2, 3$, and the intercept parameters from a uniform distribution in $[0, 1]$, i.e. $\alpha_i \sim \mathcal{U}(0, 1)$. Thus, for $i = 1, 2$, one obtains a multi-equation model of the form $Y_i = \alpha_i + X\beta_i + \varepsilon_i$, where $\varepsilon_1$ and $\varepsilon_2$ are simulated from a multivariate normal random variable with true covariance matrix equal to $\Sigma_{\varepsilon} = \begin{pmatrix} 0.55 & 0.30 \\ 0.30 & 0.70 \end{pmatrix}$.
Setup
library(dplyr)

######################## Setup ########################
set.seed(1) # random seed
n <- 500    # number of observations
p <- 2      # number of dependent variables
k <- 3      # number of regressors

# True regressor's mean
true_e_x <- matrix(rep(0.5, k), ncol = 1)
# True regressor's covariance matrix
true_cv_x <- matrix(c(v_z1 = 0.5, cv_12 = 0.2, cv_13 = 0.1,
                      cv_21 = 0.2, v_z2 = 1.2, cv_23 = 0.1,
                      cv_31 = 0.1, cv_32 = 0.1, v_z3 = 0.3),
                    nrow = k, byrow = FALSE)
# True covariance of the residuals
true_cv_e <- matrix(c(0.55, 0.3, 0.3, 0.70), nrow = p)
##########################################################

# Generate a synthetic data set
## Regressors
X <- mvtnorm::rmvnorm(n, true_e_x, true_cv_x)
## Slope (Beta)
true_beta <- rnorm(p * k)
true_beta <- matrix(true_beta, ncol = k, byrow = TRUE)
## Intercept (Alpha)
true_alpha <- runif(p, min = 0, max = 1)
true_alpha <- matrix(true_alpha, ncol = 1)
## Matrix of 1 for matrix multiplication
ones <- matrix(rep(1, n), ncol = 1)
## Fitted response variable
Y <- ones %*% t(true_alpha) + X %*% t(true_beta)
## Simulated error
eps <- mvtnorm::rmvnorm(n, sigma = true_cv_e)
## Perturbed response variable
Y_tilde <- Y + eps
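Following Proposition 15.7, the simulated multi-equation model can then be fitted with OLS equation by equation; a sketch using the objects defined in the setup above (the variable names below are illustrative, not from the original text):
Fit of the multi-equations model (sketch)
# OLS slopes via the covariance form (Equation 15.4), using centered data
Xc <- scale(X, center = TRUE, scale = FALSE)
Yc <- scale(Y_tilde, center = TRUE, scale = FALSE)
B_hat <- t(solve(t(Xc) %*% Xc) %*% t(Xc) %*% Yc)   # p x k, compare with true_beta
# Intercepts
a_hat <- colMeans(Y_tilde) - B_hat %*% colMeans(X) # p x 1, compare with true_alpha
# Fitted residuals and their covariance matrix
E_hat <- Y_tilde - ones %*% t(a_hat) - X %*% t(B_hat)
cov_e <- t(E_hat) %*% E_hat / (n - k - 1)          # compare with true_cv_e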