library(dplyr)
# required for figures
library(ggplot2)
library(gridExtra)
# required to render latex
library(backports)
library(latex2exp)
# required to render tables
library(knitr)
library(kableExtra)
# Random seed
set.seed(1)
19.1 MA(q)
In time series analysis a univariate time series $Y_t$ is defined as a Moving Average process of order $q$ (MA(q)) when it satisfies the difference equation $$Y_t = \mu + u_t + \theta_1 u_{t-1} + \dots + \theta_q u_{t-q},$$ where $u_t \sim WN(0, \sigma_u^2)$ is a White Noise process (Definition 18.5). An MA(q) process can be equivalently expressed as a polynomial in the lag operator (Equation 18.7), i.e. $$Y_t = \mu + \theta(L)\, u_t,$$ where $\theta(L)$ is a polynomial of the form $\theta(L) = 1 + \theta_1 L + \theta_2 L^2 + \dots + \theta_q L^q$. Given this representation it is clear that an MA(q) process is stationary independently of the value of the parameters. Moreover, a stationary process admits an infinite moving average, or MA($\infty$), representation (see Wold (1939)) if it satisfies the difference equation $$Y_t = \mu + \sum_{j=0}^{\infty} \theta_j u_{t-j}, \qquad \theta_0 = 1.$$ Under the following absolute-summability condition the process is stationary and ergodic: $$\sum_{j=0}^{\infty} |\theta_j| < \infty.$$ If the above condition holds, then the MA($\infty$) process can be written in compact form as $$Y_t = \mu + \theta(L)\, u_t, \qquad \theta(L) = \sum_{j=0}^{\infty} \theta_j L^j.$$
Proposition 19.1 (Expectation of an MA(q) process)
In general, the expected value of an MA(q) process depends on the distribution of $u_t$. Under the standard assumption that $u_t$ is a White Noise (Equation 18.1), its expected value is zero for every $t$, so that the expectation of the process reduces to the intercept, i.e. $$\mathbb{E}[Y_t] = \mu \quad \forall t.$$
Proof. Given a process such that $\mu \neq 0$, it is always possible to reparametrize Equation 19.1 as $$Y_t - \mu = u_t + \theta_1 u_{t-1} + \dots + \theta_q u_{t-q},$$ or to rescale the process, i.e. $\tilde{Y}_t = Y_t - \mu$, and work with a zero-mean process. Then, for an MA process of order $q$, the expectation of the process is computed as $$\mathbb{E}[Y_t] = \mu + \mathbb{E}[u_t] + \theta_1 \mathbb{E}[u_{t-1}] + \dots + \theta_q \mathbb{E}[u_{t-q}].$$ Hence, the expected value of $Y_t$ depends on the expected value of the residuals $u_t$, which under the White Noise assumption is zero for every $t$, so that $\mathbb{E}[Y_t] = \mu$.
Proposition 19.2 (Autocovariance of an MA(q) process)
For every lag $h \geq 0$, the autocovariance function, denoted as $\gamma(h)$, is defined as $$\gamma(h) = \text{Cov}(Y_t, Y_{t-h}) = \begin{cases} \sigma_u^2 \sum_{j=0}^{q-h} \theta_j \theta_{j+h} & 0 \leq h \leq q \\ 0 & h > q \end{cases}$$ where by convention $\theta_0 = 1$.
The covariance is different from zero only when the lag $h$ does not exceed the order $q$ of the process. Setting $h = 0$ one obtains the variance $\gamma(0)$, i.e. $$\gamma(0) = \text{Var}(Y_t) = \sigma_u^2 \sum_{j=0}^{q} \theta_j^2.$$ It follows that the autocorrelation function is bounded up to the lag $q$, i.e. $$\rho(h) = \frac{\gamma(h)}{\gamma(0)}, \qquad \rho(h) = 0 \text{ for } h > q,$$ where $\rho(0) = 1$.
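As an illustration, the autocovariance formula above translates directly into R; a minimal sketch (the function name ma_covariance is purely illustrative):

# Theoretical autocovariance of an MA(q) process:
# gamma(h) = sigma2 * sum_{j=0}^{q-h} theta_j * theta_{j+h}, with theta_0 = 1
ma_covariance <- function(theta, sigma2, lag = 0){
  theta <- c(1, theta)      # prepend theta_0 = 1
  q <- length(theta) - 1
  if (lag > q) return(0)    # gamma(h) = 0 for h > q
  sigma2 * sum(theta[1:(q - lag + 1)] * theta[(1 + lag):(q + 1)])
}
# Example: MA(1) with theta_1 = 0.15 and sigma2 = 1
ma_covariance(theta = 0.15, sigma2 = 1, lag = 0)  # variance: 1 + 0.15^2
ma_covariance(theta = 0.15, sigma2 = 1, lag = 1)  # first lag: 0.15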
19.1.1 MA(1)
Proposition 19.3 (Moments of an MA(1) process)
Let’s consider an MA(1) process, i.e. $$Y_t = \mu + \theta_1 u_{t-1} + u_t.$$ Independently of its specific distribution, $u_t$ has to be a White Noise, hence with an expected value equal to zero. Therefore, the expectation of an MA(1) process is equal to the intercept, i.e. $\mathbb{E}[Y_t] = \mu$. The variance instead is equal to $$\gamma(0) = \text{Var}(Y_t) = \sigma_u^2\,(1 + \theta_1^2).$$ In general, the autocovariance function for the lag $h \geq 1$ is defined as $$\gamma(h) = \text{Cov}(Y_t, Y_{t-h}) = \begin{cases} \theta_1 \sigma_u^2 & h = 1 \\ 0 & h > 1 \end{cases}$$ It follows that the autocovariance function is bounded up to the first lag, i.e. $\gamma(h) = 0$ for $h > 1$, and therefore the process is always stationary without requiring any condition on the parameter $\theta_1$. Also the autocorrelation is different from zero only between the first two lags, i.e. $$\rho(1) = \frac{\theta_1}{1 + \theta_1^2}, \qquad \rho(h) = 0 \text{ for } h > 1,$$ and the process is said to have a short memory.
Proof. Let’s consider an MA(1) process $Y_t = \mu + \theta_1 u_{t-1} + u_t$, where $u_t$ is a White Noise process (Equation 18.1). The expected value of $Y_t$ depends only on the intercept $\mu$, i.e. $$\mathbb{E}[Y_t] = \mu + \theta_1 \mathbb{E}[u_{t-1}] + \mathbb{E}[u_t] = \mu.$$ Under the White Noise assumption the residuals are uncorrelated, hence the variance is computed as $$\text{Var}(Y_t) = \theta_1^2\, \text{Var}(u_{t-1}) + \text{Var}(u_t) = \sigma_u^2\,(1 + \theta_1^2).$$ By definition, the autocovariance function between time $t$ and a generic lagged time $t - h$ reads $$\gamma(h) = \text{Cov}(Y_t, Y_{t-h}) = \mathbb{E}\big[(\theta_1 u_{t-1} + u_t)(\theta_1 u_{t-h-1} + u_{t-h})\big] = \begin{cases} \theta_1 \sigma_u^2 & h = 1 \\ 0 & h > 1 \end{cases}$$ This is a consequence of $u_t$ being a White Noise and so uncorrelated in time, i.e. $\mathbb{E}[u_t u_s] = 0$ for every $t \neq s$. This implies that also the correlation between two lags is zero if $h > 1$.
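As a quick numerical check of these results, the closed-form first-lag autocorrelation $\rho(1) = \theta_1/(1 + \theta_1^2)$ can be compared with the model-implied value from stats::ARMAacf; the value $\theta_1 = 0.15$ below is just an illustrative choice.

theta1 <- 0.15
# Closed-form first-lag autocorrelation of an MA(1)
rho1_formula <- theta1 / (1 + theta1^2)
# Autocorrelation implied by the MA(1) model via stats::ARMAacf
rho1_armaacf <- ARMAacf(ma = theta1, lag.max = 1)["1"]
c(formula = rho1_formula, ARMAacf = unname(rho1_armaacf))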
Example: stationary MA(1)
Example 19.1 Under the assumption that the residuals are Gaussian, i.e. $u_t \sim N(0, \sigma_u^2)$, we can simulate scenarios of a moving-average process of order 1 of the form $$Y_t = \mu + \theta_1 u_{t-1} + u_t.$$
MA(1) simulation
set.seed(1) # random seed
# *************************************************
# Inputs
# *************************************************
# Number of steps
t_bar <- 100000
# Long term mean
mu <- 1
# MA parameters
theta <- c(theta1 = 0.15)
# Variance residuals
sigma2_u <- 1
# *************************************************
# Simulation
# *************************************************
# Initialization
Yt <- c(mu)
# Simulated residuals
u_t <- rnorm(t_bar, 0, sqrt(sigma2_u))
# Simulated MA(1) process
for(t in 2:t_bar){
  Yt[t] <- mu + theta[1]*u_t[t-1] + u_t[t]
}
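The empirical moments reported in Table 19.1 can be obtained from the simulated path, for instance as in the following sketch; the theoretical counterparts follow from Proposition 19.3.

# Empirical moments from the simulated MA(1) path
e_mc  <- mean(Yt)                  # expectation
v_mc  <- var(Yt)                   # variance
cv_mc <- cov(Yt[-1], Yt[-t_bar])   # autocovariance at lag 1
cr_mc <- cor(Yt[-1], Yt[-t_bar])   # autocorrelation at lag 1
# Theoretical moments from Proposition 19.3
e_th  <- mu
v_th  <- sigma2_u * (1 + theta[1]^2)
cv_th <- sigma2_u * theta[1]
cr_th <- theta[1] / (1 + theta[1]^2)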
Table 19.1: Empirical and theoretical expectation, variance, covariance and correlation (first lag) for a stationary MA(1) process.
19.2 AR(p)
In time series analysis a univariate time series $Y_t$ is defined as an Autoregressive process of order $p$ (AR(p)) when it satisfies the difference equation $$Y_t = \mu + \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \dots + \phi_p Y_{t-p} + u_t,$$ where $p$ defines the order of the process and $u_t \sim WN(0, \sigma_u^2)$. In compact form: $$Y_t = \mu + \sum_{i=1}^{p} \phi_i Y_{t-i} + u_t.$$ An Autoregressive process can be equivalently expressed in terms of the polynomial lag operator, i.e. $$\phi(L)\, Y_t = \mu + u_t, \qquad \phi(L) = 1 - \phi_1 L - \phi_2 L^2 - \dots - \phi_p L^p.$$ From Section 18.3.1 it follows that a stationary AR(p) process exists if and only if all the solutions of the characteristic equation $\phi(z) = 0$ are greater than 1 in absolute value. In such a case the AR(p) process admits an equivalent representation in terms of an MA($\infty$), i.e. $$Y_t = \phi(L)^{-1}(\mu + u_t).$$
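In R the stationarity condition can be checked numerically by verifying that all roots of the characteristic polynomial $\phi(z) = 1 - \phi_1 z - \dots - \phi_p z^p$ lie outside the unit circle; a minimal sketch (the helper name ar_is_stationary is illustrative):

# Check stationarity of an AR(p) via the roots of phi(z) = 1 - phi_1 z - ... - phi_p z^p
ar_is_stationary <- function(phi){
  roots <- polyroot(c(1, -phi))   # roots of the characteristic polynomial
  all(Mod(roots) > 1)             # stationary if all roots lie outside the unit circle
}
ar_is_stationary(c(0.5, 0.3))     # TRUE: stationary
ar_is_stationary(c(0.9, 0.3))     # FALSE: a root falls inside the unit circle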
Proposition 19.4 (Expected value of a stationary AR(p) process)
The unconditional expected value of a stationary AR(p) process reads $$\mathbb{E}[Y_t] = \frac{\mu}{1 - \phi_1 - \phi_2 - \dots - \phi_p}.$$
Proof. Let’s consider an AR(p) process $Y_t = \mu + \phi_1 Y_{t-1} + \dots + \phi_p Y_{t-p} + u_t$; then the unconditional expectation of the process is computed as $$\mathbb{E}[Y_t] = \mu + \phi_1 \mathbb{E}[Y_{t-1}] + \dots + \phi_p \mathbb{E}[Y_{t-p}] + \mathbb{E}[u_t].$$ Under the assumption of stationarity the long term expectation of $Y_t$ is the same as the long term expectation of $Y_{t-1}, \dots, Y_{t-p}$. Hence, solving for the expected value one obtains $$\mathbb{E}[Y_t]\,(1 - \phi_1 - \dots - \phi_p) = \mu \quad \Longrightarrow \quad \mathbb{E}[Y_t] = \frac{\mu}{1 - \phi_1 - \dots - \phi_p}.$$
If the AR(p) process is stationary, the covariance function satisfies the recursive relation $$\gamma(h) = \phi_1 \gamma(h-1) + \phi_2 \gamma(h-2) + \dots + \phi_p \gamma(h-p), \qquad h \geq 1,$$ where $\gamma(-h) = \gamma(h)$, while for $h = 0$ $$\gamma(0) = \phi_1 \gamma(1) + \phi_2 \gamma(2) + \dots + \phi_p \gamma(p) + \sigma_u^2.$$ For $h = 0, 1, \dots, p$ the above equations form a system of $p + 1$ linear equations in the $p + 1$ unknowns $\gamma(0), \gamma(1), \dots, \gamma(p)$, also known as the Yule-Walker equations.
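For a generic AR(p) the Yule-Walker system can be solved numerically; the sketch below builds the linear system for the autocovariances $\gamma(0), \dots, \gamma(p)$ and solves it (the function name yule_walker_covariance is illustrative):

# Solve the Yule-Walker system for the autocovariances gamma(0), ..., gamma(p)
# of an AR(p) process with parameters phi and innovation variance sigma2
yule_walker_covariance <- function(phi, sigma2){
  p <- length(phi)
  A <- diag(p + 1)                   # unknowns: gamma(0), ..., gamma(p)
  for(h in 0:p){
    for(j in 1:p){
      k <- abs(h - j)                # gamma(h - j) = gamma(|h - j|)
      A[h + 1, k + 1] <- A[h + 1, k + 1] - phi[j]
    }
  }
  b <- c(sigma2, rep(0, p))          # only the gamma(0) equation carries sigma2
  setNames(solve(A, b), paste0("lag", 0:p))
}
# Example: AR(1) with phi = 0.5 -> gamma(0) = 1/(1 - 0.25), gamma(1) = 0.5 * gamma(0)
yule_walker_covariance(phi = 0.5, sigma2 = 1)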
Proposition 19.5 (Covariance function of an AR(1) process)
Let’s consider an AR(1) model without intercept so that $\mu = 0$. Then, its covariance function reads $$\gamma(h) = \phi_1^h\, \frac{\sigma_u^2}{1 - \phi_1^2}, \qquad h \geq 0.$$
Proof. Let’s consider an AR(1) model without intercept so that $\mu = 0$, i.e. $$Y_t = \phi_1 Y_{t-1} + u_t.$$ The proof of the covariance function is divided in two parts. Firstly, we compute the variance and covariances of the AR(1) model. Then, we set up the system and we solve it. Notably, the variance of $Y_t$ reads $$\gamma(0) = \mathbb{E}[Y_t^2] = \phi_1 \mathbb{E}[Y_{t-1} Y_t] + \mathbb{E}[u_t Y_t] = \phi_1 \gamma(1) + \sigma_u^2,$$ remembering that $\mathbb{E}[u_t Y_t] = \sigma_u^2$. The covariance with the first lag, namely $\gamma(1)$, is computed as $$\gamma(1) = \mathbb{E}[Y_t Y_{t-1}] = \phi_1 \mathbb{E}[Y_{t-1}^2] + \mathbb{E}[u_t Y_{t-1}] = \phi_1 \gamma(0).$$ The Yule-Walker system is given by Equation 19.4 and Equation 19.5, i.e. $$\begin{cases} \gamma(0) = \phi_1 \gamma(1) + \sigma_u^2 \\ \gamma(1) = \phi_1 \gamma(0) \end{cases}$$ In order to solve the system, let’s substitute $\gamma(1)$ (Equation 19.5) in $\gamma(0)$ (Equation 19.4) and solve for $\gamma(0)$, i.e. $$\gamma(0) = \phi_1^2 \gamma(0) + \sigma_u^2 \quad \Longrightarrow \quad \gamma(0) = \frac{\sigma_u^2}{1 - \phi_1^2}.$$ Hence, by the relation $\gamma(h) = \phi_1 \gamma(h-1)$ the covariance reads explicitly $$\gamma(h) = \phi_1^h\, \frac{\sigma_u^2}{1 - \phi_1^2}.$$
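A quick numerical check of this closed form compares it with the autocorrelations implied by stats::ARMAacf, scaled by $\gamma(0)$; the parameter values are illustrative.

phi1 <- 0.5; sigma2_u <- 1; h <- 0:3
# Closed-form autocovariance of an AR(1): gamma(h) = phi^h * sigma2 / (1 - phi^2)
gamma_formula <- phi1^h * sigma2_u / (1 - phi1^2)
# Same quantity via the model-implied autocorrelations scaled by gamma(0)
gamma_armaacf <- ARMAacf(ar = phi1, lag.max = 3) * sigma2_u / (1 - phi1^2)
rbind(formula = gamma_formula, ARMAacf = gamma_armaacf)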
Proposition 19.6 (Covariance function of an AR(2) process)
Let’s consider an AR(2) model without intercept so that $\mu = 0$. Then, its covariance function reads $$\gamma(0) = \frac{(1 - \phi_2)\, \sigma_u^2}{(1 + \phi_2)\big[(1 - \phi_2)^2 - \phi_1^2\big]}, \qquad \gamma(1) = \frac{\phi_1}{1 - \phi_2}\, \gamma(0), \qquad \gamma(2) = \left(\phi_2 + \frac{\phi_1^2}{1 - \phi_2}\right)\gamma(0),$$ where $\gamma(h) = \phi_1 \gamma(h-1) + \phi_2 \gamma(h-2)$ for $h > 2$.
Proof. Let’s consider an AR(2) model without intercept so that $\mu = 0$, i.e. $$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + u_t.$$ The proof of the covariance function is divided in two parts. Firstly, we compute the variance and covariances of the AR(2) model. Then, we set up the system and we solve it. Notably, the variance of $Y_t$ reads $$\gamma(0) = \mathbb{E}[Y_t^2] = \phi_1 \gamma(1) + \phi_2 \gamma(2) + \sigma_u^2,$$ remembering that $\mathbb{E}[u_t Y_t] = \sigma_u^2$. The covariance with the first lag, namely $\gamma(1)$, is computed as $$\gamma(1) = \mathbb{E}[Y_t Y_{t-1}] = \phi_1 \gamma(0) + \phi_2 \gamma(1).$$ The covariance with the second lag, namely $\gamma(2)$, is computed as $$\gamma(2) = \mathbb{E}[Y_t Y_{t-2}] = \phi_1 \gamma(1) + \phi_2 \gamma(0).$$ Solving the second equation for $\gamma(1)$ gives $\gamma(1) = \frac{\phi_1}{1 - \phi_2}\gamma(0)$; substituting it into the third gives $\gamma(2) = \left(\phi_2 + \frac{\phi_1^2}{1 - \phi_2}\right)\gamma(0)$, and substituting both into the first and solving for $\gamma(0)$ yields $$\gamma(0) = \frac{(1 - \phi_2)\, \sigma_u^2}{(1 + \phi_2)\big[(1 - \phi_2)^2 - \phi_1^2\big]}.$$
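The closed-form expressions of Proposition 19.6 can be checked against the model-implied autocorrelations from stats::ARMAacf; a sketch with illustrative parameter values:

phi <- c(0.5, 0.3); sigma2_u <- 1   # illustrative AR(2) parameters
# Closed-form variance and autocovariances from the Yule-Walker solution
g0 <- (1 - phi[2]) * sigma2_u / ((1 + phi[2]) * ((1 - phi[2])^2 - phi[1]^2))
g1 <- phi[1] / (1 - phi[2]) * g0
g2 <- (phi[2] + phi[1]^2 / (1 - phi[2])) * g0
# Model-implied autocovariances via stats::ARMAacf scaled by gamma(0)
g_armaacf <- ARMAacf(ar = phi, lag.max = 2) * g0
rbind(formula = c(g0, g1, g2), ARMAacf = unname(g_armaacf))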
Proof. Let’s consider an AR(3) model without intercept so that $\mu = 0$, i.e. $$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \phi_3 Y_{t-3} + u_t,$$ where $u_t \sim WN(0, \sigma_u^2)$. Notably, the variance is computed as $$\gamma(0) = \mathbb{E}[Y_t^2] = \phi_1 \gamma(1) + \phi_2 \gamma(2) + \phi_3 \gamma(3) + \sigma_u^2, \qquad (19.11)$$ remembering that $\mathbb{E}[u_t Y_t] = \sigma_u^2$. The covariance with the first lag, namely $\gamma(1)$, is computed as $$\gamma(1) = \mathbb{E}[Y_t Y_{t-1}] = \phi_1 \gamma(0) + \phi_2 \gamma(1) + \phi_3 \gamma(2). \qquad (19.12)$$ The covariance with the second lag, namely $\gamma(2)$, is computed as $$\gamma(2) = \mathbb{E}[Y_t Y_{t-2}] = \phi_1 \gamma(1) + \phi_2 \gamma(0) + \phi_3 \gamma(1). \qquad (19.13)$$ The covariance with the third lag, namely $\gamma(3)$, is computed as $$\gamma(3) = \mathbb{E}[Y_t Y_{t-3}] = \phi_1 \gamma(2) + \phi_2 \gamma(1) + \phi_3 \gamma(0). \qquad (19.14)$$ The Yule-Walker system given by Equation 19.11, Equation 19.12, Equation 19.13 and Equation 19.14 reads $$\begin{cases} \gamma(0) = \phi_1 \gamma(1) + \phi_2 \gamma(2) + \phi_3 \gamma(3) + \sigma_u^2 \\ \gamma(1) = \phi_1 \gamma(0) + \phi_2 \gamma(1) + \phi_3 \gamma(2) \\ \gamma(2) = (\phi_1 + \phi_3) \gamma(1) + \phi_2 \gamma(0) \\ \gamma(3) = \phi_1 \gamma(2) + \phi_2 \gamma(1) + \phi_3 \gamma(0) \end{cases}$$ Let’s start by expressing $\gamma(2)$ (Equation 19.13) in terms of $\gamma(0)$ and $\gamma(1)$, i.e. $$\gamma(2) = (\phi_1 + \phi_3)\gamma(1) + \phi_2 \gamma(0), \qquad (19.15)$$ and let’s substitute the above expression of $\gamma(2)$ (Equation 19.15) in $\gamma(1)$ (Equation 19.12), i.e. $$\gamma(1) = \phi_1 \gamma(0) + \phi_2 \gamma(1) + \phi_3\big[(\phi_1 + \phi_3)\gamma(1) + \phi_2 \gamma(0)\big].$$ At this point $\gamma(1)$ depends only on $\gamma(0)$, hence we can solve it: $$\gamma(1) = \underbrace{\frac{\phi_1 + \phi_2 \phi_3}{1 - \phi_2 - \phi_1 \phi_3 - \phi_3^2}}_{c_1}\, \gamma(0). \qquad (19.16)$$ With $\gamma(1)$ solved, one can come back to the expression of $\gamma(2)$ (Equation 19.15) and substitute the result, obtaining an explicit expression for $\gamma(2)$, i.e. $$\gamma(2) = \underbrace{\big[(\phi_1 + \phi_3)\, c_1 + \phi_2\big]}_{c_2}\, \gamma(0). \qquad (19.17)$$ Substituting the explicit expressions of $\gamma(2)$ (Equation 19.17) and $\gamma(1)$ (Equation 19.16) into $\gamma(3)$ (Equation 19.14) completes the system, i.e. $$\gamma(3) = \underbrace{\big[\phi_1 c_2 + \phi_2 c_1 + \phi_3\big]}_{c_3}\, \gamma(0). \qquad (19.18)$$ Finally, substituting (Equation 19.16), (Equation 19.17) and (Equation 19.18) in (Equation 19.11) gives the variance, i.e. $$\gamma(0) = (\phi_1 c_1 + \phi_2 c_2 + \phi_3 c_3)\, \gamma(0) + \sigma_u^2 \quad \Longrightarrow \quad \gamma(0) = \frac{\sigma_u^2}{1 - \phi_1 c_1 - \phi_2 c_2 - \phi_3 c_3},$$ and $\gamma(h) = c_h\, \gamma(0)$ for $h = 1, 2, 3$.
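The simulation below relies on two helper functions, AR4_simulate and AR4_covariance, whose definitions are not shown here; a minimal sketch of what they might look like (an AR(4) path simulator and a numerical Yule-Walker solver, consistent with how they are called below) is:

# Sketch of the helper functions used below (assumed implementations).
# AR4_simulate: simulate t_bar observations from an AR(4) without intercept
AR4_simulate <- function(t_bar, phi, sigma2){
  u  <- rnorm(t_bar, 0, sqrt(sigma2))
  Yt <- rep(0, t_bar)
  for(t in 5:t_bar){
    Yt[t] <- sum(phi * Yt[(t-1):(t-4)]) + u[t]
  }
  Yt
}
# AR4_covariance: autocovariances gamma(0), ..., gamma(max(p, lags))
# obtained by solving the Yule-Walker system numerically
AR4_covariance <- function(phi, sigma2, lags = 0){
  p <- length(phi)
  h_max <- max(p, lags)
  A <- diag(h_max + 1)
  for(h in 0:h_max){
    for(j in 1:p){
      k <- abs(h - j)
      A[h + 1, k + 1] <- A[h + 1, k + 1] - phi[j]
    }
  }
  solve(A, c(sigma2, rep(0, h_max)))
}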
# *************************************************
# Inputs
# *************************************************
# Number of simulations
nsim <- 500
# Horizon for each simulation
t_bar <- 100000
# AR(4) parameters
phi <- c(0.3, 0.15, 0.1, 0.03)
# Variance of the residuals
sigma2 <- 1
# *************************************************
# Simulations
# *************************************************
scenarios <- purrr::map(1:nsim, ~AR4_simulate(t_bar, phi, sigma2))
# *************************************************
# Moments
# *************************************************
# Compute variance and covariances with Monte Carlo
v_mc <- mean(purrr::map_dbl(scenarios, ~var(.x)))
cv_mc_L1 <- mean(purrr::map_dbl(scenarios, ~cov(.x[-1], lag(.x, 1)[-1])))
cv_mc_L2 <- mean(purrr::map_dbl(scenarios, ~cov(.x[-c(1,2)], lag(.x, 2)[-c(1,2)])))
cv_mc_L3 <- mean(purrr::map_dbl(scenarios, ~cov(.x[-c(1:3)], lag(.x, 3)[-c(1:3)])))
cv_mc_L4 <- mean(purrr::map_dbl(scenarios, ~cov(.x[-c(1:4)], lag(.x, 4)[-c(1:4)])))
cv_mc_L5 <- mean(purrr::map_dbl(scenarios, ~cov(.x[-c(1:5)], lag(.x, 5)[-c(1:5)])))
# Compute variance and covariances with formulas
v_mod <- AR4_covariance(phi, sigma2, lags = 0)[1]
cv_mod_L1 <- AR4_covariance(phi, sigma2, lags = 1)[2]
cv_mod_L2 <- AR4_covariance(phi, sigma2, lags = 2)[3]
cv_mod_L3 <- AR4_covariance(phi, sigma2, lags = 3)[4]
cv_mod_L4 <- AR4_covariance(phi, sigma2, lags = 4)[5]
cv_mod_L5 <- AR4_covariance(phi, sigma2, lags = 5)[6]
Covariance         Formula     Monte Carlo
Lag 0 (variance)   1.2543907   1.2509967
Lag 1              0.5084245   0.5002624
Lag 2              0.4036375   0.3996579
Lag 3              0.3380467   0.3349660
Lag 4              0.2504338   0.2482111
Lag 5              0.1814536   0.1797934
Table 19.4: Theoretical (formula) and Monte Carlo long-term variance and covariances, computed on 500 Monte Carlo simulations (t = 100000).
19.2.1 Stationary AR(1)
Let’s consider an AR(1) process, i.e. $$Y_t = \mu + \phi_1 Y_{t-1} + u_t.$$ Through recursion up to time 0 it is possible to express an AR(1) model as a moving average of the past shocks, i.e. $$Y_t = \phi_1^t Y_0 + \mu \sum_{i=0}^{t-1} \phi_1^i + \sum_{i=0}^{t-1} \phi_1^i u_{t-i},$$ where the process is stationary if and only if $|\phi_1| < 1$. In fact, independently of the specific distribution of the residuals $u_t$, the unconditional expectation of an AR(1) converges if and only if $|\phi_1| < 1$, i.e. $$\mathbb{E}[Y_t] = \frac{\mu}{1 - \phi_1}.$$ The variance instead is computed as $$\gamma(0) = \text{Var}(Y_t) = \frac{\sigma_u^2}{1 - \phi_1^2}.$$ The autocovariance decays exponentially fast depending on the parameter $\phi_1$, i.e. $$\gamma(h) = \phi_1^h\, \frac{\sigma_u^2}{1 - \phi_1^2},$$ where in general for the lag $h$, $\gamma(h) = \phi_1 \gamma(h-1)$. Finally, the autocorrelation function is $$\rho(h) = \frac{\gamma(h)}{\gamma(0)} = \phi_1^h,$$ where in general for the lag $h$, $\rho(h) = \phi_1 \rho(h-1)$. An example of a simulated AR(1) process ($\mu = 0.5$, $\phi_1 = 0.95$ and $\sigma_u^2 = 1$, with Normally distributed residuals) with its covariance function is shown in Figure 19.2.
Figure 19.2: AR(1) simulation and expected value (red) on the top. Empirical autocovariance (gray) and fitted exponential decay (blue) at the bottom.
AR(1) simulation
set.seed(1) # random seed
# *************************************************
# Inputs
# *************************************************
# Number of steps
t_bar <- 100000
# Long term mean
mu <- 0.5
# AR parameters
phi <- c(phi1 = 0.95)
# Variance residuals
sigma2_u <- 1
# *************************************************
# Simulation
# *************************************************
# Initialization
Yt <- c(mu)
# Simulated residuals
u_t <- rnorm(t_bar, 0, sqrt(sigma2_u))
# Simulated AR(1) process
for(t in 2:t_bar){
  Yt[t] <- mu + phi[1]*Yt[t-1] + u_t[t]
}
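The exponential decay of the autocovariance shown in Figure 19.2 can be verified directly on the simulated path against the closed form $\gamma(h) = \phi_1^h \sigma_u^2/(1 - \phi_1^2)$; a sketch:

# Empirical autocovariance of the simulated AR(1) path at lags 0-10
acov_emp <- acf(Yt, lag.max = 10, type = "covariance", plot = FALSE)$acf[, 1, 1]
# Theoretical autocovariance: gamma(h) = phi^h * sigma2 / (1 - phi^2)
acov_th <- phi[1]^(0:10) * sigma2_u / (1 - phi[1]^2)
round(rbind(empirical = acov_emp, theoretical = acov_th), 3)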
Example: sampling from a stationary AR(1)
Example 19.2 Sampling the process for different $t$ we expect that, on a large number of simulations, the distribution will be normal with stationary moments, i.e. $$Y_t \sim N\!\left(\frac{\mu}{1 - \phi_1},\; \frac{\sigma^2}{1 - \phi_1^2}\right) \quad \text{for all } t.$$
Simulate and sample a stationary AR(1)
# *************************************************
# Inputs
# *************************************************
set.seed(3) # random seed
t_bar <- 120                   # number of steps ahead
j_bar <- 1000                  # number of simulations
sigma <- 1                     # standard deviation of epsilon
par <- c(mu = 0, phi1 = 0.7)   # parameters
y0 <- par[1]/(1-par[2])        # initial point
t_sample <- c(30, 60, 90)      # sample times
# *************************************************
# Simulations
# *************************************************
# Process simulations
scenarios <- list()
for(j in 1:j_bar){
  Y <- c(y0)
  # Simulated residuals
  eps <- rnorm(t_bar, 0, sigma)
  # Simulated AR(1)
  for(t in 2:t_bar){
    Y[t] <- par[1] + par[2]*Y[t-1] + eps[t]
  }
  scenarios[[j]] <- dplyr::tibble(j = rep(j, t_bar), t = 1:t_bar, y = Y, eps = eps)
}
scenarios <- dplyr::bind_rows(scenarios)
# Trajectory j
df_j <- dplyr::filter(scenarios, j == 6)
# Sample at different times
df_t1 <- dplyr::filter(scenarios, t == t_sample[1])
df_t2 <- dplyr::filter(scenarios, t == t_sample[2])
df_t3 <- dplyr::filter(scenarios, t == t_sample[3])
Figure 19.3: Stationary AR(1) simulations with $\mu = 0$ and $\phi_1 = 0.7$, one possible trajectory and samples at different times.
Figure 19.4: Stationary AR(1) histograms for different sampled times, with the normal pdf from the empirical moments and the normal pdf with the theoretical stationary moments.
Statistic             Theoretical   Empirical
Expectation           0.000000      -0.0027287
Variance              1.960784      1.9530614
Covariance (lag 1)    1.372549      1.3586670
Correlation (lag 1)   0.700000      0.6956660
Table 19.5: Empirical and theoretical expectation, variance, covariance and correlation (first lag) for a stationary AR(1) process.
19.2.2 Non-stationary AR(1): random walk
A non-stationary process has an expectation and/or a variance that changes over time. Considering the setup of an AR(1), if $\phi_1 = 1$ the process degenerates into a so-called random walk process. Formally, if additionally $\mu \neq 0$ it is called a random walk with drift, i.e. $$Y_t = \mu + Y_{t-1} + u_t.$$ Considering its representation $$Y_t = Y_0 + \mu t + \sum_{i=1}^{t} u_i,$$ it is easy to see that the expectation depends on the starting point and on time, and the shocks never decay. In fact, computing the expectation and variance of a random walk process, a clear dependence on time emerges, i.e. $$\mathbb{E}[Y_t] = Y_0 + \mu t, \qquad \text{Var}(Y_t) = \sigma_u^2\, t,$$ and the variance tends to explode to $\infty$ as $t \to \infty$.
Stochastic trend of a Random walk
Let’s define the stochastic trend as $\xi_t = \sum_{i=1}^{t} u_i$; then $$Y_t = Y_0 + \mu t + \xi_t.$$
The expectation of $\xi_t$, if the $u_i$ are all martingale difference sequences, is zero, i.e. $$\mathbb{E}[\xi_t] = \sum_{i=1}^{t} \mathbb{E}[u_i] = 0,$$ and therefore $\mathbb{E}[Y_t] = Y_0 + \mu t$.
The variance of $\xi_t$, if the $u_i$ are all martingale difference sequences, is time-dependent, i.e. $$\text{Var}(\xi_t) = \sum_{i=1}^{t} \text{Var}(u_i) = \sigma_u^2\, t,$$ while the covariance between two times $t$ and $t + h$ depends on the lag, i.e. $$\text{Cov}(\xi_t, \xi_{t+h}) = \sigma_u^2 \min(t, t+h) = \sigma_u^2\, t,$$ and so the correlation tends to one as $t \to \infty$; in fact $$\text{Corr}(\xi_t, \xi_{t+h}) = \frac{\sigma_u^2\, t}{\sqrt{\sigma_u^2\, t}\sqrt{\sigma_u^2\,(t+h)}} = \sqrt{\frac{t}{t+h}} \longrightarrow 1.$$
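A quick simulation check of the linear growth $\text{Var}(\xi_t) = \sigma^2 t$ of the stochastic trend (a sketch with illustrative values):

set.seed(123)
sigma2 <- 1; t_bar <- 100; n_sim <- 5000
# Simulate n_sim stochastic trends xi_t = cumulative sum of t i.i.d. shocks
xi <- replicate(n_sim, cumsum(rnorm(t_bar, 0, sqrt(sigma2))))
# Empirical variance across simulations at selected times vs the theoretical sigma2 * t
t_check <- c(25, 50, 100)
rbind(empirical = apply(xi[t_check, ], 1, var), theoretical = sigma2 * t_check)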
Random Walk simulation
set.seed(1) # random seed
# *************************************************
# Inputs
# *************************************************
# Number of steps
t_bar <- 100000
# Drift
mu <- 0.02
# Variance residuals
sigma2_u <- 1
# *************************************************
# Simulation
# *************************************************
# Initialization
Yt <- c(mu)
# Simulated residuals
u_t <- rnorm(t_bar, 0, sqrt(sigma2_u))
# Simulated random walk with drift
for(t in 2:t_bar){
  Yt[t] <- mu + Yt[t-1] + u_t[t]
}
Figure 19.5: Random walk simulation and expected value (red) on the top. Empirical autocorrelation (gray).
Example: sampling from non-stationary AR(1)
Example 19.3 Let’s simulate a random walk process with drift and a Gaussian error, namely $u_t \sim N(0, \sigma^2)$.
Simulation of a non-stationary AR(1)
# *************************************************
# Inputs
# *************************************************
set.seed(5) # random seed
t_bar <- 100                    # number of steps ahead
j_bar <- 2000                   # number of simulations
sigma0 <- 1                     # standard deviation of epsilon
par <- c(mu = 0.3, phi1 = 1)    # parameters
X0 <- c(0)                      # initial point
t_sample <- c(30, 60, 90)       # sample times
# *************************************************
# Simulation
# *************************************************
# Process simulations
scenarios <- tibble()
for(j in 1:j_bar){
  # Initialize X0 and variance
  Xt <- rep(X0, t_bar)
  sigma <- rep(sigma0, t_bar)
  # Simulated residuals
  eps <- rnorm(t_bar, mean = 0, sd = sigma0)
  for(t in 2:t_bar){
    sigma[t] <- sigma0*sqrt(t)
    Xt[t] <- par[1] + par[2]*Xt[t-1] + eps[t]
  }
  df <- tibble(j = rep(j, t_bar), t = 1:t_bar, Xt = Xt, sigma = sigma, eps = eps)
  scenarios <- dplyr::bind_rows(scenarios, df)
}
# Compute simulated moments
scenarios <- scenarios %>%
  group_by(t) %>%
  mutate(e_x = mean(Xt), sd_x = sd(Xt))
# Trajectory j
df_j <- dplyr::filter(scenarios, j == 6)
# Sample at different times
df_t1 <- dplyr::filter(scenarios, t == t_sample[1])
df_t2 <- dplyr::filter(scenarios, t == t_sample[2])
df_t3 <- dplyr::filter(scenarios, t == t_sample[3])
Figure 19.6: Non-stationary AR(1) simulation with expected value (red), a possible trajectory (green) and samples for different times (magenta) on the top. Theoretical (blue) and empirical (black) standard deviation at the bottom.
Sampling the process for different $t$ we expect that, on a large number of simulations, the distribution will still be normal but with non-stationary moments, i.e. $$Y_t \sim N\big(Y_0 + \mu t,\; \sigma^2 t\big).$$
Figure 19.7: Non-stationary AR(1) histograms for different sampled times, with the normal pdf with empirical moments (blue) and the normal pdf with theoretical moments (magenta).
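The theoretical and empirical moments compared in Table 19.6 below can be extracted from the simulated scenarios, for instance as in this sketch (since the simulation starts at X0 = 0 at t = 1 and the drift enters from t = 2, the theoretical mean and variance at a sample time t are $\mu(t-1)$ and $\sigma_0^2(t-1)$):

# Empirical and theoretical moments at the sampled times (cf. Table 19.6)
moments <- scenarios %>%
  dplyr::filter(t %in% t_sample) %>%
  dplyr::group_by(t) %>%
  dplyr::summarise(
    e_theoric = unname(par[1]) * (t[1] - 1),  # drift accumulated since X0 = 0 at t = 1
    e_empiric = mean(Xt),                     # Monte Carlo mean across simulations
    v_theoric = sigma0^2 * (t[1] - 1),        # variance grows linearly with time
    v_empiric = var(Xt)                       # Monte Carlo variance across simulations
  )
moments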
t     Theoretical mean   Empirical mean   Theoretical variance   Empirical variance
30    8.7                8.698311         29                     29.69114
60    17.7               17.517805        59                     60.09756
90    26.7               26.268452        89                     90.05715
Table 19.6: Empirical and theoretical expectation and variance at different sample times for the non-stationary AR(1) (random walk with drift).
Wold, Herman. 1939. “A Study in the Analysis of Stationary Time Series. By Herman Wold.” Journal of the Institute of Actuaries 70 (1): 113–15. https://doi.org/10.1017/S0020268100011574.