Autocovariances of Vector Processes

Vector Autocovariance

Given an \(\smash{n}\)-dimensional, weakly stationary vector process, \(\smash{\boldsymbol{Y}_{t}}\), the \(\smash{j}\)th autocovariance matrix is defined as:

\[\smash{\Gamma_{j} = E[(\boldsymbol{Y}_{t}-\boldsymbol{\mu})(\boldsymbol{Y}_{t-j} - \boldsymbol{\mu})^{'}]},\]

which, by weak stationarity, does not depend on \(\smash{t}\).

Since \(\smash{Y_{1,t}}\) and \(\smash{Y_{2,t}}\) are distinct processes, in general \(\smash{\Gamma_{j} \neq \Gamma_{-j}}\):

\[\smash{\Gamma_{j}(1,2) = Cov(Y_{1,t},Y_{2,t-j}) \neq Cov(Y_{1,t},Y_{2,t+j}) = \Gamma_{-j}(1,2).}\]

Vector Autocovariance

However, it is true that \(\smash{\Gamma_{j} = \Gamma_{-j}^{'}}\):

\[\smash{\Gamma_{j}(1,2) = Cov(Y_{1,t},Y_{2,t-j}) = Cov(Y_{2,t},Y_{1,t+j}) = \Gamma_{-j}(2,1).}\]
  • Stationarity does impose \(\smash{Cov(Y_{1,t},Y_{2,t-j}) = Cov(Y_{1,t+j},Y_{2,t})}\).
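
To see both properties in practice, here is a minimal Python sketch (not from the original notes): it simulates a bivariate series in which \(\smash{Y_{1,t}}\) leads \(\smash{Y_{2,t}}\) by one period and estimates sample autocovariance matrices. The helper sample_gamma and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 10_000, 2

# Illustrative bivariate series in which Y_1 leads Y_2 by one period,
# so that Gamma_1 and Gamma_{-1} differ.
eps = rng.standard_normal((T + 1, n))
Y = np.empty((T, n))
Y[:, 0] = eps[1:, 0]
Y[:, 1] = eps[1:, 1] + 0.8 * eps[:-1, 0]   # Y_2 loads on lagged eps_1

def sample_gamma(Y, j):
    """Sample analogue of Gamma_j = E[(Y_t - mu)(Y_{t-j} - mu)']."""
    Yc = Y - Y.mean(axis=0)
    T = len(Yc)
    if j >= 0:
        return Yc[j:].T @ Yc[:T - j] / T
    return Yc[:T + j].T @ Yc[-j:] / T

G1, Gm1 = sample_gamma(Y, 1), sample_gamma(Y, -1)
print(np.round(G1, 2))          # (2,1) entry near 0.8: Y_2 lags Y_1
print(np.round(Gm1, 2))         # (2,1) entry near 0: Gamma_1 != Gamma_{-1}
print(np.allclose(G1, Gm1.T))   # True: Gamma_1 = Gamma_{-1}'
```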

Vector MA(q) Process

A vector moving average process of order \(\smash{q}\) is

\[\boldsymbol{Y}_{t} = \boldsymbol{\mu} + \boldsymbol{\varepsilon}_{t} + \Theta_{1}\boldsymbol{\varepsilon}_{t-1} + \Theta_{2}\boldsymbol{\varepsilon}_{t-2} + \ldots + \Theta_{q}\boldsymbol{\varepsilon}_{t-q}, \,\,\, \boldsymbol{\varepsilon}_{t}\overset{i.i.d.}{\sim} WN(\boldsymbol{0},\Omega),\]

where \(\smash{\Theta_{j}}\) is an \(\smash{n \times n}\) matrix of \(\smash{MA}\) coefficients for \(\smash{j = 1,\ldots,q}\).

  • We can define \(\smash{\Theta_{0} = I_{n}}\).
  • Clearly \(\smash{E[\boldsymbol{Y}_{t}] = \boldsymbol{\mu} \,\,\, \forall \, t}\).
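
As a quick illustration, the following sketch (not from the notes; all coefficient matrices and values are assumptions made up for the example) simulates a bivariate \(\smash{MA(2)}\) and checks that the sample mean is close to \(\smash{\boldsymbol{\mu}}\).

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, T = 2, 2, 5_000

# Illustrative parameters (assumptions, not from the notes):
mu = np.array([1.0, -1.0])
Thetas = [np.eye(n),                             # Theta_0 = I_n
          np.array([[0.5, 0.1], [0.0, 0.4]]),    # Theta_1
          np.array([[0.2, 0.0], [0.3, 0.1]])]    # Theta_2
Omega = np.array([[1.0, 0.3], [0.3, 1.0]])       # Var(eps_t)

# Draw eps_t with covariance Omega via a Cholesky factor.
L = np.linalg.cholesky(Omega)
eps = rng.standard_normal((T + q, n)) @ L.T

# Y_t = mu + Theta_0 eps_t + ... + Theta_q eps_{t-q}; row t + q of eps is eps_t.
Y = mu + sum(eps[q - i : T + q - i] @ Thetas[i].T for i in range(q + 1))

print(Y.mean(axis=0))   # close to mu, since E[Y_t] = mu
```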

Vector MA(q) Autocovariances

The \(\smash{j}\)th autocovariance matrix is:

\[\begin{split}\begin{align} \Gamma_{j} & = E[(\boldsymbol{Y}_{t}-\boldsymbol{\mu})(\boldsymbol{Y}_{t-j} - \boldsymbol{\mu})^{'}] \\ & = E[(\Theta_{0}\boldsymbol{\varepsilon}_{t} + \Theta_{1}\boldsymbol{\varepsilon}_{t-1} + \ldots + \Theta_{q}\boldsymbol{\varepsilon}_{t-q}) \\ & \hspace{1.5in} \times (\Theta_{0}\boldsymbol{\varepsilon}_{t-j} + \Theta_{1}\boldsymbol{\varepsilon}_{t-j-1} + \ldots + \Theta_{q}\boldsymbol{\varepsilon}_{t-j-q})^{'}] \end{align}\end{split}\]

Vector MA(q) Autocovariances

  • For \(\smash{|j| > q}\): \(\smash{\Gamma_{j} = \boldsymbol{0}_{n \times n}}\).
  • For \(\smash{j = 0}\):
\[\begin{split}\begin{align} \Gamma_{0} & = \Theta_{0}\Omega \Theta_{0}^{'} + \Theta_{1} \Omega \Theta_{1}^{'} + \ldots + \Theta_{q} \Omega \Theta_{q}^{'} \\ & = \sum_{i=0}^{q} \Theta_{i} \Omega \Theta_{i}^{'}. \end{align}\end{split}\]
  • For \(\smash{j = 1,\ldots,q}\):
\[\begin{split}\begin{align} \Gamma_{j} & = \Theta_{j}\Omega \Theta_{0}^{'} + \Theta_{j+1} \Omega \Theta_{1}^{'} + \ldots + \Theta_{q} \Omega \Theta_{q-j}^{'} \\ & = \sum_{i=0}^{q-j} \Theta_{j+i}\Omega \Theta_{i}^{'}. \end{align}\end{split}\]

Vector MA(q) Autocovariances

  • For \(\smash{j = -1,\ldots, -q}\):
\[\begin{split}\begin{align} \Gamma_{j} & = \Theta_{0}\Omega \Theta_{-j}^{'} + \Theta_{1} \Omega \Theta_{-j+1}^{'} + \ldots + \Theta_{q+j} \Omega \Theta_{q}^{'} \\ & = \sum_{i=0}^{q+j} \Theta_{i}\Omega \Theta_{-j+i}^{'}. \end{align}\end{split}\]
  • \(\smash{\Gamma_{j}^{'} = \Gamma_{-j}}\) (verified numerically in the sketch below).
  • Because 1st and 2nd moments of \(\smash{\boldsymbol{Y}_{t}}\) are independent of time, the vector \(\smash{MA(q)}\) process is weakly stationary.
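
The closed-form autocovariances above are straightforward to implement. The following sketch (the helper ma_autocov and the parameter values are illustrative, reusing the \(\smash{MA(2)}\) example) computes \(\smash{\Gamma_{j}}\) for any \(\smash{j}\) and verifies that \(\smash{\Gamma_{-j} = \Gamma_{j}^{'}}\) and that \(\smash{\Gamma_{j} = \boldsymbol{0}}\) for \(\smash{|j| > q}\).

```python
import numpy as np

def ma_autocov(Thetas, Omega, j):
    """Gamma_j for an MA(q) with coefficient list [Theta_0, ..., Theta_q]."""
    q = len(Thetas) - 1
    if j < 0:
        return ma_autocov(Thetas, Omega, -j).T    # Gamma_{-j} = Gamma_j'
    if j > q:
        return np.zeros_like(Omega)               # Gamma_j = 0 for |j| > q
    return sum(Thetas[j + i] @ Omega @ Thetas[i].T for i in range(q - j + 1))

# Same illustrative parameters as the simulation sketch above:
Thetas = [np.eye(2),
          np.array([[0.5, 0.1], [0.0, 0.4]]),
          np.array([[0.2, 0.0], [0.3, 0.1]])]
Omega = np.array([[1.0, 0.3], [0.3, 1.0]])

print(np.allclose(ma_autocov(Thetas, Omega, -1),
                  ma_autocov(Thetas, Omega, 1).T))   # True
print(ma_autocov(Thetas, Omega, 3))                  # zero matrix: |j| > q
```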

Vector \(\smash{MA(\infty)}\) Autocovariances

The vector \(\smash{MA(\infty)}\) process is the limit of the vector \(\smash{MA(q)}\) as \(\smash{q \to \infty}\):

\[\smash{\boldsymbol{Y}_{t} = \boldsymbol{\mu} + \boldsymbol{\varepsilon}_{t} + \Theta_{1}\boldsymbol{\varepsilon}_{t-1} + \Theta_{2}\boldsymbol{\varepsilon}_{t-2} + \ldots}\]
  • The sequence of matrices \(\smash{\{\Theta_{s}\}_{s=0}^{\infty}}\) is absolutely summable if each component sequence is absolutely summable: \(\smash{\sum_{s=0}^{\infty} |\Theta_{s}(i,k)| < \infty}\) for all \(\smash{i,k}\).

Vector \(\smash{MA(\infty)}\) Autocovariances

If \(\smash{\{\Theta_{s}\}_{s=0}^{\infty}}\) is absolutely summable:

  • \(\smash{E[\boldsymbol{Y}_{t}] = \boldsymbol{\mu}}\).
  • \(\smash{\Gamma_{j} = \sum_{i=0}^{\infty} \Theta_{j+i}\Omega \Theta_{i}^{'}, \,\,\,\, j = 0,1,2,\ldots}\) (illustrated in the sketch below).
  • \(\smash{\boldsymbol{Y}_{t}}\) is ergodic for 1st and 2nd moments.
  • \(\smash{\boldsymbol{Y}_{t}}\) is weakly stationary.
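
For intuition about the infinite sum, here is a minimal sketch under an extra assumption made only for this example: the weights are taken to be \(\smash{\Theta_{s} = \Phi^{s}}\), the \(\smash{MA(\infty)}\) weights of a stable VAR(1), with illustrative \(\smash{\Phi}\) and \(\smash{\Omega}\). The hypothetical helper ma_inf_autocov truncates the sum at a finite horizon.

```python
import numpy as np

# Assumption for this example only: Theta_s = Phi^s, the MA(infinity)
# weights of a stable VAR(1); Phi and Omega are illustrative.
Phi = np.array([[0.5, 0.2], [0.1, 0.4]])   # eigenvalues 0.6 and 0.3
Omega = np.eye(2)

def ma_inf_autocov(Phi, Omega, j, S=200):
    """Truncate Gamma_j = sum_{i=0}^{inf} Phi^(j+i) Omega (Phi^i)' at i = S."""
    G = np.zeros_like(Omega)
    Pi = np.eye(len(Phi))                    # holds Phi^i
    Pji = np.linalg.matrix_power(Phi, j)     # holds Phi^(j+i)
    for _ in range(S + 1):
        G += Pji @ Omega @ Pi.T
        Pi = Pi @ Phi
        Pji = Pji @ Phi
    return G

# For a VAR(1), Gamma_0 satisfies Gamma_0 = Phi Gamma_0 Phi' + Omega:
G0 = ma_inf_autocov(Phi, Omega, 0)
print(np.allclose(G0, Phi @ G0 @ Phi.T + Omega))   # True up to truncation error
```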

Vector \(\smash{VAR(p)}\) Autocovariances

When a stationary \(\smash{VAR(p)}\) is expressed as a vector \(\smash{MA(\infty)}\), it satisfies the absolute summability condition.

  • \(\smash{\Theta_{s} = F^{s} = T\Lambda^{s} T^{-1}}\), where \(\smash{F = T\Lambda T^{-1}}\) is the eigendecomposition of the companion matrix (assuming \(\smash{F}\) is diagonalizable).
  • Each component of \(\smash{F^{s}}\) is then a weighted sum of the terms \(\smash{\lambda_{i}^{s}}\), so the component-wise sums of absolute values over \(\smash{s = 0,1,2,\ldots}\) are bounded by geometric series.
  • Because of stationarity, \(\smash{|\lambda_{i}| < 1, \, i = 1,\ldots,np}\), which means \(\smash{\{F^{s}\}_{s=0}^{\infty}}\) is absolutely summable, as the sketch below illustrates.
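
A minimal sketch of this argument for an illustrative bivariate VAR(2) (all coefficient values are made up for the example): build the companion matrix \(\smash{F}\), confirm its eigenvalues lie inside the unit circle, and check that the component-wise sums \(\smash{\sum_{s} |F^{s}|}\) converge.

```python
import numpy as np

# Illustrative bivariate VAR(2) coefficients (assumptions for this sketch):
n, p = 2, 2
Phi1 = np.array([[0.5, 0.1], [0.0, 0.3]])
Phi2 = np.array([[0.2, 0.0], [0.1, 0.1]])

# Companion matrix: top block row [Phi_1 Phi_2], identity below the diagonal.
F = np.zeros((n * p, n * p))
F[:n, :n], F[:n, n:] = Phi1, Phi2
F[n:, :n] = np.eye(n)

print(np.abs(np.linalg.eigvals(F)).max())   # < 1: the VAR(2) is stationary

# Component-wise sums of |F^s| converge because |lambda_i| < 1:
total, Fs = np.zeros_like(F), np.eye(n * p)
for _ in range(500):
    total += np.abs(Fs)
    Fs = Fs @ F
print(total.max())   # finite (geometric decay of F^s)
```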

Vector \(\smash{VAR(p)}\) Autocovariances

Recall that a \(\smash{VAR(p)}\) can be expressed as:

\[\smash{\boldsymbol{\xi}_{t} = F\boldsymbol{\xi}_{t-1} + \boldsymbol{v}_{t}}\]

In this case

\[\begin{split}\smash{\Sigma = E[\boldsymbol{\xi}_{t}\boldsymbol{\xi}_{t}^{'}] = \left[\begin{array}{cccc} \Gamma_{0} & \Gamma_{1} & \ldots & \Gamma_{p-1} \\ \Gamma_{1}^{'} & \Gamma_{0} & \ldots & \Gamma_{p-2} \\ \vdots & \vdots & \ddots & \vdots \\\Gamma_{p-1}^{'} & \Gamma_{p-2}^{'} & \ldots & \Gamma_{0} \\ \end{array} \right]}.\end{split}\]

Vector \(\smash{VAR(p)}\) Autocovariances

By the definition of \(\smash{\boldsymbol{\xi}_{t}}\),

\[\begin{split}\begin{align} \Sigma & = E[\boldsymbol{\xi}_{t}\boldsymbol{\xi}_{t}^{'}] \\ & = E\left[(F\boldsymbol{\xi}_{t-1} + \boldsymbol{v}_{t})(F\boldsymbol{\xi}_{t-1} + \boldsymbol{v}_{t})^{'}\right] \\ & = F\underset{\Sigma}{\underbrace{E[\boldsymbol{\xi}_{t-1} \boldsymbol{\xi}_{t-1}^{'}]}}F^{'} + \underset{Q}{\underbrace{E[\boldsymbol{v}_{t}\boldsymbol{v}_{t}^{'}]}} \\ & = F\Sigma F^{'} + Q, \end{align}\end{split}\]

where the cross terms vanish because \(\smash{\boldsymbol{v}_{t}}\) is uncorrelated with \(\smash{\boldsymbol{\xi}_{t-1}}\).
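
The fixed-point equation \(\smash{\Sigma = F\Sigma F^{'} + Q}\) is a discrete Lyapunov equation, so \(\smash{\Sigma}\) can be computed directly. The sketch below uses SciPy's solve_discrete_lyapunov with the illustrative VAR(2) companion matrix from the earlier example.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Companion matrix of the illustrative VAR(2) above; Q = Var(v_t) is
# Omega padded with zeros.
F = np.array([[0.5, 0.1, 0.2, 0.0],
              [0.0, 0.3, 0.1, 0.1],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
Q = np.zeros((4, 4))
Q[:2, :2] = np.array([[1.0, 0.3], [0.3, 1.0]])

Sigma = solve_discrete_lyapunov(F, Q)            # solves Sigma = F Sigma F' + Q
print(np.allclose(Sigma, F @ Sigma @ F.T + Q))   # True
print(Sigma[:2, :2])                             # Gamma_0: upper-left n x n block
```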

Using the Vec Operator

In general

\[\smash{Vec(ABC) = (C^{'}\otimes A)\,Vec(B)}.\]

Thus,

\[\begin{split}\begin{gather} Vec(\Sigma) = (F \otimes F)\,Vec(\Sigma) + Vec(Q) \\ \implies Vec(\Sigma) = [I - F\otimes F]^{-1}\, Vec(Q). \end{gather}\end{split}\]
  • \(\smash{F \otimes F}\) is an \(\smash{(np)^{2} \times (np)^{2}}\) matrix.
  • Because all eigenvalues of \(\smash{F}\) lie inside the unit circle, so do all eigenvalues of \(\smash{F\otimes F}\) (they are the products \(\smash{\lambda_{i}\lambda_{k}}\)), which means \(\smash{I - F\otimes F}\) is invertible. The sketch below solves this system directly.
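
A minimal sketch of the vec-operator solution (same illustrative \(\smash{F}\) and \(\smash{Q}\) as above): numpy's kron matches the usual column-stacking vec convention, so flattening and reshaping with order='F' recovers the same \(\smash{\Sigma}\) as the Lyapunov solver.

```python
import numpy as np

F = np.array([[0.5, 0.1, 0.2, 0.0],
              [0.0, 0.3, 0.1, 0.1],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
Q = np.zeros((4, 4))
Q[:2, :2] = np.array([[1.0, 0.3], [0.3, 1.0]])

m = F.shape[0]                      # m = np, so F kron F is (np)^2 x (np)^2
A = np.eye(m * m) - np.kron(F, F)   # I - F kron F
vec_Sigma = np.linalg.solve(A, Q.flatten(order='F'))   # column-major vec(Q)
Sigma = vec_Sigma.reshape((m, m), order='F')

print(np.allclose(Sigma, F @ Sigma @ F.T + Q))   # True: same Sigma as before
```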

Vector \(\smash{VAR(p)}\) Autocovariances

\[\begin{split}\begin{align} \Sigma_{j} & = E[\boldsymbol{\xi}_{t}\boldsymbol{\xi}_{t-j}^{'}] \\ & = E[(F\boldsymbol{\xi}_{t-1} + \boldsymbol{v}_{t})\boldsymbol{\xi}_{t-j}^{'}] \\ & = FE[\boldsymbol{\xi}_{t-1}\boldsymbol{\xi}_{t-j}^{'}] \\ & = F\Sigma_{j-1}, \,\,\, j = 1,2,3,\ldots \\ \implies \Sigma_{j} & = F^{j}\Sigma, \end{align}\end{split}\]

where \(\smash{E[\boldsymbol{v}_{t}\boldsymbol{\xi}_{t-j}^{'}] = \boldsymbol{0}}\) for \(\smash{j \geq 1}\) because \(\smash{\boldsymbol{v}_{t}}\) is uncorrelated with past values of \(\smash{\boldsymbol{\xi}}\).
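
Putting the pieces together, a short sketch (same illustrative example as above) computes \(\smash{\Sigma_{j} = F^{j}\Sigma}\) and reads off \(\smash{\Gamma_{j}}\) as the upper-left \(\smash{n \times n}\) block.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

n = 2
F = np.array([[0.5, 0.1, 0.2, 0.0],
              [0.0, 0.3, 0.1, 0.1],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
Q = np.zeros((4, 4))
Q[:2, :2] = np.array([[1.0, 0.3], [0.3, 1.0]])

Sigma = solve_discrete_lyapunov(F, Q)
for j in range(4):
    Sigma_j = np.linalg.matrix_power(F, j) @ Sigma
    print(j, np.round(Sigma_j[:n, :n], 3))   # Gamma_j = E[(Y_t-mu)(Y_{t-j}-mu)']
```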