- Show that we can write Equation (4.15) as a polynomial of order \(p\) in terms of the backward shift operator: \[
\left( 1 - \alpha_1 \mathbf{B} - \alpha_2 \mathbf{B}^2 - \cdots - \alpha_p \mathbf{B}^p \right) x_t = w_t
\]
Solution:
\[
x_t = \alpha_1 x_{t-1} + \alpha_2 x_{t-2} + \alpha_3 x_{t-3} + \cdots + \alpha_{p-1} x_{t-(p-1)} + \alpha_p x_{t-p} + w_t
\] Subtracting the lagged terms from both sides and rewriting each lag with the backward shift operator gives: \[
\begin{align*}
w_t
&= x_t - \alpha_1 x_{t-1} - \alpha_2 x_{t-2} - \alpha_3 x_{t-3} - \cdots - \alpha_{p-1} x_{t-(p-1)} - \alpha_p x_{t-p} \\
&= x_t - \alpha_1 \mathbf{B} x_{t} - \alpha_2 \mathbf{B}^2 x_{t} - \alpha_3 \mathbf{B}^3 x_{t} - \cdots - \alpha_{p-1} \mathbf{B}^{p-1} x_{t} - \alpha_p \mathbf{B}^p x_{t} \\
&= \left( 1 - \alpha_1 \mathbf{B} - \alpha_2 \mathbf{B}^2 - \alpha_3 \mathbf{B}^3 - \cdots - \alpha_{p-1} \mathbf{B}^{p-1} - \alpha_p \mathbf{B}^p \right) x_t
\end{align*}
\]
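As a quick numerical check of this identity, here is a minimal Python sketch (numpy only; the AR(2) order and the coefficients 0.5 and −0.25 are arbitrary choices for illustration) that simulates an AR(2) series and verifies that applying the backward-shift polynomial recovers the white noise terms:

```python
import numpy as np

rng = np.random.default_rng(42)
alpha = [0.5, -0.25]           # arbitrary AR(2) coefficients
n = 500
w = rng.normal(size=n)         # white noise w_t

# Simulate x_t = alpha_1 x_{t-1} + alpha_2 x_{t-2} + w_t
x = np.zeros(n)
for t in range(2, n):
    x[t] = alpha[0] * x[t - 1] + alpha[1] * x[t - 2] + w[t]

# Apply the polynomial in the backward shift operator:
# (1 - alpha_1 B - alpha_2 B^2) x_t = x_t - alpha_1 x_{t-1} - alpha_2 x_{t-2}
recovered = x[2:] - alpha[0] * x[1:-1] - alpha[1] * x[:-2]

print(np.allclose(recovered, w[2:]))   # True: the noise terms are recovered
```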
- Give another name for an \(AR(0)\) model.
Solution:
White noise
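Since an \(AR(0)\) model has no lagged terms, \(x_t = w_t\); a two-line sketch (Gaussian noise is just one common choice for \(w_t\)) is all it takes to simulate one:

```python
import numpy as np

# AR(0): x_t = w_t, so the series is the white noise itself
x = np.random.default_rng(0).normal(size=200)
```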
- Show that a random walk is a special case of an \(AR(1)\) model.
Solution:
Starting from the general \(AR(p)\) model, setting \(\alpha_2 = \alpha_3 = \cdots = \alpha_p = 0\) gives an \(AR(1)\) model; letting \(\alpha_1 = 1\) then gives: \[
\begin{align*}
x_t &= \alpha_1 x_{t-1} + \alpha_2 x_{t-2} + \alpha_3 x_{t-3} + \cdots + \alpha_{p-1} x_{t-(p-1)} + \alpha_p x_{t-p} + w_t \\
&= \alpha_1 x_{t-1} + w_t \\
&= x_{t-1} + w_t
\end{align*}
\] which is the definition of a random walk, as given in Chapter 4, Lesson 1.
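To see this numerically, a minimal sketch (numpy only; starting the recursion at \(x_0 = w_0\) is an arbitrary convention) runs the AR(1) recursion with \(\alpha_1 = 1\) and confirms the result is the cumulative sum of the noise terms, i.e. a random walk:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
w = rng.normal(size=n)

# AR(1) recursion with alpha_1 = 1: x_t = x_{t-1} + w_t
x = np.zeros(n)
x[0] = w[0]
for t in range(1, n):
    x[t] = x[t - 1] + w[t]

# A random walk is the running sum of the white noise terms
print(np.allclose(x, np.cumsum(w)))   # True
```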
- Show that the exponential smoothing model is the special case where \[\alpha_i = \alpha(1-\alpha)^i\] for \(i = 1, 2, \ldots\) and \(p \rightarrow \infty\). (See Chapter 3, Lesson 2.)
Solution: \[\begin{align*}
x_t &= \alpha_1 x_{t-1} + \alpha_2 x_{t-2} + \alpha_3 x_{t-3} + \cdots + \alpha_{p-1} x_{t-(p-1)} + \alpha_p x_{t-p} + w_t \\
&= \alpha(1-\alpha)^1 x_{t-1} + \alpha(1-\alpha)^2 x_{t-2} + \alpha(1-\alpha)^3 x_{t-3} + \cdots + \alpha(1-\alpha)^{p-1} x_{t-(p-1)} + \alpha(1-\alpha)^p x_{t-p} + w_t
\end{align*}\]
Letting \(p \rightarrow \infty\), this is \[x_t = \sum_{i=1}^{\infty} \alpha(1-\alpha)^i x_{t-i} + w_t,\] which is Equation (3.18) in Chapter 3, Lesson 2.
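A small sketch (the value \(\alpha = 0.3\) is arbitrary) makes the structure of these coefficients concrete: each weight \(\alpha(1-\alpha)^i\) is a fixed fraction \(1-\alpha\) of the previous one, so older observations contribute geometrically less, which is the exponential-smoothing idea.

```python
import numpy as np

a = 0.3                        # an arbitrary smoothing parameter in (0, 1)
i = np.arange(1, 11)
weights = a * (1 - a) ** i     # alpha_i = alpha * (1 - alpha)^i

print(np.round(weights, 4))            # geometrically decaying weights
print(weights[1:] / weights[:-1])      # constant ratio 1 - alpha = 0.7
```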
- Show that the \(AR(p)\) model is a regression of \(x_t\) on previous terms in the series. (This is why it is called an “autoregressive model.”) Hint: write the \(AR(p)\) model in more familiar terms, letting \[y_i = x_t, ~~ x_1 = x_{t-1}, ~~ x_2 = x_{t-2}, ~~ \ldots, ~~ x_p = x_{t-p}, ~~ \epsilon_i = w_t, ~~ \text{and} ~~ \beta_j = \alpha_j\]
Solution: \[
\begin{align*}
x_t &= \alpha_1 x_{t-1} + \alpha_2 x_{t-2} + \alpha_3 x_{t-3} + \cdots + \alpha_{p-1} x_{t-(p-1)} + \alpha_p x_{t-p} + w_t \\
y_i &= \beta_1 x_{1i} + \beta_2 x_{2i} + \beta_3 x_{3i} + \cdots + \beta_{p-1} x_{p-1,i} + \beta_p x_{p,i} + \epsilon_i
\end{align*}
\]
This is a multiple linear regression equation with zero intercept.
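To illustrate, here is a minimal Python sketch (numpy only; the AR(2) coefficients 0.6 and −0.3 are made up) that stacks the lagged values into regressor columns and estimates the \(\alpha\)'s by ordinary least squares with no intercept:

```python
import numpy as np

rng = np.random.default_rng(7)
alpha_true = [0.6, -0.3]       # made-up AR(2) coefficients
n = 2000
w = rng.normal(size=n)
x = np.zeros(n)
for t in range(2, n):
    x[t] = alpha_true[0] * x[t - 1] + alpha_true[1] * x[t - 2] + w[t]

p = 2
y = x[p:]                      # "response": x_t
# Design matrix whose columns are the lagged series x_{t-1}, ..., x_{t-p}
X = np.column_stack([x[p - k : n - k] for k in range(1, p + 1)])

# Ordinary least squares with zero intercept, as in the regression form above
alpha_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(alpha_hat)               # close to [0.6, -0.3]
```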
- Explain why the prediction at time \(t\) is given by \[
\hat x_t = \hat \alpha_1 x_{t-1} + \hat \alpha_2 x_{t-2} + \cdots + \hat \alpha_{p-1} x_{t-(p-1)} + \hat \alpha_p x_{t-p}
\]
Solution:
The predicted value for observation \(i\) in a multiple regression setting would be: \[
\hat y_i = \hat \beta_1 x_{1i} + \hat \beta_2 x_{2i} + \hat \beta_3 x_{3i} + \cdots + \hat \beta_{p-1} x_{p-1,i} + \hat \beta_p x_{p,i}
\] Translated to the \(AR(p)\) setting, this becomes: \[
\hat x_t = \hat \alpha_1 x_{t-1} + \hat \alpha_2 x_{t-2} + \hat \alpha_3 x_{t-3} + \cdots + \hat \alpha_{p-1} x_{t-(p-1)} + \hat \alpha_p x_{t-p}
\]
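As a concrete (hypothetical) example of plugging estimates into this formula, the one-step prediction is just a dot product of the estimated coefficients with the most recent \(p\) observations:

```python
import numpy as np

# Suppose an AR(2) fit returned these estimates (hypothetical values)
alpha_hat = np.array([0.58, -0.27])

# Most recent observations, ordered x_{t-1}, x_{t-2}
recent = np.array([1.4, -0.9])

# x_hat_t = alpha_hat_1 * x_{t-1} + alpha_hat_2 * x_{t-2}
x_hat = alpha_hat @ recent
print(x_hat)                   # 1.055
```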
- Explain why the model parameters (the \(\alpha\)’s) can be estimated by minimizing the sum of the squared error terms: \[\sum_{t=1}^n \left( \hat w_t \right)^2 = \sum_{t=1}^n \left( x_t - \hat x_t \right)^2\]
Solution:
This is exactly how the coefficients of a multiple linear regression are estimated: by minimizing the sum of the squared error terms (the method of least squares).
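To make the criterion concrete, here is a small sketch (simulated AR(1) with an arbitrary coefficient of 0.7) that evaluates the sum of squared errors over a grid of candidate values and confirms that the grid minimizer agrees with the closed-form least-squares estimate:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
w = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + w[t]        # simulate an AR(1) with alpha_1 = 0.7

def sse(a):
    """Sum of squared one-step errors for a candidate coefficient a."""
    resid = x[1:] - a * x[:-1]
    return np.sum(resid ** 2)

# Evaluate the criterion over a grid and keep the minimizer
grid = np.linspace(-0.99, 0.99, 1999)
alpha_grid = grid[np.argmin([sse(a) for a in grid])]

# Closed-form minimizer of the same sum of squares
alpha_ls = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)

print(alpha_grid, alpha_ls)             # both close to 0.7
```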
- What is the reason this is called an autoregressive model?
Solution:
This is called an autoregressive model because we regress the current term on previous terms of the same series.