Transformations and Non-Linear Models

Chapter 5: Lesson 4

Learning Outcomes

Apply logarithmic transformations to time series
  • Explain when to use a log-transformation
  • Estimate a harmonic seasonal model using GLS with a log-transformed series
  • Explain how to use logarithms to linearize certain non-linear trends
Apply non-linear models to time series
  • Explain when to use non-linear models
  • Simulate a time series with an exponential trend
  • Fit a time series model with an exponential trend

Preparation

  • Read Sections 5.7-5.8

Learning Journal Exchange (10 min)

  • Review another student’s journal

  • What would you add to your learning journal after reading another student’s?

  • What would you recommend the other student add to their learning journal?

  • Sign the Learning Journal review sheet for your peer

Class Activity: Simulate an Exponential Trend with a Seasonal Component (15 min)

We will simulate a time series that has a seasonal component and an exponential trend.

Figure 1 shows the simulated time series and the time series after the natural logarithm is applied.

Show the code
# Load the packages used in this lesson
pacman::p_load(tidyverse, lubridate, tsibble, fable, feasts, patchwork)

set.seed(12345)

n_years <- 9             # Number of years to simulate
n_months <- n_years * 12 # Number of months

# Sequence of monthly dates ending with December of last year
dates_seq <- seq(floor_date(now(), unit = "year"), length.out = n_months + 1, by = "-1 month") |>
  floor_date(unit = "month") |> sort() |> head(n_months)

sim_ts <- tibble(
  t = 1:n_months,
  dates = dates_seq,
  random = arima.sim(model = list(ar = c(0.5, 0.2)), n = n_months, sd = 0.02), # AR(2) noise; white noise sd = 0.02
  x_t = exp(2 + 0.015 * t +
      0.03 * sin(2 * pi * 1 * t / 12) + 0.04 * cos(2 * pi * 1 * t / 12) + 
      0.05 * sin(2 * pi * 2 * t / 12) + 0.03 * cos(2 * pi * 2 * t / 12) +
      0.01 * sin(2 * pi * 3 * t / 12) + 0.005 * cos(2 * pi * 3 * t / 12) +
      random
    )
  ) |>
  mutate(
    cos1 = cos(2 * pi * 1 * t / 12),
    cos2 = cos(2 * pi * 2 * t / 12),
    cos3 = cos(2 * pi * 3 * t / 12),
    cos4 = cos(2 * pi * 4 * t / 12),
    cos5 = cos(2 * pi * 5 * t / 12),
    cos6 = cos(2 * pi * 6 * t / 12),
    sin1 = sin(2 * pi * 1 * t / 12),
    sin2 = sin(2 * pi * 2 * t / 12),
    sin3 = sin(2 * pi * 3 * t / 12),
    sin4 = sin(2 * pi * 4 * t / 12),
    sin5 = sin(2 * pi * 5 * t / 12),
    sin6 = sin(2 * pi * 6 * t / 12)) |>
  mutate(std_t = (t - mean(t)) / sd(t)) |>
  as_tsibble(index = dates)

plot_raw <- sim_ts |>
    autoplot(.vars = x_t) +
    labs(
      x = "Month",
      y = "x_t",
      title = "Simulated Time Series"
    ) +
    theme_minimal() +
    theme(
      plot.title = element_text(hjust = 0.5)
    )

plot_log <- sim_ts |>
    autoplot(.vars = log(x_t)) +
    labs(
      x = "Month",
      y = "log(x_t)",
      title = "Logarithm of Simulated Time Series"
    ) +
    theme_minimal() +
    theme(
      plot.title = element_text(hjust = 0.5)
    )

plot_raw | plot_log
Figure 1: Time plot of the time series (left) and the natural logarithm of the time series (right)

We will compute the (natural) logarithm of the time series values before fitting any linear models. So, our response variable will be \(\log(x_t)\), rather than \(x_t\).
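
Because we model \(\log(x_t)\), it is worth noting how fable handles the transformation. The following minimal sketch (using the sim_ts object defined above) shows that when the response is written as log(x_t), TSLM fits the model on the log scale, and forecast() back-transforms the predictions to the original scale.

Show the code
# Minimal sketch: TSLM fits on the log scale; forecast() back-transforms
fit_log <- sim_ts |> model(TSLM(log(x_t) ~ std_t))
fit_log |> forecast(sim_ts) |> head() # .mean is on the original x_t scale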

Even though there is no visual evidence of curvature in the trend of the log-transformed series, we will start by fitting a model that allows for a cubic trend. (In practice, we would probably not fit this model, but doing so illustrates some behavior that helps us understand the underlying process.)

Cubic Model

After taking the (natural) logarithm of \(x_t\), we fit a cubic model to the log-transformed time series.

Full Cubic Model

\[\begin{align*} \log(x_t) &= \beta_0 + \beta_1 \left( \frac{t - \mu_t}{\sigma_t} \right) + \beta_2 \left( \frac{t - \mu_t}{\sigma_t} \right)^2 + \beta_3 \left( \frac{t - \mu_t}{\sigma_t} \right)^3 \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_4 \sin \left( \frac{2\pi \cdot 1 t}{12} \right) + \beta_5 \cos \left( \frac{2\pi \cdot 1 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_6 \sin \left( \frac{2\pi \cdot 2 t}{12} \right) + \beta_7 \cos \left( \frac{2\pi \cdot 2 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_8 \sin \left( \frac{2\pi \cdot 3 t}{12} \right) + \beta_9 \cos \left( \frac{2\pi \cdot 3 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_{10} \sin \left( \frac{2\pi \cdot 4 t}{12} \right) + \beta_{11} \cos \left( \frac{2\pi \cdot 4 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_{12} \sin \left( \frac{2\pi \cdot 5 t}{12} \right) + \beta_{13} \cos \left( \frac{2\pi \cdot 5 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \phantom{+ \beta_{15} \sin \left( \frac{2\pi \cdot 6 t}{12} \right)} + \beta_{14} \cos \left( \frac{2\pi \cdot 6 t}{12} \right) + z_t \end{align*}\]

Show the code
# Cubic model with standardized time variable

full_cubic_lm <- sim_ts |>
  model(full_cubic = TSLM(log(x_t) ~ std_t + I(std_t^2) + I(std_t^3) +
    sin1 + cos1 + sin2 + cos2 + sin3 + cos3 
    + sin4 + cos4 + sin5 + cos5 + cos6 )) # sin6 omitted: sin(2*pi*6*t/12) = sin(pi*t) = 0 for all integer t
full_cubic_lm |>
  tidy() |>
  mutate(sig = p.value < 0.05)
# A tibble: 15 × 7
   .model     term         estimate std.error statistic   p.value sig  
   <chr>      <chr>           <dbl>     <dbl>     <dbl>     <dbl> <lgl>
 1 full_cubic (Intercept)  2.82       0.00327   865.    2.08e-183 TRUE 
 2 full_cubic std_t        0.473      0.00550    86.0   1.90e- 90 TRUE 
 3 full_cubic I(std_t^2)  -0.00488    0.00246    -1.98  5.02e-  2 FALSE
 4 full_cubic I(std_t^3)   0.000860   0.00285     0.302 7.64e-  1 FALSE
 5 full_cubic sin1         0.0328     0.00312    10.5   1.68e- 17 TRUE 
 6 full_cubic cos1         0.0389     0.00308    12.6   7.52e- 22 TRUE 
 7 full_cubic sin2         0.0496     0.00309    16.1   1.33e- 28 TRUE 
 8 full_cubic cos2         0.0297     0.00308     9.65  1.10e- 15 TRUE 
 9 full_cubic sin3         0.0111     0.00308     3.60  5.15e-  4 TRUE 
10 full_cubic cos3         0.00765    0.00308     2.48  1.49e-  2 TRUE 
11 full_cubic sin4         0.00155    0.00308     0.504 6.15e-  1 FALSE
12 full_cubic cos4         0.00185    0.00308     0.600 5.50e-  1 FALSE
13 full_cubic sin5         0.00169    0.00308     0.550 5.84e-  1 FALSE
14 full_cubic cos5         0.00186    0.00308     0.602 5.48e-  1 FALSE
15 full_cubic cos6         0.00184    0.00218     0.845 4.00e-  1 FALSE

Note that neither the quadratic term nor the cubic term is statistically significant in this model. We will eliminate the cubic term and fit a model with a quadratic trend.

Quadratic Model

We now fit a quadratic model to the log-transformed time series.

Full Quadratic Model

The full model with a quadratic trend is written as:

\[\begin{align*} \log(x_t) &= \beta_0 + \beta_1 \left( \frac{t - \mu_t}{\sigma_t} \right) + \beta_2 \left( \frac{t - \mu_t}{\sigma_t} \right)^2 \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_3 \sin \left( \frac{2\pi \cdot 1 t}{12} \right) + \beta_4 \cos \left( \frac{2\pi \cdot 1 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_5 \sin \left( \frac{2\pi \cdot 2 t}{12} \right) + \beta_6 \cos \left( \frac{2\pi \cdot 2 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_7 \sin \left( \frac{2\pi \cdot 3 t}{12} \right) + \beta_8 \cos \left( \frac{2\pi \cdot 3 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_9 \sin \left( \frac{2\pi \cdot 4 t}{12} \right) + \beta_{10} \cos \left( \frac{2\pi \cdot 4 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_{11} \sin \left( \frac{2\pi \cdot 5 t}{12} \right) + \beta_{12} \cos \left( \frac{2\pi \cdot 5 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \phantom{+ \beta_{14} \sin \left( \frac{2\pi \cdot 6 t}{12} \right)} + \beta_{13} \cos \left( \frac{2\pi \cdot 6 t}{12} \right) + z_t \end{align*}\]

Show the code
full_quad_lm <- sim_ts |>
  model(full_quad = TSLM(log(x_t) ~ std_t + I(std_t^2) + 
    sin1 + cos1 + sin2 + cos2 + sin3 + cos3 
    + sin4 + cos4 + sin5 + cos5 + cos6 )) # Note sin6 is omitted
full_quad_lm |>
  tidy() |>
  mutate(sig = p.value < 0.05) 
# A tibble: 14 × 7
   .model    term        estimate std.error statistic   p.value sig  
   <chr>     <chr>          <dbl>     <dbl>     <dbl>     <dbl> <lgl>
 1 full_quad (Intercept)  2.82      0.00325   869.    2.41e-185 TRUE 
 2 full_quad std_t        0.475     0.00219   217.    1.02e-128 TRUE 
 3 full_quad I(std_t^2)  -0.00488   0.00244    -1.99  4.90e-  2 TRUE 
 4 full_quad sin1         0.0326    0.00307    10.6   9.09e- 18 TRUE 
 5 full_quad cos1         0.0389    0.00306    12.7   4.29e- 22 TRUE 
 6 full_quad sin2         0.0496    0.00307    16.2   6.71e- 29 TRUE 
 7 full_quad cos2         0.0298    0.00306     9.72  7.30e- 16 TRUE 
 8 full_quad sin3         0.0111    0.00306     3.61  4.97e-  4 TRUE 
 9 full_quad cos3         0.00768   0.00306     2.51  1.39e-  2 TRUE 
10 full_quad sin4         0.00153   0.00306     0.500 6.18e-  1 FALSE
11 full_quad cos4         0.00188   0.00306     0.614 5.41e-  1 FALSE
12 full_quad sin5         0.00168   0.00306     0.550 5.84e-  1 FALSE
13 full_quad cos5         0.00189   0.00306     0.616 5.39e-  1 FALSE
14 full_quad cos6         0.00186   0.00217     0.857 3.93e-  1 FALSE

Note that the quadratic term is now statistically significant (just barely, with p = 0.049), even though it was not significant when the cubic term was included in the model.

Reduced Quadratic Trend: Model 1

This model omits all the Fourier (sine and cosine) terms that are not significant in the previous model.

Show the code
reduced_quadratic_lm1 <- sim_ts |>
  model(reduced_quadratic1 = TSLM(log(x_t) ~ std_t + I(std_t^2) + 
    sin1 + cos1 + sin2 + cos2 + sin3 + cos3))
reduced_quadratic_lm1 |>
  tidy() |>
  mutate(sig = p.value < 0.05)
# A tibble: 9 × 7
  .model             term        estimate std.error statistic   p.value sig  
  <chr>              <chr>          <dbl>     <dbl>     <dbl>     <dbl> <lgl>
1 reduced_quadratic1 (Intercept)  2.82      0.00320    882.   1.18e-194 TRUE 
2 reduced_quadratic1 std_t        0.475     0.00216    220.   4.66e-135 TRUE 
3 reduced_quadratic1 I(std_t^2)  -0.00487   0.00241     -2.02 4.57e-  2 TRUE 
4 reduced_quadratic1 sin1         0.0326    0.00303     10.8  2.18e- 18 TRUE 
5 reduced_quadratic1 cos1         0.0389    0.00302     12.9  6.79e- 23 TRUE 
6 reduced_quadratic1 sin2         0.0496    0.00302     16.4  5.12e- 30 TRUE 
7 reduced_quadratic1 cos2         0.0298    0.00302      9.87 2.15e- 16 TRUE 
8 reduced_quadratic1 sin3         0.0111    0.00302      3.66 4.02e-  4 TRUE 
9 reduced_quadratic1 cos3         0.00768   0.00302      2.54 1.25e-  2 TRUE 

All the terms are statistically significant in this model.

Reduced Quadratic Trend: Model 2

This model only includes the Fourier series terms where \(i=1\) or \(i=2\).

Show the code
reduced_quadratic_lm2 <- sim_ts |>
  model(reduced_quadratic2 = TSLM(log(x_t) ~ std_t + I(std_t^2) + 
    sin1 + cos1 + sin2 + cos2))
reduced_quadratic_lm2 |>
  tidy() |>
  mutate(sig = p.value < 0.05) 
# A tibble: 7 × 7
  .model             term        estimate std.error statistic   p.value sig  
  <chr>              <chr>          <dbl>     <dbl>     <dbl>     <dbl> <lgl>
1 reduced_quadratic2 (Intercept)  2.82      0.00347    813.   1.54e-194 TRUE 
2 reduced_quadratic2 std_t        0.475     0.00234    203.   9.18e-134 TRUE 
3 reduced_quadratic2 I(std_t^2)  -0.00486   0.00261     -1.86 6.57e-  2 FALSE
4 reduced_quadratic2 sin1         0.0326    0.00329      9.93 1.25e- 16 TRUE 
5 reduced_quadratic2 cos1         0.0389    0.00327     11.9  6.98e- 21 TRUE 
6 reduced_quadratic2 sin2         0.0496    0.00328     15.1  1.03e- 27 TRUE 
7 reduced_quadratic2 cos2         0.0298    0.00327      9.09 8.89e- 15 TRUE 

As we would expect, all the terms remain statistically significant, since they were all significant in the previous model.

Reduced Quadratic Trend: Model 3

This model is reduced to include only the Fourier series terms for \(i=1\).

Show the code
reduced_quadratic_lm3 <- sim_ts |>
  model(reduced_quadratic3 = TSLM(log(x_t) ~ std_t + I(std_t^2) + 
    sin1 + cos1)) 
reduced_quadratic_lm3 |>
  tidy() |>
  mutate(sig = p.value < 0.05) 
# A tibble: 5 × 7
  .model             term        estimate std.error statistic   p.value sig  
  <chr>              <chr>          <dbl>     <dbl>     <dbl>     <dbl> <lgl>
1 reduced_quadratic3 (Intercept)  2.82      0.00695   406.    7.13e-167 TRUE 
2 reduced_quadratic3 std_t        0.474     0.00467   101.    5.16e-105 TRUE 
3 reduced_quadratic3 I(std_t^2)  -0.00475   0.00523    -0.908 3.66e-  1 FALSE
4 reduced_quadratic3 sin1         0.0325    0.00658     4.95  2.97e-  6 TRUE 
5 reduced_quadratic3 cos1         0.0389    0.00656     5.94  3.97e-  8 TRUE 

All the terms in this parsimonious model are statistically significant.

Linear Model

Even though the quadratic term was statistically significant, there is no visual indication of a quadratic trend in the log-transformed time series. Hence, we will now fit a linear model to the log-transformed series. This lets us compare the fit of models with a linear trend to the models with a quadratic trend.

Full Linear Model

First, we fit a full model with a linear trend. We can express this model as:

\[\begin{align*} \log(x_t) &= \beta_0 + \beta_1 \left( \frac{t - \mu_t}{\sigma_t} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_2 \sin \left( \frac{2\pi \cdot 1 t}{12} \right) + \beta_3 \cos \left( \frac{2\pi \cdot 1 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_4 \sin \left( \frac{2\pi \cdot 2 t}{12} \right) + \beta_5 \cos \left( \frac{2\pi \cdot 2 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_6 \sin \left( \frac{2\pi \cdot 3 t}{12} \right) + \beta_7 \cos \left( \frac{2\pi \cdot 3 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_8 \sin \left( \frac{2\pi \cdot 4 t}{12} \right) + \beta_9 \cos \left( \frac{2\pi \cdot 4 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_{10} \sin \left( \frac{2\pi \cdot 5 t}{12} \right) + \beta_{11} \cos \left( \frac{2\pi \cdot 5 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \phantom{+ \beta_{13} \sin \left( \frac{2\pi \cdot 6 t}{12} \right)} + \beta_{12} \cos \left( \frac{2\pi \cdot 6 t}{12} \right) + z_t \end{align*}\]

Show the code
full_linear_lm <- sim_ts |>
  model(full_linear = TSLM(log(x_t) ~ std_t + 
    sin1 + cos1 + sin2 + cos2 + sin3 + cos3 
    + sin4 + cos4 + sin5 + cos5 + cos6 )) # Note sin6 is omitted
full_linear_lm |>
  tidy() |>
  mutate(sig = p.value < 0.05)
# A tibble: 13 × 7
   .model      term        estimate std.error statistic   p.value sig  
   <chr>       <chr>          <dbl>     <dbl>     <dbl>     <dbl> <lgl>
 1 full_linear (Intercept)  2.82      0.00220  1281.    4.18e-203 TRUE 
 2 full_linear std_t        0.475     0.00222   214.    3.22e-129 TRUE 
 3 full_linear sin1         0.0326    0.00312    10.4   1.83e- 17 TRUE 
 4 full_linear cos1         0.0388    0.00311    12.5   9.93e- 22 TRUE 
 5 full_linear sin2         0.0496    0.00311    15.9   1.47e- 28 TRUE 
 6 full_linear cos2         0.0298    0.00311     9.56  1.41e- 15 TRUE 
 7 full_linear sin3         0.0110    0.00311     3.55  5.99e-  4 TRUE 
 8 full_linear cos3         0.00767   0.00311     2.47  1.54e-  2 TRUE 
 9 full_linear sin4         0.00153   0.00311     0.492 6.24e-  1 FALSE
10 full_linear cos4         0.00188   0.00311     0.604 5.47e-  1 FALSE
11 full_linear sin5         0.00168   0.00311     0.541 5.90e-  1 FALSE
12 full_linear cos5         0.00189   0.00311     0.607 5.45e-  1 FALSE
13 full_linear cos6         0.00186   0.00220     0.844 4.01e-  1 FALSE

Reduced Linear Trend: Model 1

This model excludes the terms that were not significant in the full model with a linear trend.

Show the code
reduced_linear_lm1 <- sim_ts |>
  model(reduced_linear1 = TSLM(log(x_t) ~ std_t + 
    sin1 + cos1 + sin2 + cos2 + sin3 + cos3))
reduced_linear_lm1 |>
  tidy() |>
  mutate(sig = p.value < 0.05)
# A tibble: 8 × 7
  .model          term        estimate std.error statistic   p.value sig  
  <chr>           <chr>          <dbl>     <dbl>     <dbl>     <dbl> <lgl>
1 reduced_linear1 (Intercept)  2.82      0.00217   1301.   2.91e-213 TRUE 
2 reduced_linear1 std_t        0.475     0.00219    217.   1.58e-135 TRUE 
3 reduced_linear1 sin1         0.0326    0.00307     10.6  4.53e- 18 TRUE 
4 reduced_linear1 cos1         0.0388    0.00306     12.7  1.63e- 22 TRUE 
5 reduced_linear1 sin2         0.0496    0.00307     16.2  1.18e- 29 TRUE 
6 reduced_linear1 cos2         0.0298    0.00306      9.71 4.25e- 16 TRUE 
7 reduced_linear1 sin3         0.0111    0.00306      3.61 4.86e-  4 TRUE 
8 reduced_linear1 cos3         0.00767   0.00306      2.50 1.39e-  2 TRUE 

All of the terms are significant in this model.

Reduced Linear Trend: Model 2

We reduce the model to see if a more parsimonious model will suffice. This model contains a linear trend and the Fourier series terms associated with \(i=1\) and \(i=2\).

Show the code
reduced_linear_lm2 <- sim_ts |>
  model(reduced_linear2 = TSLM(log(x_t) ~ std_t + 
    sin1 + cos1 + sin2 + cos2))
reduced_linear_lm2 |>
  tidy() |>
  mutate(sig = p.value < 0.05) 
# A tibble: 6 × 7
  .model          term        estimate std.error statistic   p.value sig  
  <chr>           <chr>          <dbl>     <dbl>     <dbl>     <dbl> <lgl>
1 reduced_linear2 (Intercept)   2.82     0.00234   1203.   1.37e-213 TRUE 
2 reduced_linear2 std_t         0.475    0.00237    201.   2.51e-134 TRUE 
3 reduced_linear2 sin1          0.0326   0.00332      9.81 2.14e- 16 TRUE 
4 reduced_linear2 cos1          0.0388   0.00331     11.7  1.35e- 20 TRUE 
5 reduced_linear2 sin2          0.0496   0.00332     14.9  1.88e- 27 TRUE 
6 reduced_linear2 cos2          0.0298   0.00331      8.98 1.46e- 14 TRUE 

All the terms in this model are statistically significant.

Reduced Linear Trend: Model 3

Finally, we fit a more reduced model that includes a linear trend and only the Fourier terms associated with \(i=1\).

Show the code
reduced_linear_lm3 <- sim_ts |>
  model(reduced_linear3 = TSLM(log(x_t) ~ std_t + 
    sin1 + cos1)) 
reduced_linear_lm3 |>
  tidy() |>
  mutate(sig = p.value < 0.05) 
# A tibble: 4 × 7
  .model          term        estimate std.error statistic   p.value sig  
  <chr>           <chr>          <dbl>     <dbl>     <dbl>     <dbl> <lgl>
1 reduced_linear3 (Intercept)   2.82     0.00463    609.   1.56e-186 TRUE 
2 reduced_linear3 std_t         0.474    0.00467    101.   7.71e-106 TRUE 
3 reduced_linear3 sin1          0.0325   0.00657      4.95 2.92e-  6 TRUE 
4 reduced_linear3 cos1          0.0389   0.00655      5.93 3.98e-  8 TRUE 

Each of these terms is significant.

Comparison of Fitted Models

AIC, AICc, and BIC

We will now compare the models fitted above. Table 1 gives the AIC, AICc, and BIC for each model.

Show the code
model_combined <- sim_ts |>
  model(
    full_cubic = TSLM(log(x_t) ~ std_t + I(std_t^2) + I(std_t^3) +
                    sin1 + cos1 + sin2 + cos2 + sin3 + cos3 
                  + sin4 + cos4 + sin5 + cos5 + cos6 ),
    full_quad = TSLM(log(x_t) ~ std_t + I(std_t^2) + 
                       sin1 + cos1 + sin2 + cos2 + sin3 + cos3 
                     + sin4 + cos4 + sin5 + cos5 + cos6 ),
    reduced_quadratic1 = TSLM(log(x_t) ~ std_t + I(std_t^2) + sin1 + cos1 + sin2 + cos2 + sin3 + cos3),
    reduced_quadratic2 = TSLM(log(x_t) ~ std_t + I(std_t^2) + sin1 + cos1 + sin2 + cos2),
    reduced_quadratic3 = TSLM(log(x_t) ~ std_t + I(std_t^2) + sin1 + cos1),
    full_linear = TSLM(log(x_t) ~ std_t + 
                         sin1 + cos1 + sin2 + cos2 + sin3 + cos3 
                       + sin4 + cos4 + sin5 + cos5 + cos6 ),
    reduced_linear1 = TSLM(log(x_t) ~ std_t + sin1 + cos1 + sin2 + cos2 + sin3 + cos3),
    reduced_linear2 = TSLM(log(x_t) ~ std_t + sin1 + cos1 + sin2 + cos2),
    reduced_linear3 = TSLM(log(x_t) ~ std_t + sin1 + cos1) 
  )

glance(model_combined) |>
  select(.model, AIC, AICc, BIC)
Table 1: Comparison of the AIC, AICc, and BIC values for the models fitted to the logarithm of the simulated time series.

Model                   AIC     AICc    BIC
full_cubic           -802.6   -796.6  -759.6
full_quad            -804.5   -799.2  -764.2
reduced_quadratic1   -812.1   -809.9  -785.3
reduced_quadratic2   -796.3   -794.9  -774.9
reduced_quadratic3   -648.3   -647.4  -632.2
full_linear          -802.0   -797.5  -764.4
reduced_linear1      -809.8   -807.9  -785.6
reduced_linear2      -794.7   -793.6  -775.9
reduced_linear3      -649.4   -648.8  -636.0

The model with the lowest AIC and AICc values is the reduced quadratic trend model 1. This can be written as:

\[\begin{align*} \log(x_t) &= \beta_0 + \beta_1 \left( \frac{t - \mu_t}{\sigma_t} \right) + \beta_2 \left( \frac{t - \mu_t}{\sigma_t} \right)^2 \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_3 \sin \left( \frac{2\pi \cdot 1 t}{12} \right) + \beta_4 \cos \left( \frac{2\pi \cdot 1 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_5 \sin \left( \frac{2\pi \cdot 2 t}{12} \right) + \beta_6 \cos \left( \frac{2\pi \cdot 2 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_7 \sin \left( \frac{2\pi \cdot 3 t}{12} \right) + \beta_8 \cos \left( \frac{2\pi \cdot 3 t}{12} \right) + z_t \end{align*}\]

The model with the lowest BIC is the reduced linear trend model 1. We express this model as:

\[\begin{align*} \log(x_t) &= \beta_0 + \beta_1 \left( \frac{t - \mu_t}{\sigma_t} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_2 \sin \left( \frac{2\pi \cdot 1 t}{12} \right) + \beta_3 \cos \left( \frac{2\pi \cdot 1 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_4 \sin \left( \frac{2\pi \cdot 2 t}{12} \right) + \beta_5 \cos \left( \frac{2\pi \cdot 2 t}{12} \right) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \beta_6 \sin \left( \frac{2\pi \cdot 3 t}{12} \right) + \beta_7 \cos \left( \frac{2\pi \cdot 3 t}{12} \right) + z_t \end{align*}\]

The AIC, AICc, and BIC values of these two models are nearly identical, so either model is defensible based on these criteria alone.
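
If you prefer to extract the minima programmatically rather than scanning the table, here is one way (a sketch using the model_combined mable from above; pivot_longer() and slice_min() come from the tidyverse):

Show the code
# Sketch: find the best model under each information criterion
glance(model_combined) |>
  select(.model, AIC, AICc, BIC) |>
  pivot_longer(-.model, names_to = "criterion") |>
  group_by(criterion) |>
  slice_min(value, n = 1)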

We will take a deeper look at the residuals of these two models to assess whether there is evidence of autocorrelation in the random terms, computing the autocorrelation function and the partial autocorrelation function of the residuals for both models. If there is evidence of autocorrelation, we will refit the models using the GLS algorithm, since it accounts for the autocorrelation in the random component.

Investigating Autocorrelation of the Random Component

Recall that if there is autocorrelation in the random component, the standard errors of the parameter estimates tend to be underestimated. We can account for this autocorrelation using Generalized Least Squares (GLS), if needed.

Reduced Quadratic Trend: Model 1

Figure 2 illustrates the ACF of the residuals of reduced model 1 with a quadratic trend.

Show the code
resid_q1_ts <- reduced_quadratic_lm1 |>
  residuals() 

acf(resid_q1_ts$.resid, plot=TRUE, lag.max = 25)
Figure 2: ACF of reduced model 1 with a quadratic trend

Notice that the residual correlogram indicates positive autocorrelation in the residuals. This suggests that the standard errors of the regression coefficients will be underestimated, which means that some predictors can appear to be statistically significant when they are not.
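
As a side note, the same correlogram can be produced with the tidyverts packages instead of base R's acf(); a sketch, assuming feasts is loaded:

Show the code
# Sketch: tidyverts alternative to base acf() for model residuals
reduced_quadratic_lm1 |>
  residuals() |>
  ACF(.resid, lag_max = 25) |>
  autoplot() # PACF() works the same way for partial autocorrelations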

Figure 3 illustrates the PACF of the residuals of reduced model 1 with a quadratic trend.

Show the code
pacf(resid_q1_ts$.resid, plot=TRUE, lag.max = 25)
Figure 3: PACF of reduced model 1 with a quadratic trend

Only the first partial autocorrelation is statistically significant. The partial autocorrelation plot indicates that an \(AR(1)\) model could adequately model the random component of the logarithm of the time series. Recall that in Chapter 5, Lesson 1, we fitted a linear regression model using the value of the partial autocorrelation function for \(k=1\). This helps account for the autocorrelation in the residuals.
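
Before reading the value off the PACF, a quick cross-check: fitting an AR(1) model directly to the residuals should produce a coefficient close to the lag-1 partial autocorrelation. (This is a sketch using the resid_q1_ts object created above.)

Show the code
# Sketch: corroborate the PACF reading with a direct AR(1) fit
arima(resid_q1_ts$.resid, order = c(1, 0, 0))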

The first few partial autocorrelation values are:

Show the code
pacf(resid_q1_ts$.resid, plot=FALSE, lag.max = 10)

Partial autocorrelations of series 'resid_q1_ts$.resid', by lag

     1      2      3      4      5      6      7      8      9     10 
 0.479  0.091 -0.004 -0.101 -0.018 -0.129 -0.051  0.016  0.037  0.069 

The partial autocorrelation when \(k=1\) is approximately 0.479. We will use this value as we recompute the regression coefficients.

Show the code
# Load additional packages
pacman::p_load(tidymodels, multilevelmod,
  nlme, broom.mixed)

temp_spec <- linear_reg() |>
  set_engine("gls", correlation = nlme::corAR1(0.479))

temp_gls <- temp_spec |>
  fit(log(x_t) ~ std_t + I(std_t^2) + sin1 + cos1 + sin2 + cos2 + sin3 + cos3, data = sim_ts)

tidy(temp_gls) |>
  mutate(
    lower = estimate + qnorm(0.025) * std.error,
    upper = estimate + qnorm(0.975) * std.error
  ) 
# A tibble: 9 × 7
  term        estimate std.error statistic   p.value    lower   upper
  <chr>          <dbl>     <dbl>     <dbl>     <dbl>    <dbl>   <dbl>
1 (Intercept)  2.82      0.00590    479.   2.33e-168  2.81    2.84   
2 std_t        0.475     0.00389    122.   9.61e-110  0.467   0.482  
3 I(std_t^2)  -0.00497   0.00427     -1.16 2.48e-  1 -0.0133  0.00341
4 sin1         0.0325    0.00442      7.36 5.38e- 11  0.0239  0.0412 
5 cos1         0.0389    0.00436      8.92 2.45e- 14  0.0303  0.0474 
6 sin2         0.0495    0.00307     16.1  1.99e- 29  0.0435  0.0555 
7 cos2         0.0298    0.00306      9.75 3.77e- 16  0.0238  0.0358 
8 sin3         0.0110    0.00235      4.68 9.18e-  6  0.00639 0.0156 
9 cos3         0.00773   0.00235      3.29 1.40e-  3  0.00312 0.0123 

The quadratic term is not statistically significant in this model! After accounting for the autocorrelation in the random component, the quadratic component of the trend is no longer significant. This is a great example of an instance where ordinary least squares leads to erroneous conclusions.
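
To see the effect directly, we can place the ordinary least squares and GLS results for the quadratic term side by side (a sketch using the reduced_quadratic_lm1 and temp_gls objects fitted above):

Show the code
# Sketch: compare OLS and GLS estimates and standard errors for I(std_t^2);
# the GLS standard error is noticeably larger
bind_rows(
  OLS = tidy(reduced_quadratic_lm1) |> select(term, estimate, std.error),
  GLS = tidy(temp_gls) |> select(term, estimate, std.error),
  .id = "method"
) |>
  filter(term == "I(std_t^2)")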

We now consider the reduced model 1 where the trend is linear.

Reduced Linear Trend: Model 1

Figure 4 illustrates the ACF of the residuals of reduced model 1 with a linear trend.

Show the code
resid_lin1_ts <- reduced_linear_lm1 |>
  residuals() 

acf(resid_lin1_ts$.resid, plot=TRUE, lag.max = 25)
Figure 4: ACF of reduced model 1 with a linear trend

We observe evidence of autocorrelation in the random terms.

Figure 5 illustrates the PACF of the residuals of reduced model 1 with a linear trend.

Show the code
pacf(resid_lin1_ts$.resid, plot = TRUE, lag.max = 25)
Figure 5: PACF of reduced model 1 with a linear trend

We will use the PACF when \(k=1\) to apply the GLS algorithm. The first few partial autocorrelation values are:

Show the code
pacf(resid_lin1_ts$.resid, plot=FALSE, lag.max = 10)

Partial autocorrelations of series 'resid_lin1_ts$.resid', by lag

     1      2      3      4      5      6      7      8      9     10 
 0.497  0.122  0.017 -0.088  0.006 -0.100 -0.018  0.042  0.060  0.087 

The partial autocorrelation when \(k=1\) is approximately 0.497. We will use this value as we recompute the regression coefficients.

Show the code
# Load additional packages
pacman::p_load(tidymodels, multilevelmod,
  nlme, broom.mixed)

temp_spec <- linear_reg() |>
  set_engine("gls", correlation = nlme::corAR1(0.497))

temp_gls <- temp_spec |>
  fit(log(x_t) ~ std_t + sin1 + cos1 + sin2 + cos2 + sin3 + cos3, data = sim_ts)

tidy(temp_gls) |>
  mutate(
    lower = estimate + qnorm(0.025) * std.error,
    upper = estimate + qnorm(0.975) * std.error
  ) 
# A tibble: 8 × 7
  term        estimate std.error statistic   p.value   lower  upper
  <chr>          <dbl>     <dbl>     <dbl>     <dbl>   <dbl>  <dbl>
1 (Intercept)  2.82      0.00398    708.   8.15e-187 2.81    2.83  
2 std_t        0.475     0.00393    121.   3.80e-110 0.467   0.482 
3 sin1         0.0324    0.00444      7.30 6.96e- 11 0.0237  0.0412
4 cos1         0.0386    0.00438      8.82 3.85e- 14 0.0300  0.0472
5 sin2         0.0494    0.00308     16.1  1.83e- 29 0.0434  0.0555
6 cos2         0.0297    0.00306      9.71 4.31e- 16 0.0237  0.0357
7 sin3         0.0110    0.00235      4.66 9.69e-  6 0.00635 0.0156
8 cos3         0.00769   0.00235      3.27 1.47e-  3 0.00308 0.0123

Figure 6 illustrates the original time series (in black) and the fitted model (in blue). For reference, a dotted line showing the ordinary least squares line fitted to the untransformed series is also plotted; it helps highlight the exponential shape of the trend.

Show the code
forecast_df <- reduced_linear_lm1 |> 
  forecast(sim_ts) |> # computes the anti-log of the predicted values and returns them as .mean
  as_tibble() |> 
  dplyr::select(std_t, .mean) |> 
  rename(pred = .mean)

sim_ts |>
  left_join(forecast_df, by = "std_t") |>
  as_tsibble(index = dates) |>
  autoplot(.vars = x_t) +
  geom_smooth(method = "lm", formula = 'y ~ x', se = FALSE, color = "#E69F00", linewidth = 0.5, linetype = "dotted") +
  geom_line(aes(y = pred), color = "#56B4E9", alpha = 0.75) +
    labs(
      x = "Month",
      y = "Simulated Time Series",
      title = "Time Plot of Simulated Time Series with an Exponential Trend",
      subtitle = "Predicted values based on the full cubic model are given in blue"
    ) +
    theme_minimal() +
    theme(
      plot.title = element_text(hjust = 0.5),
      plot.subtitle = element_text(hjust = 0.5)
    )
Figure 6: Time plot of the time series and the fitted linear regression model
Comparison of the Fitted Coefficients to the Simulation Parameters

Note that the mean of the time index values \(t = 1, 2, \ldots, 108\) is \(\bar{t} = 54.5\), and their standard deviation is \(s_t = 31.321\).

The fitted model is shown below; the seasonal terms do not change from step to step, so they are abbreviated as \(\cdots\) in the intermediate lines. \[\begin{align*} \hat x_t &= e^{ 2.819 + 0.475 ~ \left( \frac{t - 54.5}{31.321} \right) + 0.032 ~ \sin \left( \frac{2 \pi \cdot 1 t }{ 12 } \right) + 0.039 ~ \cos \left( \frac{2 \pi \cdot 1 t }{ 12 } \right) + 0.049 ~ \sin \left( \frac{2 \pi \cdot 2 t }{ 12 } \right) + 0.03 ~ \cos \left( \frac{2 \pi \cdot 2 t }{ 12 } \right) + 0.011 ~ \sin \left( \frac{2 \pi \cdot 3 t }{ 12 } \right) + 0.008 ~ \cos \left( \frac{2 \pi \cdot 3 t }{ 12 } \right) } \\ &= e^{ 2.819 + 0.475 ~ \left( \frac{t}{31.321} \right) - 0.475 ~ \left( \frac{54.5}{31.321} \right) + \cdots } \\ &= e^{ 2.819 + 0.015 ~ t - 0.827 + \cdots } \\ &= e^{ 1.992 + 0.015 ~ t + 0.032 ~ \sin \left( \frac{2 \pi \cdot 1 t }{ 12 } \right) + 0.039 ~ \cos \left( \frac{2 \pi \cdot 1 t }{ 12 } \right) + 0.049 ~ \sin \left( \frac{2 \pi \cdot 2 t }{ 12 } \right) + 0.03 ~ \cos \left( \frac{2 \pi \cdot 2 t }{ 12 } \right) + 0.011 ~ \sin \left( \frac{2 \pi \cdot 3 t }{ 12 } \right) + 0.008 ~ \cos \left( \frac{2 \pi \cdot 3 t }{ 12 } \right) } \end{align*}\]

Notice how well this matches the original model used to simulate the data.

\[\begin{align*} x_t &= e^{ 2 + 0.015 t + 0.03 ~ \sin \left( \frac{2 \pi \cdot 1 t }{ 12 } \right) + 0.04 ~ \cos \left( \frac{2 \pi \cdot 1 t }{ 12 } \right) + 0.05 ~ \sin \left( \frac{2 \pi \cdot 2 t }{ 12 } \right) + 0.03 ~ \cos \left( \frac{2 \pi \cdot 2 t }{ 12 } \right) + 0.01 ~ \sin \left( \frac{2 \pi \cdot 3 t }{ 12 } \right) + 0.005 ~ \cos \left( \frac{2 \pi \cdot 3 t }{ 12 } \right) + z_t } \end{align*}\] where \(z_t = 0.5 z_{t-1} + 0.2 z_{t-2} + w_t\) and \(w_t\) is a white noise process with standard deviation of 0.02.
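
The algebra above can be verified in R. This sketch uses the temp_gls object from the linear-trend GLS fit and the sim_ts data; it converts the coefficients on the standardized time variable back to the original time scale.

Show the code
# Sketch: recover the trend coefficients on the original time scale
est <- tidy(temp_gls)
b0 <- est$estimate[est$term == "(Intercept)"]
b1 <- est$estimate[est$term == "std_t"]
c(intercept = b0 - b1 * mean(sim_ts$t) / sd(sim_ts$t), # close to 2
  slope     = b1 / sd(sim_ts$t))                       # close to 0.015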

Small-Group Activity: Retail Sales (20 min)

Figure 7 gives the total sales (in millions of U.S. dollars) for the category “all other general merchandise stores (45299).”

Show the code
# Read in retail sales data for "all other general merchandise stores"
retail_ts <- rio::import("https://byuistats.github.io/timeseries/data/retail_by_business_type.parquet") |>
  filter(naics == 45299) |>
  filter(as_date(month) >= my("Jan 1998")) |>
  as_tsibble(index = month)

retail_ts |>
  autoplot(.vars = sales_millions) +
    labs(
      x = "Month",
      y = "Sales (Millions of U.S. Dollars)",
      title = paste0(retail_ts$business[1], " (", retail_ts$naics[1], ")")
    ) +
    theme_minimal() +
    theme(plot.title = element_text(hjust = 0.5))
Figure 7: Time plot of the total monthly retail sales for all other general merchandise stores (45299)
Check Your Understanding

Use Figure 7 to discuss the following questions with a partner.

  • What is the shape of the trend of this time series?
  • Which decomposition would be more appropriate: additive or multiplicative? Justify your answer.
  • Apply the appropriate transformation to the time series.
  • Fit appropriate models utilizing the Fourier terms for seasonal components. (A starter sketch is given after this list.)
  • Determine the “best” model for these data. Justify your decision.
  • Plot the fitted values and the time series on the same figure.
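
One possible starting point for the model-fitting steps above is sketched below. It uses the fable specials trend() and fourier() rather than hand-built sine and cosine columns; the choice of K = 6 Fourier pairs is an assumption that you should prune based on significance, as we did with the simulated series.

Show the code
# Starter sketch: convert the index to yearmonth (so the monthly seasonal
# period is detected), then fit a log-response model with Fourier terms
retail_month <- retail_ts |>
  as_tibble() |>
  mutate(month = yearmonth(month)) |>
  as_tsibble(index = month)

retail_fit <- retail_month |>
  model(log_fourier = TSLM(log(sales_millions) ~ trend() + fourier(K = 6)))

retail_fit |>
  tidy() |>
  mutate(sig = p.value < 0.05)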

Do the following to explore a cubic trend model with all the seasonal Fourier terms.

  • Fit a full model with a cubic trend and the logarithm of the time series for the response, including all the Fourier seasonal terms. (Be sure to include the linear and quadratic components of the trend.)
  • Fit a full model with a cubic trend and the logarithm of the time series for the response, but use indicator seasonal variables. (Be sure to include the linear and quadratic components of the trend.)
  • Compare the coefficients on the linear, quadratic, and cubic terms between the two models above. Why would we observe this result?
  • Why does the coefficient on the intercept term differ in the two models?

Class Activity: Simulated Non-Linear Series (10 min)

In Section 5.8 of the textbook, we are introduced to the possibility that a time series with an exponential trend can take negative values. The logarithm is not defined for negative values, so we need a different approach to fit the model.

We will fit the non-linear time series

\[ x_t = -c_0 + e^{\alpha_0 + \alpha_1 t} + z_t \]

where the time series can assume negative values.

We present the simulation from the textbook here:

Show the code
set.seed(1)
dat <- tibble(w = rnorm(100, sd = 10)) |>
    mutate(
        Time = 1:n(),
         z = purrr::accumulate2(
            lag(w), w, 
            \(acc, nxt, w) 0.7 * acc + w,
            .init = 0)[-1],
          x = exp(1 + 0.05 * Time) + z) |>
    tsibble::as_tsibble(index = Time)

autoplot(dat, .vars = x) +
  geom_hline(yintercept = 0) +
  labs(
    x = "Time",
    y = "Value",
    title = "Time Plot of Simulated Non-Linear Series"
  ) +
  theme_minimal() +
  theme(plot.title = element_text(hjust = 0.5))
Figure 8: Time plot of a simulated non-linear series

We will fit the non-linear model given above.

Show the code
equation <- x ~ exp(alp0 + alp1 * Time)

dat_nls <- nls(equation, data = dat,
  start = list(alp0 = 0.1, alp1 = 0.5))

summary(dat_nls)$parameters
       Estimate   Std. Error  t value     Pr(>|t|)
alp0 1.17299000 0.0743864844 15.76886 1.232132e-28
alp1 0.04832158 0.0008197862 58.94413 2.432828e-78

The model used to simulate the data was

\[ x_t = e^{1 + 0.05 t} + z_t \]

where

\[ z_t = 0.7 z_{t-1} + w_t \] and \(w_t\) is a white noise process with a standard deviation of 10.

Our fitted model is: \[ \hat x_t = e^{1.173 + 0.048 t} \]

Notice that our estimates are quite close to the parameters used in the simulation.
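
Mirroring the interval computation used for the GLS fits earlier, here is a sketch of approximate 95% Wald intervals for the nls estimates. Note that nls assumes uncorrelated errors, so these intervals may be too narrow for this autocorrelated series.

Show the code
# Sketch: approximate 95% Wald intervals for the nls parameter estimates
est <- summary(dat_nls)$parameters
tibble(
  term  = rownames(est),
  lower = est[, "Estimate"] + qnorm(0.025) * est[, "Std. Error"],
  upper = est[, "Estimate"] + qnorm(0.975) * est[, "Std. Error"]
)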

Check Your Understanding
  • Modify the time series and repeat the simulation.
  • Fit your model using the nls function in R.
  • Compare the parameter estimates with the actual values from your simulation.

Homework Preview (5 min)

  • Review upcoming homework assignment
  • Clarify questions
Download Homework
