Transformations, Forecasting and Bias Correction

Chapter 5: Lesson 2

Learning Outcomes

#{{< include outcomes/_chapter_5_lesson_4_outcomes.qmd >}}

Preparation

  • Read Sections 5.7, 5.9-5.11

Learning Journal Exchange (10 min)

  • Review another student’s journal

  • What would you add to your learning journal after reading another student’s?

  • What would you recommend the other student add to their learning journal?

  • Sign the Learning Journal review sheet for your peer

Class Activity: Simulate an Exponential Trend with a Seasonal Component (15 min)

Many processes grow over time at a roughly constant percentage rate. These growing variables may be modeled as exponential functions. Exponential functions can be difficult to model in a linear regression, so we can use a logarithmic transformation to make the function linear and easier to work with.
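
To see why the logarithm helps, note that an exponential trend with a seasonal component, like the one we simulate below, becomes additive and linear in the parameters after taking logs:

\[ x_t = e^{\beta_0 + \beta_1 t + s_t + z_t} \quad \Longrightarrow \quad \log(x_t) = \beta_0 + \beta_1 t + s_t + z_t \]

so it can be fitted with ordinary regression tools.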

We will simulate a time series that has a seasonal component and then impose an exponential trend.

Figures

Figure 1 shows the simulated time series and its natural logarithm.

Show the code
# Packages used in this lesson (assumed loaded throughout)
pacman::p_load(tidyverse, lubridate, tsibble, fable, feasts, patchwork, scales)

n_years  <- 9
n_months <- n_years * 12
sigma    <- 0.05   # (defined for reference; the AR noise below uses sd = 0.02)

# Dates
dates_seq <- seq(
  floor_date(now(), unit = "year"),
  length.out = n_months + 1,
  by = "-1 month"
) |>
  floor_date(unit = "month") |>
  sort() |>
  head(n_months)

t <- 1:n_months

# Month-of-year index: 1 = Jan, ..., 12 = Dec
month_index  <- ((t - 1) %% 12) + 1
month_factor <- factor(month_index, levels = 1:12, labels = month.abb)

# ----- KEY PART: indicator-style seasonal effects by month -----

# Compute the seasonal effect for each month-of-year
# using your original harmonic coefficients, but evaluated
# once per month instead of for every t.
seasonal_effect_by_month <- sapply(1:12, function(m) {
  0.03 * sin(2 * pi * 1 * m / 12) + 0.04 * cos(2 * pi * 1 * m / 12) +
  0.05 * sin(2 * pi * 2 * m / 12) + 0.03 * cos(2 * pi * 2 * m / 12) +
  0.01 * sin(2 * pi * 3 * m / 12) + 0.005 * cos(2 * pi * 3 * m / 12)
})

# Map each time point to its month-specific seasonal effect
seasonal_term <- seasonal_effect_by_month[month_index]

# AR(2) noise as before
random_ar <- arima.sim(model = list(ar = c(0.5, 0.2)),
                       n = n_months, sd = 0.02)

# Build tsibble
sim_ts <- tibble(
  t       = t,
  dates   = dates_seq,
  month   = month_factor,
  random  = random_ar,
  seasonal = seasonal_term,
  x_t     = exp(2 + 0.015 * t + seasonal + random)
) |>
  mutate(std_t = (t - mean(t)) / sd(t)) |>
  as_tsibble(index = dates)

# (Optional) create explicit 0/1 indicator columns for each month
sim_ts <- sim_ts |>
  bind_cols(
    model.matrix(~ month - 1, data = sim_ts) |>
      as_tibble()
  )

# Plots
plot_raw <- sim_ts |>
  autoplot(x_t) +
  labs(
    x = "Month",
    y = "x_t",
    title = "Simulated Time Series"
  ) +
  theme_minimal() +
  theme(plot.title = element_text(hjust = 0.5))

plot_log <- sim_ts |>
  autoplot(log(x_t)) +
  labs(
    x = "Month",
    y = "log(x_t)",
    title = "Logarithm of Simulated Time Series"
  ) +
  theme_minimal() +
  theme(plot.title = element_text(hjust = 0.5))

plot_raw | plot_log
Figure 1: Time plot of the time series (left) and the natural logarithm of the time series (right)

We will take the (natural) logarithm of the time series before fitting any linear models. So our response variable will be \(\log(x_t)\), rather than \(x_t\).

Even though there is no visual evidence of curvature in the trend of the logarithm of the time series, we will start by fitting a model that allows for a cubic trend. (In practice, we would probably not fit this model. However, we want to understand how to choose among different model specifications.)

Model Selection Information Criteria: Balancing Fit and Complexity

In a previous activity, we compared the Cubic, Quadratic, and Linear models. You might have noticed that adding more variables (like \(t^2\) and \(t^3\)) almost always increases the \(R^2\) value and reduces the residual sum of squares. However, a model that hugs the training data too tightly (overfitting) often fails when forecasting new data. This is the question that Information Criteria answer: to select the best forecasting model, we need a way to penalize complexity. We use Information Criteria to find the “Goldilocks” model: one that fits the data well but remains simple enough to generalize.

AIC, AICc, and BIC

The motivation behind Information Criteria is parsimony, often associated with Occam’s Razor: given two models with similar predictive power, the simpler one is preferred. We want the model that explains the signal with the fewest possible parameters. All Information Criteria follow the same basic structure:

\[\text{IC} = \text{Badness of Fit} + \text{Penalty for Complexity}\] We want to minimize this value. The “Badness of Fit” is measured by \(-2\log(\mathcal{L})\), where \(\mathcal{L}\) is the likelihood (how probable the data are, given the model). The “Penalty” increases as we add parameters (\(k\)).

AIC (Akaike Information Criterion)

\[AIC = -2\log(\mathcal{L}) + 2k\]

The AIC is founded on Information Theory. It estimates the relative amount of information lost when a model is used to represent the process that generated the data (using Kullback-Leibler divergence). Intuitively, it tries to predict which model will have the lowest prediction error on future data. The AIC is excellent for forecasting. It tends to prefer slightly more complex models than the BIC.

AICc (Corrected AIC)

\[AICc = AIC + \frac{2k(k+1)}{n - k - 1}\]

When the sample size (\(n\)) is small, the standard AIC tends to select models that are too complex (overfitting). In practice, use AICc instead of AIC, especially when \(n\) is small relative to the number of parameters. As \(n \to \infty\), AICc converges to AIC. Since time series data sets are often short, AICc is a great default metric.

BIC (Bayesian Information Criterion)

\[BIC = -2\log(\mathcal{L}) + k\ln(n)\]

The BIC is founded on Bayesian inference. It attempts to estimate the probability that a specific model is the “true” model among the set of candidates. Notice the penalty is \(k\ln(n)\) rather than \(2k\). Since \(\ln(n) > 2\) for any dataset with \(n > 7\), BIC penalizes complexity much more strictly than AIC. Use BIC when you want your model choice to be conservative given uncertainty in modeling or forecasting. It is great for explanatory modeling (understanding relationships), but it sometimes under-fits for forecasting purposes.

Summary Rule of Thumb

Lower is better. The absolute number means nothing; only the differences between models matter. If AICc and BIC disagree, rely on AICc for forecasting and BIC for explanation.
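
To make these formulas concrete, here is a minimal sketch that computes all three criteria by hand for a toy lm fit (made-up data, not the simulation below) and checks the results against R’s built-in extractors. Different functions can use different additive constants for the log-likelihood, so only differences between models fitted to the same data are meaningful.

Show the code
# Toy example: information criteria by hand versus R's built-ins
set.seed(42)
toy <- data.frame(t = 1:60)
toy$y <- 1 + 0.05 * toy$t + rnorm(60, sd = 0.2)

fit <- lm(y ~ t, data = toy)

n  <- nobs(fit)
k  <- attr(logLik(fit), "df")   # parameter count, including the error variance
ll <- as.numeric(logLik(fit))

aic  <- -2 * ll + 2 * k
aicc <- aic + (2 * k * (k + 1)) / (n - k - 1)
bic  <- -2 * ll + k * log(n)

c(by_hand = aic, builtin = AIC(fit))   # these agree
c(by_hand = bic, builtin = BIC(fit))   # these agree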

In our specific simulated example below, the Cubic model happens to attain slightly lower AIC, AICc, and BIC values than the Quadratic and Linear models, even though the data were simulated with a purely linear trend on the log scale. The differences are small, and autocorrelation in the residuals (investigated later in this lesson) can distort both p-values and information criteria. This is why we weigh the information criteria alongside visual evidence and parsimony when choosing a model.

Cubic Model

After taking the (natural) logarithm of \(x_t\), we fit a cubic model to the log-transformed time series.

Full Cubic Model

\[\begin{align*} \log(x_t) &= \beta_0 + \beta_1 \left( \frac{t - \mu_t}{\sigma_t} \right) + \beta_2 \left( \frac{t - \mu_t}{\sigma_t} \right)^2 + \beta_3 \left( \frac{t - \mu_t}{\sigma_t} \right)^3 + s_t + z_t \end{align*}\]

\[\begin{align*} s_t= \begin{cases} \Delta\beta_2 , & t = 2, 14, 26, \ldots ~~~~ ~~(February) \\ \Delta\beta_3 , & t = 3, 15, 27, \ldots ~~~~ ~~(March) \\ ~~~~~~~~⋮ & ~~~~~~~~~~~~⋮ \\ \Delta\beta_{12} , & t = 12, 24, 36, \ldots ~~~~ (December) \end{cases} \end{align*}\]

Here \(s_t = 0\) when \(t = 1, 13, 25, \ldots\) (January), the baseline month.

Show the code
# Cubic model with standardized time variable

cubic_lm <- sim_ts |>
  model(cubic = TSLM(log(x_t) ~ std_t + I(std_t^2) + I(std_t^3) + month)) # January is the omitted (baseline) month
cubic_lm |>
  tidy() |>
  mutate(sig = p.value < 0.05)
# A tibble: 15 × 7
   .model term        estimate std.error statistic   p.value sig  
   <chr>  <chr>          <dbl>     <dbl>     <dbl>     <dbl> <lgl>
 1 cubic  (Intercept)  2.93      0.00745    393.   1.39e-151 TRUE 
 2 cubic  std_t        0.456     0.00514     88.6  1.21e- 91 TRUE 
 3 cubic  I(std_t^2)   0.00673   0.00230      2.93 4.23e-  3 TRUE 
 4 cubic  I(std_t^3)   0.00659   0.00267      2.47 1.52e-  2 TRUE 
 5 cubic  monthFeb    -0.0475    0.00997     -4.77 6.82e-  6 TRUE 
 6 cubic  monthMar    -0.128     0.00997    -12.8  3.18e- 22 TRUE 
 7 cubic  monthApr    -0.172     0.00997    -17.3  9.75e- 31 TRUE 
 8 cubic  monthMay    -0.151     0.00998    -15.2  6.76e- 27 TRUE 
 9 cubic  monthJun    -0.127     0.00999    -12.8  3.88e- 22 TRUE 
10 cubic  monthJul    -0.108     0.00999    -10.8  4.78e- 18 TRUE 
11 cubic  monthAug    -0.131     0.0100     -13.1  7.74e- 23 TRUE 
12 cubic  monthSep    -0.171     0.0100     -17.1  2.10e- 30 TRUE 
13 cubic  monthOct    -0.194     0.0100     -19.3  2.72e- 34 TRUE 
14 cubic  monthNov    -0.136     0.0100     -13.6  8.51e- 24 TRUE 
15 cubic  monthDec    -0.0450    0.0101      -4.48 2.16e-  5 TRUE 

Note that the quadratic and cubic terms both appear statistically significant in this model, even though we simulated a purely linear trend on the log scale. As we will see below, positive autocorrelation in the residuals causes the standard errors to be underestimated, which can make terms appear significant when they are not.

Quadratic Model

We now fit a quadratic model to the log-transformed time series.

Full Quadratic Model

The full model with a quadratic trend is written as:

\[\begin{align*} \log(x_t) &= \beta_0 + \beta_1 \left( \frac{t - \mu_t}{\sigma_t} \right) + \beta_2 \left( \frac{t - \mu_t}{\sigma_t} \right)^2 + s_t + z_t \end{align*}\]

\[\begin{align*} s_t= \begin{cases} \Delta\beta_2 , & t = 2, 14, 26, \ldots ~~~~ ~~(February) \\ \Delta\beta_3 , & t = 3, 15, 27, \ldots ~~~~ ~~(March) \\ ~~~~~~~~⋮ & ~~~~~~~~~~~~⋮ \\ \Delta\beta_{12} , & t = 12, 24, 36, \ldots ~~~~ (December) \end{cases} \end{align*}\]

Show the code
quad_lm <- sim_ts |>
  model(quad = TSLM(log(x_t) ~ std_t + I(std_t^2) + month)) # January is the omitted (baseline) month
quad_lm |>
  tidy() |>
  mutate(sig = p.value < 0.05) 
# A tibble: 14 × 7
   .model term        estimate std.error statistic   p.value sig  
   <chr>  <chr>          <dbl>     <dbl>     <dbl>     <dbl> <lgl>
 1 quad   (Intercept)  2.93      0.00763    384.   5.41e-152 TRUE 
 2 quad   std_t        0.468     0.00211    221.   1.47e-129 TRUE 
 3 quad   I(std_t^2)   0.00673   0.00236      2.86 5.28e-  3 TRUE 
 4 quad   monthFeb    -0.0473    0.0102      -4.62 1.22e-  5 TRUE 
 5 quad   monthMar    -0.127     0.0102     -12.4  1.59e- 21 TRUE 
 6 quad   monthApr    -0.171     0.0102     -16.7  6.09e- 30 TRUE 
 7 quad   monthMay    -0.150     0.0102     -14.7  4.44e- 26 TRUE 
 8 quad   monthJun    -0.126     0.0102     -12.3  2.57e- 21 TRUE 
 9 quad   monthJul    -0.106     0.0102     -10.4  3.09e- 17 TRUE 
10 quad   monthAug    -0.129     0.0102     -12.6  6.06e- 22 TRUE 
11 quad   monthSep    -0.169     0.0102     -16.5  1.79e- 29 TRUE 
12 quad   monthOct    -0.191     0.0103     -18.7  2.32e- 33 TRUE 
13 quad   monthNov    -0.134     0.0103     -13.0  8.09e- 23 TRUE 
14 quad   monthDec    -0.0422    0.0103      -4.12 8.29e-  5 TRUE 

Note that the quadratic term also appears statistically significant, even though no quadratic trend was simulated.

Linear Model

Even though the quadratic term was statistically significant, there is no visual indication of a quadratic trend in the time series after taking the logarithm. Hence, we will now fit a linear model to the log-transformed time series. We want to be able to compare the fit of the model with a linear trend to the models with higher-order trends.

Full Linear Model

First, we fit a full model with a linear trend. We can express this model as:

\[\begin{align*} \log(x_t) &= \beta_0 + \beta_1 \left( \frac{t - \mu_t}{\sigma_t} \right) + s_t + z_t \end{align*}\]

\[\begin{align*} s_t= \begin{cases} \Delta\beta_2 , & t = 2, 14, 26, \ldots ~~~~ ~~(February) \\ \Delta\beta_3 , & t = 3, 15, 27, \ldots ~~~~ ~~(March) \\ ~~~~~~~~⋮ & ~~~~~~~~~~~~⋮ \\ \Delta\beta_{12} , & t = 12, 24, 36, \ldots ~~~~ (December) \end{cases} \end{align*}\]

Show the code
linear_lm <- sim_ts |>
  model(linear = TSLM(log(x_t) ~ std_t + month)) # January is the omitted (baseline) month
linear_lm |>
  tidy() |>
  mutate(sig = p.value < 0.05)
# A tibble: 13 × 7
   .model term        estimate std.error statistic   p.value sig  
   <chr>  <chr>          <dbl>     <dbl>     <dbl>     <dbl> <lgl>
 1 linear (Intercept)   2.93     0.00751    390.   4.37e-154 TRUE 
 2 linear std_t         0.468    0.00219    214.   3.29e-129 TRUE 
 3 linear monthFeb     -0.0473   0.0106      -4.46 2.24e-  5 TRUE 
 4 linear monthMar     -0.127    0.0106     -12.0  1.05e- 20 TRUE 
 5 linear monthApr     -0.171    0.0106     -16.2  5.36e- 29 TRUE 
 6 linear monthMay     -0.151    0.0106     -14.2  3.43e- 25 TRUE 
 7 linear monthJun     -0.126    0.0106     -11.9  1.61e- 20 TRUE 
 8 linear monthJul     -0.106    0.0106     -10.0  1.54e- 16 TRUE 
 9 linear monthAug     -0.129    0.0106     -12.2  3.95e- 21 TRUE 
10 linear monthSep     -0.169    0.0106     -15.9  1.55e- 28 TRUE 
11 linear monthOct     -0.191    0.0106     -18.0  2.23e- 32 TRUE 
12 linear monthNov     -0.134    0.0106     -12.6  5.80e- 22 TRUE 
13 linear monthDec     -0.0422   0.0106      -3.97 1.40e-  4 TRUE 

Comparison of Fitted Models

AIC, AICc, and BIC

We will now compare the models we fitted above. Table 1 gives the AIC, AICc, and BIC of the models fitted above.

Show the code
model_combined <- sim_ts |>
  model(
    cubic = TSLM(log(x_t) ~ std_t + I(std_t^2) + I(std_t^3) + month),
    quad = TSLM(log(x_t) ~ std_t + I(std_t^2) + month),
    linear = TSLM(log(x_t) ~ std_t + month )
  )

glance(model_combined) |>
  select(.model, AIC, AICc, BIC)
Table 1: Comparison of the AIC, AICc, and BIC values for the models fitted to the logarithm of the simulated time series.
| Model  | AIC        | AICc       | BIC        |
|--------|------------|------------|------------|
| cubic  | **-817.1** | **-811.2** | **-774.2** |
| quad   | -812.3     | -807.0     | -772.0     |
| linear | -805.3     | -800.8     | -767.7     |

Investigating Autocorrelation of the Random Component

Recall that if there is autocorrelation in the random component, the standard errors of the parameter estimates tend to be underestimated. We can account for this autocorrelation using generalized least squares (GLS), if needed.

Linear Trend

Figure 2 illustrates the ACF of the model with a linear trend.

Show the code
linear_ts <- linear_lm |>
  residuals() 

acf(linear_ts$.resid, plot=TRUE, lag.max = 25)
Figure 2: ACF of model linear trend

Notice that the residual correlogram indicates a positive autocorrelation in the values. This suggests that the standard errors of the regression coefficients will be underestimated, which means that some predictors can appear to be statistically significant when they are not.
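
A formal portmanteau test can complement the correlogram. Here is a minimal sketch, assuming the feasts package is loaded; a small p-value is evidence of autocorrelation in the random component.

Show the code
# Ljung-Box test on the residuals of the model with a linear trend
linear_lm |>
  residuals() |>
  features(.resid, ljung_box, lag = 12)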

Figure 3 illustrates the PACF of the model with a linear trend.

Show the code
pacf(linear_ts$.resid, plot=TRUE, lag.max = 25)
Figure 3: PACF of model linear trend

The first partial autocorrelation is by far the largest. The partial autocorrelation plot indicates that an \(AR(1)\) model could adequately model the random component of the logarithm of the time series. Recall that in Chapter 5, Lesson 1, we fitted a linear regression model using the value of the partial autocorrelation function for \(k=1\). This helps account for the autocorrelation in the residuals.

The first few partial autocorrelation values are:

Show the code
pacf(linear_ts$.resid, plot=FALSE, lag.max = 10)

Partial autocorrelations of series 'linear_ts$.resid', by lag

     1      2      3      4      5      6      7      8      9     10 
 0.395  0.217  0.012  0.004 -0.030 -0.018 -0.109 -0.011  0.057 -0.192 

The partial autocorrelation when \(k=1\) is approximately 0.395. We will use this value as we recompute the regression coefficients.

Show the code
linear_pacf <- pacf(linear_ts$.resid, plot=FALSE, lag.max = 25)
lag_1 <- linear_pacf$acf[1]

# Load additional packages
pacman::p_load(tidymodels, multilevelmod,
  nlme, broom.mixed)

temp_spec <- linear_reg() |>
  set_engine("gls", correlation = nlme::corAR1(lag_1))

temp_gls <- temp_spec |>
  fit(log(x_t) ~ std_t + month, data = sim_ts)

tidy(temp_gls) |>
  mutate(
    lower = estimate + qnorm(0.025) * std.error,
    upper = estimate + qnorm(0.975) * std.error
  ) 
# A tibble: 13 × 7
   term        estimate std.error statistic   p.value   lower   upper
   <chr>          <dbl>     <dbl>     <dbl>     <dbl>   <dbl>   <dbl>
 1 (Intercept)   2.94     0.00752    391.   4.24e-154  2.92    2.95  
 2 std_t         0.468    0.00339    138.   3.03e-111  0.462   0.475 
 3 monthFeb     -0.0483   0.00813     -5.94 4.62e-  8 -0.0642 -0.0324
 4 monthMar     -0.129    0.00968    -13.3  2.23e- 23 -0.148  -0.110 
 5 monthApr     -0.173    0.0103     -16.9  2.58e- 30 -0.193  -0.153 
 6 monthMay     -0.152    0.0105     -14.5  7.99e- 26 -0.173  -0.132 
 7 monthJun     -0.128    0.0106     -12.1  6.38e- 21 -0.149  -0.107 
 8 monthJul     -0.108    0.0106     -10.2  6.77e- 17 -0.129  -0.0873
 9 monthAug     -0.131    0.0106     -12.4  1.57e- 21 -0.152  -0.110 
10 monthSep     -0.171    0.0105     -16.2  3.71e- 29 -0.192  -0.150 
11 monthOct     -0.193    0.0103     -18.7  1.14e- 33 -0.214  -0.173 
12 monthNov     -0.136    0.00979    -13.9  1.28e- 24 -0.155  -0.117 
13 monthDec     -0.0450   0.00837     -5.37 5.52e-  7 -0.0614 -0.0286

Figure 4 illustrates the original time series (in black) and the fitted model (in blue). The fitted curve highlights the exponential shape of the trend.

Show the code
forecast_df <- linear_lm |> 
  forecast(sim_ts) |>
  as_tibble() |> 
  dplyr::select(std_t, .mean) |> 
  rename(pred = .mean)

sim_ts |>
  left_join(forecast_df, by = "std_t") |>
  as_tsibble(index = dates) |>
  autoplot(.vars = x_t) +
  geom_line(aes(y = pred), color = "#56B4E9", alpha = 0.75) +
    labs(
      x = "Month",
      y = "Simulated Time Series",
      title = "Time Plot of Simulated Time Series with an Exponential Trend",
      subtitle = "Predicted values based on the full linear model are given in blue"
    ) +
    theme_minimal() +
    theme(
      plot.title = element_text(hjust = 0.5),
      plot.subtitle = element_text(hjust = 0.5)
    )
Figure 4: Time plot of the time series and the fitted linear regression model

Small-Group Activity: Retail Sales (20 min)

Figure 5 gives the total sales (in millions of U.S. dollars) for the category “all other general merchandise stores (45299).”

Show the code
# Read in retail sales data for "all other general merchandise stores"
retail_ts <- rio::import("https://byuistats.github.io/timeseries/data/retail_by_business_type.parquet") |>
  filter(naics == 45299) |>
  filter(as_date(month) >= my("Jan 1998")) |>
  as_tsibble(index = month)

retail_ts |>
  autoplot(.vars = sales_millions) +
    labs(
      x = "Month",
      y = "Sales (Millions of U.S. Dollars)",
      title = paste0(retail_ts$business[1], " (", retail_ts$naics[1], ")")
    ) +
    theme_minimal() +
    theme(plot.title = element_text(hjust = 0.5))
Figure 5: Time plot of the total monthly retail sales for all other general merchandise stores (45299)
Check Your Understanding

Use Figure 5 to discuss the following questions with a partner. (A starter code sketch follows the list.)

  • What is the shape of the trend of this time series?
  • Which decomposition would be more appropriate: additive or multiplicative? Justify your answer.
  • Apply the appropriate transformation to the time series.
  • Fit appropriate models using indicator variables for the seasonal component.
  • Determine the “best” model for these data. Justify your decision.
  • Check for autocorrelation in the random terms.
  • Use GLS to fit the model, if there is evidence of autocorrelation in the random component.
  • Plot the fitted values and the time series on the same figure.
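
As a starting point for the model-fitting steps, here is a hedged sketch of one possible approach; it assumes the column names from the code above and the packages loaded earlier, and it is meant to be adapted, not copied blindly.

Show the code
# Starter sketch: compare trends on the log scale with month indicators
retail_models <- retail_ts |>
  mutate(
    t       = row_number(),
    std_t   = (t - mean(t)) / sd(t),
    month_f = factor(month(as_date(month)), levels = 1:12, labels = month.abb)
  ) |>
  model(
    linear = TSLM(log(sales_millions) ~ std_t + month_f),
    quad   = TSLM(log(sales_millions) ~ std_t + I(std_t^2) + month_f)
  )

glance(retail_models) |>
  select(.model, AIC, AICc, BIC)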

Class Activity: Anti-Log Transformation and Bias Correction on Simulated Data (10 min)

Forecasts for a Simulated Time Series

We can use the forecast() function to predict future values of this time series. The code fold below displays the output of the forecast() command. Note that the column labeled x_t (i.e. \(x_t\)), representing the time series, contains a (transformed) normal distribution: the mean and standard deviation shown are the estimated parameters for the distribution of the predicted values of \(\log(x_t)\). If you raise \(e\) to the power of the mean, you get the values in the .mean column.

Forecast of Simulated Data: Show the code
# Fit model (OLS)
sim_reduced_linear_lm1 <- sim_ts |>
  model(sim_reduced_linear1 = TSLM(log(x_t) ~ std_t + month))


# Number of years / months to forecast
n_years_forecast  <- 5
n_months_forecast <- 12 * n_years_forecast

# Last observed t and date from the simulated data
last_t    <- max(sim_ts$t)
last_date <- max(sim_ts$dates)

# Build new data consistent with the indicator-variable spec
new_dat <- tibble(
  # Future time index (no overlap with observed sample)
  t = last_t + seq_len(n_months_forecast),
  
  # Future monthly dates, starting one month after the last observed date
  dates = seq(
    from = last_date %m+% months(1),
    by   = "1 month",
    length.out = n_months_forecast
  )
) |>
  mutate(
    # Month-of-year index and factor (same as in sim_ts)
    month_index = ((t - 1) %% 12) + 1,
    month = factor(month_index, levels = 1:12, labels = month.abb),
    
    # Standardized time using the *training* t from sim_ts
    std_t = (t - mean(pull(sim_ts, t))) / sd(pull(sim_ts, t))
  ) |>
  as_tsibble(index = dates)

sim_reduced_linear_lm1 |> 
  forecast(new_data = new_dat)
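
The forecast output stores the full predictive distribution in the x_t column. If you want numeric prediction intervals instead, hilo() extracts them; a minimal sketch:

Show the code
# Extract 95% prediction intervals from the forecast
sim_reduced_linear_lm1 |>
  forecast(new_data = new_dat) |>
  hilo(level = 95) |>
  as_tibble() |>
  select(dates, .mean, `95%`)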

Figure 6 illustrates the forecasted values for the time series.

Show the code
sim_forecast_plot_regular <- sim_reduced_linear_lm1 |> 
  forecast(new_data = new_dat) |>
  autoplot(sim_ts, level = 95) +
  labs(
    x = "Month",
    y = "x_t",
    title = "Simulated Time Series"
  ) +
  theme_minimal() +
  theme(legend.position = "inset") +
  theme(
    plot.title = element_text(hjust = 0.5)
  )

sim_forecast_plot_logged <- sim_reduced_linear_lm1 |> 
  forecast(new_data = new_dat) |>
  autoplot(sim_ts, level = 95) +
  scale_y_continuous(trans = "log", labels = scales::trans_format("log", scales::math_format(e^.x))) +
  labs(
    x = "Month",
    y = "log(x_t)",
    title = "Logarithm of Simulated Time Series"
  ) +
  theme_minimal() +
  theme(legend.position = "inset")  +
  theme(
    plot.title = element_text(hjust = 0.5)
  )

sim_forecast_plot_regular | sim_forecast_plot_logged
Figure 6: Forecasted values of the time series with 95% prediction interval bands

Bias Correction

The forecasts presented in the left panel were computed by raising \(e\) to the power of the predicted log-values. Unfortunately, this introduces bias into the forecasted means. This bias tends to be large if the regression model does not fit the data closely.

The textbook points out that the bias correction should only be applied to forecasted means, not to simulated values. If you simulate values on the transformed scale, you simply apply the inverse of the original transformation to them, without any correction.

When we back-transform the forecasts by exponentiating, we introduce a bias. We can account for this bias by applying one of two adjustments to the forecasted means. The theory behind these corrections is alluded to in the textbook, but it is not essential here.

There are two common patterns observed in the residual series: (1) Gaussian white noise or (2) Non-Normal values.

We can use the skewness statistic to assess the shape of the residual series. When the skewness is less than -1 or greater than 1, we say that the distribution is highly skewed. For skewness values between -1 and -0.5 or between 0.5 and 1, we say there is moderate skewness. If skewness lies between -0.5 and 0.5, the distribution is considered roughly symmetric.
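
The skewness statistic is essentially the average cubed \(z\)-score of the residuals. Here is a minimal sketch of the computation, checked against skewness() from the moments package (which package supplies skewness() in this lesson is an assumption):

Show the code
# Sample skewness by hand: the third standardized moment.
# moments::skewness standardizes with the population standard deviation
# (dividing by n), so we do the same here.
pacman::p_load(moments)

skew_by_hand <- function(x) {
  m <- mean(x)
  s <- sqrt(mean((x - m)^2))   # population standard deviation
  mean(((x - m) / s)^3)        # average cubed z-score
}

set.seed(1)
x <- rexp(200)                 # a right-skewed example
c(by_hand = skew_by_hand(x), moments = moments::skewness(x))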

Log-Normal Correction

Normally-Distributed Residual Series

If the residual series follows a normal distribution, we multiply the means of the forecasted values \(\hat x_t\) by the factor \(e^{\frac{1}{2} \sigma^2}\):

\[ \hat x_t' = e^{\frac{1}{2} \sigma^2} \cdot \hat x_t \]

where \(\left\{ \hat x_t: t = 1, \ldots, n \right\}\) gives the values of the forecasted series, and \(\left\{ \hat x_t': t = 1, \ldots, n \right\}\) is the adjusted forecasted values.
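
This factor comes from the mean of a log-normal distribution: if \(\log(X) \sim N(\mu, \sigma^2)\), then

\[ E[X] = e^{\mu + \frac{1}{2}\sigma^2} = e^{\frac{1}{2}\sigma^2} \cdot e^{\mu}, \]

so exponentiating the predicted log-mean \(e^{\mu}\) alone underestimates the mean of \(x_t\) by exactly the factor \(e^{\frac{1}{2}\sigma^2}\).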

Empirical Correction

Non-Normally Distributed Residual Series

If the residual series lacks normality, then we can adjust the forecasts \(\left\{ \hat x_t \right\}\) as follows:

\[ \hat x_t' = e^{\widehat{\log x_t}} \sum_{t=1}^{n} \frac{e^{z_t}}{n} \]

where \(\left\{ \widehat{\log x_t}: t = 1, \ldots, n \right\}\) is the forecasted series obtained by fitting the regression model on the log scale, and \(\left\{ z_t \right\}\) is the residual series from this fitted model in the log-transformed units.

The code given below can be used to compute the corrected mean values for the simulated data.

Show the code
sim_model_values <- sim_reduced_linear_lm1 |>
  glance()

sim_model_check <- sim_model_values |>
  mutate(
    sigma = sqrt(sigma2),
    lognorm_cf = exp((1/2) * sigma2),
    empirical_cf = sim_reduced_linear_lm1 |>
      residuals() |>
      pull(.resid) |>
      exp() |>
      mean()) |>
  select(lognorm_cf, empirical_cf)

# sim_pred <- sim_reduced_linear_lm1 |> 
#   forecast(new_data = new_dat) |>
#   mutate(.mean_correction = .mean * sim_model_check$empirical_cf) |>
#   select(t, x_t, .mean, .mean_correction)

The log-normal adjustment is \(1.00025\), and the empirical adjustment is \(1.00022\). Both of these values are extremely close to 1, so they will have a negligible impact on the predicted values.

This result does not generalize. In other situations, there can be a substantial effect of this bias on the predicted means.

Histogram of residuals

Figure 7 gives a histogram of the residuals and compute the skewness of the residual series.

Show the code
sim_resid_df <- sim_reduced_linear_lm1 |> 
  residuals() |> 
  as_tibble() |> 
  dplyr::select(.resid) |>
  rename(x = .resid) 
  
sim_resid_df |>
  mutate(density = dnorm(x, mean(sim_resid_df$x), sd(sim_resid_df$x))) |>
  ggplot(aes(x = x)) +
    geom_histogram(aes(y = after_stat(density)),
        color = "white", fill = "#56B4E9", binwidth = 0.01) +
    geom_line(aes(x = x, y = density)) +
    theme_bw() +
    labs(
      x = "Values",
      y = "Frequency",
      title = "Histogram of Residuals from the Reduced Linear Model"
    ) +
    theme(
      plot.title = element_text(hjust = 0.5)
    )
Figure 7: Histogram of the values in the residual series based on the model with a linear trend and monthly indicator variables

We can use the command skewness(sim_resid_df$x) to compute the skewness of these residuals: -0.156. This value is close to zero (specifically, between -0.5 and 0.5), so we conclude that the residual series is approximately symmetric and treat it as normally distributed. We can therefore apply the log-normal correction to our mean forecast values.

Class Activity: Apple Revenue (10 min)

We take another look at the quarterly revenue reported by Apple Inc. from Q1 of 2005 through Q1 of 2012.

Visualizing the Time Series

Figure 8 gives the time plot illustrating the quarterly revenue reported by Apple from the first quarter of 2005 through the first quarter of 2012.

Show the code
apple_raw <- rio::import("https://byuistats.github.io/timeseries/data/apple_revenue.csv") |>
  mutate(dates = round_date(mdy(date), unit = "quarter")) |>
  arrange(dates)

apple_ts <- apple_raw |>
  filter(dates <= my("Jan 2012")) |>
  dplyr::select(dates, revenue_billions) |>
  mutate(
    t     = row_number(),
    std_t = (t - mean(t)) / sd(t),
    # quarter-of-year as indicator-style factor
    qtr   = factor(quarter(dates), levels = 1:4, labels = paste0("Q", 1:4))
  ) |>
  as_tsibble(index = dates)

# Plots stay the same, now just using the updated apple_ts

apple_plot_regular <- apple_ts |>
  autoplot(.vars = revenue_billions) +
  stat_smooth(
    method  = "lm",
    formula = y ~ x,
    geom    = "smooth",
    se      = FALSE,
    color   = "#E69F00",
    linetype = "dotted"
  ) +
  labs(
    x = "Quarter",
    y = "Revenue (Billions USD)",
    title = "Apple Revenue (in Billions USD)"
  ) +
  theme_minimal() +
  theme(plot.title = element_text(hjust = 0.5))

apple_plot_transformed <- apple_ts |>
  autoplot(.vars = log(revenue_billions)) +
  stat_smooth(
    method  = "lm",
    formula = y ~ x,
    geom    = "smooth",
    se      = FALSE,
    color   = "#E69F00",
    linetype = "dotted"
  ) +
  labs(
    x = "Quarter",
    y = "Logarithm of Revenue",
    title = "Logarithm of Apple Revenue"
  ) +
  theme_minimal() +
  theme(plot.title = element_text(hjust = 0.5))

apple_plot_regular | apple_plot_transformed
Figure 8: Apple quarterly revenue figures (in billions of U.S. dollars) from Q1 of 2005 to Q1 of 2012; the figure on the left presents the revenue in dollars and the figure on the right gives the logarithm of the quarterly revenue; a simple linear regression line is given for reference

Finding a Suitable Model

We start by fitting a cubic trend to the logarithm of the quarterly revenues. The full model is fitted here:

Show the code
# Cubic model with standardized time variable

apple_cubic_lm <- apple_ts |>
  model(apple_cubic = TSLM(log(revenue_billions) ~ std_t + I(std_t^2) + I(std_t^3) + qtr)) # Q1 is the omitted (baseline) quarter
apple_cubic_lm |>
  tidy() |>
  mutate(sig = p.value < 0.05)
# A tibble: 7 × 7
  .model      term        estimate std.error statistic  p.value sig  
  <chr>       <chr>          <dbl>     <dbl>     <dbl>    <dbl> <lgl>
1 apple_cubic (Intercept)   2.15      0.0888    24.2   2.38e-17 TRUE 
2 apple_cubic std_t         1.01      0.0972    10.4   6.41e-10 TRUE 
3 apple_cubic I(std_t^2)   -0.0158    0.0445    -0.355 7.26e- 1 FALSE
4 apple_cubic I(std_t^3)    0.0866    0.0516     1.68  1.07e- 1 FALSE
5 apple_cubic qtrQ2        -0.498     0.107     -4.66  1.20e- 4 TRUE 
6 apple_cubic qtrQ3        -0.435     0.107     -4.08  4.96e- 4 TRUE 
7 apple_cubic qtrQ4        -0.330     0.107     -3.09  5.36e- 3 TRUE 

The quadratic and cubic trend terms are not statistically significant. We now eliminate the cubic term and fit a full model with a quadratic trend.

Show the code
apple_quad_lm <- apple_ts |>
  model(apple_quad = TSLM(log(revenue_billions) ~ std_t + I(std_t^2) + qtr )) 
apple_quad_lm |>
  tidy() |>
  mutate(sig = p.value < 0.05) 
# A tibble: 6 × 7
  .model     term        estimate std.error statistic  p.value sig  
  <chr>      <chr>          <dbl>     <dbl>     <dbl>    <dbl> <lgl>
1 apple_quad (Intercept)   2.15      0.0923    23.3   1.68e-17 TRUE 
2 apple_quad std_t         1.16      0.0403    28.7   1.64e-19 TRUE 
3 apple_quad I(std_t^2)   -0.0158    0.0462    -0.341 7.36e- 1 FALSE
4 apple_quad qtrQ2        -0.507     0.111     -4.58  1.33e- 4 TRUE 
5 apple_quad qtrQ3        -0.435     0.111     -3.93  6.73e- 4 TRUE 
6 apple_quad qtrQ4        -0.320     0.111     -2.89  8.22e- 3 TRUE 

The quadratic trend term is not statistically significant. We will fit a full model with a linear trend.

Show the code
# Linear trend with standardized time variable

apple_linear_lm <- apple_ts |>
  model(apple_linear = TSLM(log(revenue_billions) ~ std_t + qtr ))
apple_linear_lm |>
  tidy() |>
  mutate(sig = p.value < 0.05)
# A tibble: 5 × 7
  .model       term        estimate std.error statistic  p.value sig  
  <chr>        <chr>          <dbl>     <dbl>     <dbl>    <dbl> <lgl>
1 apple_linear (Intercept)    2.13     0.0737     28.9  3.64e-20 TRUE 
2 apple_linear std_t          1.16     0.0396     29.2  2.81e-20 TRUE 
3 apple_linear qtrQ2         -0.503    0.108      -4.66 9.98e- 5 TRUE 
4 apple_linear qtrQ3         -0.431    0.108      -3.99 5.42e- 4 TRUE 
5 apple_linear qtrQ4         -0.316    0.108      -2.93 7.38e- 3 TRUE 

All the terms are statistically significant in this model. We now compare the models we have fitted using the AIC, AICc, and BIC criteria.

Show the code
model_combined <- apple_ts |>
  model(
    apple_cubic = TSLM(log(revenue_billions) ~ std_t + I(std_t^2) + I(std_t^3) + qtr),
    apple_quad = TSLM(log(revenue_billions) ~ std_t + I(std_t^2) + qtr),
    apple_linear = TSLM(log(revenue_billions) ~ std_t + qtr)
  )

glance(model_combined) |>
  select(.model, AIC, AICc, BIC)
Table 2: Comparison of the AIC, AICc, and BIC values for the models fitted to the logarithm of Apple’s quarterly revenue.
| Model        | AIC       | AICc      | BIC       |
|--------------|-----------|-----------|-----------|
| apple_cubic  | -84.0     | -76.8     | -73.1     |
| apple_quad   | -82.5     | -77.2     | -73.0     |
| apple_linear | **-84.4** | **-80.6** | **-76.2** |

We will apply the apple_linear model, which has the lowest AIC, AICc, and BIC.

Using the Residuals to Determine the Appropriate Correction

The residuals of this model are illustrated in Figure 9.

Show the code
apple_resid_df <- model_combined |> 
  dplyr::select(apple_linear) |>
  residuals() |> 
  as_tibble() |> 
  dplyr::select(.resid) |>
  rename(x = .resid) 
  
apple_resid_df |>
  mutate(density = dnorm(x, mean(apple_resid_df$x), sd(apple_resid_df$x))) |>
  ggplot(aes(x = x)) +
    geom_histogram(aes(y = after_stat(density)),
        color = "white", fill = "#56B4E9", binwidth = 0.05) +
    geom_line(aes(x = x, y = density)) +
    theme_bw() +
    labs(
      x = "Values",
      y = "Frequency",
      title = "Histogram of Residuals from the Reduced Model with a Linear Trend"
    ) +
    theme(
      plot.title = element_text(hjust = 0.5)
    )
Figure 9: Histogram of the residuals from the reduced model with a linear trend component

Using the command skewness(apple_resid_df$x), we compute the skewness of these residuals as -0.711. This value is not close to zero (it is between -1 and -0.5), indicating moderate skewness. We therefore apply the empirical correction to our mean forecast values.

Applying the Correction Factor

Table 3 summarizes some of the corrected mean values. Note that in this particular case, the corrected values are very close to the uncorrected values.

Show the code
apple_model_values <- model_combined |> 
  dplyr::select(apple_linear) |>
  glance()

apple_model_check <- apple_model_values |>
  mutate(
    sigma = sqrt(sigma2),
    lognorm_cf = exp((1/2) * sigma2),
    empirical_cf = apple_linear_lm |>
      residuals() |>
      pull(.resid) |>
      exp() |>
      mean()) |>
  select(.model, r_squared, sigma2, sigma, lognorm_cf, empirical_cf)


Based on the residuals of this model, the empirical adjustment factor is approximately \(1.017\); each corrected mean in Table 3 is about 1.7% larger than the corresponding uncorrected mean.

Show the code
apple_pred <- model_combined |> 
  dplyr::select(apple_linear) |>
  forecast(new_data = apple_ts) |>
  mutate(.mean_correction = .mean * apple_model_check$empirical_cf) |>     
  select(t, revenue_billions, .mean, .mean_correction)
Table 3: Fitted values for the model representing Apple’s quarterly revenue
| t  | .mean  | .mean_correction |
|----|--------|------------------|
| 1  | 1.293  | 1.316            |
| 2  | 0.896  | 0.911            |
| 3  | 1.103  | 1.122            |
| 4  | 1.416  | 1.441            |
| ⋮  | ⋮      | ⋮                |
| 28 | 36.873 | 37.516           |
| 29 | 57.956 | 58.965           |

Plotting the Fitted Values

These fitted values are illustrated in Figure 10.

Show the code
apple_ts |>
  autoplot(.vars = revenue_billions) +
  geom_line(data = apple_pred, aes(x = dates, y = .mean_correction), color = "#56B4E9") +
    labs(
      x = "Quarter",
      y = "Revenue (Billions USD)",
      title = "Apple Revenue in Billions of U.S. Dollars"
    ) +
    theme_minimal() +
    theme(plot.title = element_text(hjust = 0.5))
Figure 10: Apple Inc.’s quarterly revenue in billions of U.S. dollars through first quarter of 2012 (in black) and the fitted regression model (in blue)

This time series was used as an example; we are not seriously interested in forecasting Apple’s future revenue with this model. However, it is an excellent example of real-world exponential growth in a time series with a seasonal component. Limiting factors prevent exponential growth from being sustainable in the long run. After 2012, Apple’s quarterly revenues follow a different, but still very impressive, pattern. This is illustrated in Figure 11.

Show the code
apple_raw |>
  dplyr::select(dates, revenue_billions) |>
  as_tsibble(index = dates) |>
  autoplot(.vars = revenue_billions) +
  geom_line(
    data = apple_raw |> filter(dates >= my("Jan 2012")), 
    aes(x = dates, y = revenue_billions), 
    color = "#D55E00"
  ) +
  labs(
    x = "Quarter",
    y = "Revenue (Billions USD)",
    title = "Apple Revenue (in Billions USD)"
  ) +
  theme_minimal() +
  theme(plot.title = element_text(hjust = 0.5))
Figure 11: Apple Inc.’s quarterly revenue in billions of U.S. dollars; values beginning with the first quarter of 2012 are shown in orange

Small-Group Activity: Industrial Electricity Consumption in Texas

These data represent the amount of electricity used each month for industrial applications in Texas.

Show the code
elec_ts <- rio::import("https://byuistats.github.io/timeseries/data/electricity_tx.csv") |>
  dplyr::select(-comments) |>
  mutate(date = my(month),
         month = as.factor(month(date))) |>
  mutate(
    t = 1:n(),
    std_t = (t - mean(t)) / sd(t)
  ) |>
  as_tsibble(index = date)

elec_plot_raw <- elec_ts |>
    autoplot(.vars = megawatthours) +
    labs(
      x = "Month",
      y = "Megawatt-hours",
      title = "Texas' Industrial Electricity Use"
    ) +
    theme_minimal() +
    theme(
      plot.title = element_text(hjust = 0.5)
    )

elec_plot_log <- elec_ts |>
    autoplot(.vars = log(megawatthours)) +
    labs(
      x = "Month",
      y = "log(Megwatt-hours)",
      title = "Log of Texas' Industrial Electricity Use"
    ) +
    theme_minimal() +
    theme(
      plot.title = element_text(hjust = 0.5)
    )

elec_plot_raw | elec_plot_log

Check Your Understanding

Use the Texas industrial electricity consumption data to do the following. (A starter code sketch follows the list.)

  • Select an appropriate fitted model using the AIC, AICc, or BIC criteria.
  • Use the residuals to determine the appropriate correction for the data.
  • Forecast the data for the next 5 years.
  • Apply the appropriate correction to the forecasted values.
  • Plot the fitted (forecasted) values along with the time series.
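
Here is a hedged starter sketch for the forecasting steps (one possible approach, mirroring the simulated example above). It assumes the columns created in the code above; new_data() extends the monthly index five years (60 months) past the sample.

Show the code
# Starter sketch: fit on the log scale, then forecast 60 months ahead
elec_fit <- elec_ts |>
  model(elec_linear = TSLM(log(megawatthours) ~ std_t + month))

future_dat <- new_data(elec_ts, n = 60) |>
  mutate(
    t     = max(elec_ts$t) + row_number(),
    std_t = (t - mean(elec_ts$t)) / sd(elec_ts$t),   # standardize using training t
    month = as.factor(month(date))
  )

elec_fit |>
  forecast(new_data = future_dat) |>
  autoplot(elec_ts)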

Homework Preview (5 min)

  • Review upcoming homework assignment
  • Clarify questions
Download Homework
