{{< include outcomes/_chapter_5_lesson_4_outcomes.qmd >}}
Preparation
Read Sections 5.7, 5.9-5.11
Learning Journal Exchange (10 min)
Review another student’s journal
What would you add to your learning journal after reading another student’s?
What would you recommend the other student add to their learning journal?
Sign the Learning Journal review sheet for your peer
Class Activity: Simulate an Exponential Trend with a Seasonal Component (15 min)
Many processes grow over time at a roughly constant percentage rate. Such variables can be modeled as exponential functions. Exponential trends are difficult to handle in a linear regression, so we use a logarithmic transformation to make the trend linear and easier to work with.
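To see why the logarithm helps, note that the simulated series below has an exponential trend with seasonal and random components in the exponent. Taking the natural logarithm converts it into an additive model that is linear in the parameters:

\[
x_t = e^{\beta_0 + \beta_1 t + s_t + z_t}
\qquad \Longrightarrow \qquad
\log(x_t) = \beta_0 + \beta_1 t + s_t + z_t
\]

where \(s_t\) is the seasonal component and \(z_t\) is the random component.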
We will simulate a time series that has a seasonal component and impose an exponential trend.
Figures
Figure 1 shows the simulated time series and its natural logarithm.
Show the code
n_years <- 9
n_months <- n_years * 12
sigma <- 0.05

# Dates
dates_seq <- seq(floor_date(now(), unit = "year"),
                 length.out = n_months + 1,
                 by = "-1 month") |>
  floor_date(unit = "month") |>
  sort() |>
  head(n_months)

t <- 1:n_months

# Month-of-year index: 1 = Jan, ..., 12 = Dec
month_index <- ((t - 1) %% 12) + 1
month_factor <- factor(month_index, levels = 1:12, labels = month.abb)

# ----- KEY PART: indicator-style seasonal effects by month -----
# Compute the seasonal effect for each month-of-year
# using your original harmonic coefficients, but evaluated
# once per month instead of for every t.
seasonal_effect_by_month <- sapply(1:12, function(m) {
  0.03 * sin(2 * pi * 1 * m / 12) + 0.04 * cos(2 * pi * 1 * m / 12) +
    0.05 * sin(2 * pi * 2 * m / 12) + 0.03 * cos(2 * pi * 2 * m / 12) +
    0.01 * sin(2 * pi * 3 * m / 12) + 0.005 * cos(2 * pi * 3 * m / 12)
})

# Map each time point to its month-specific seasonal effect
seasonal_term <- seasonal_effect_by_month[month_index]

# AR(2) noise as before
random_ar <- arima.sim(model = list(ar = c(0.5, 0.2)),
                       n = n_months, sd = 0.02)

# Build tsibble
sim_ts <- tibble(
    t = t,
    dates = dates_seq,
    month = month_factor,
    random = random_ar,
    seasonal = seasonal_term,
    x_t = exp(2 + 0.015 * t + seasonal + random)
  ) |>
  mutate(std_t = (t - mean(t)) / sd(t)) |>
  as_tsibble(index = dates)

# (Optional) create explicit 0/1 indicator columns for each month
sim_ts <- sim_ts |>
  bind_cols(
    model.matrix(~ month - 1, data = sim_ts) |> as_tibble()
  )

# Plots
plot_raw <- sim_ts |>
  autoplot(x_t) +
  labs(
    x = "Month",
    y = "x_t",
    title = "Simulated Time Series"
  ) +
  theme_minimal() +
  theme(plot.title = element_text(hjust = 0.5))

plot_log <- sim_ts |>
  autoplot(log(x_t)) +
  labs(
    x = "Month",
    y = "log(x_t)",
    title = "Logarithm of Simulated Time Series"
  ) +
  theme_minimal() +
  theme(plot.title = element_text(hjust = 0.5))

plot_raw | plot_log
Figure 1: Time plot of the time series (left) and the natural logarithm of the time series (right)
We will take the (natural) logarithm of the time series before fitting any linear models. So, our response variable will be \(\log(x_t)\), rather than \(x_t\).
Even though there is no visual evidence of curvature in the trend for the logarithm of the time series, we will start by fitting a model that allows for a cubic trend. (In practice, we would probably not fit this model. However, we want to understand how to choose among different model specifications.)
Model Selection Information Criteria: Balancing Fit and Complexity
In a previous activity, we compared the Cubic, Quadratic, and Linear models. You might have noticed that adding more variables (like \(t^2\) and \(t^3\)) almost always increases the \(R^2\) value and reduces the residual sum of squares. However, a model that hugs the training data too tightly (overfitting) often fails when forecasting new data. To select the best forecasting model, we need a way to penalize complexity. This is the question that information criteria answer: we use them to find the "Goldilocks" model, one that fits the data well but remains simple enough to generalize.
AIC, AICc, and BIC
The motivation behind information criteria is parsimony, often associated with Occam's Razor: given two models with similar predictive power, the simpler one is preferred. We want the model that explains the signal with the fewest possible parameters. All information criteria share the same basic structure:
\[\text{IC} = \text{Badness of Fit} + \text{Penalty for Complexity}\] We want to minimize this value. The "Badness of Fit" is measured by \(-2\log(\mathcal{L})\), where \(\mathcal{L}\) is the likelihood (how probable the data is given the model). The "Penalty" increases as we add parameters (\(k\)).
AIC (Akaike Information Criterion)
\[AIC = -2\log(\mathcal{L}) + 2k\]
The AIC is founded on information theory. It estimates the relative amount of information lost when a model is used to represent the process that generated the data (using the Kullback-Leibler divergence). Intuitively, it tries to predict which model will have the lowest prediction error on future data. The AIC is excellent for forecasting. It tends to prefer slightly more complex models than BIC.
AICc (Corrected AIC)
\[AICc = AIC + \frac{2k(k+1)}{n - k - 1}\]
When the sample size (\(n\)) is small, the standard AIC tends to select models that are too complex (overfitting). Since \(n\) is always finite in practice, it is safe to default to AICc; as \(n \to \infty\), AICc converges to AIC. Because time series data sets are often short, AICc is a particularly useful metric.
BIC (Bayesian Information Criterion)
\[BIC = -2\log(\mathcal{L}) + k\ln(n)\]

The BIC is founded on Bayesian inference. It attempts to estimate the probability that a specific model is the "true" model among the set of candidates. Notice that the penalty is \(k\ln(n)\) rather than \(2k\). Since \(\ln(n) > 2\) for any dataset with \(n > 7\), BIC penalizes complexity much more strictly than AIC. Use BIC when you want your model choice to be conservative given uncertainty in modeling or forecasting. It is great for explanatory modeling (understanding relationships), but it sometimes under-fits for forecasting purposes.

**Summary Rule of Thumb:** Lower is better. The absolute number means nothing; only the difference between models matters. If AICc and BIC disagree, rely on AICc for forecasting and BIC for explanation.
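In practice, we do not compute these criteria by hand. In the fable framework, `glance()` reports the AIC, AICc, and BIC for every model in a mable. Here is a minimal sketch using the simulated series from above (the mable name `sim_models` is illustrative):

```r
# Fit the three competing trend models to the log-transformed series
sim_models <- sim_ts |>
  model(
    cubic  = TSLM(log(x_t) ~ std_t + I(std_t^2) + I(std_t^3) + month),
    quad   = TSLM(log(x_t) ~ std_t + I(std_t^2) + month),
    linear = TSLM(log(x_t) ~ std_t + month)
  )

# glance() returns one row per model; lower values indicate
# a better balance between fit and complexity
sim_models |>
  glance() |>
  dplyr::select(.model, AIC, AICc, BIC)
```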
In our specific simulated example below, the Cubic model attains the lowest AIC, AICc, and BIC (see Table 1). Remember that the criteria penalize the extra parameters (\(\beta_2\) and \(\beta_3\)); they only favor the more complex model when those parameters contribute enough information to justify their "cost." Because the series is simulated, a different simulated draw can easily favor a simpler model.
Even though the quadratic terms were statistically significant, there is no visual indication of a quadratic trend in the time series after taking the logarithm. Hence, we will now fit a linear model to the log-transformed time series. We want to be able to compare the fit of the models with a linear trend against the models with quadratic and cubic trends.
Full Linear Model
First, we fit a full model with a linear trend and monthly indicator variables. We can express this model as:

\[
\log(x_t) = \beta_0 + \beta_1 \, t^{*} + \beta_2 \, \text{Feb}_t + \beta_3 \, \text{Mar}_t + \cdots + \beta_{12} \, \text{Dec}_t + z_t
\]

where \(t^{*}\) is the standardized time variable, each month term is a 0/1 indicator variable, and \(z_t\) is the random component.
Table 1: Comparison of the AIC, AICc, and BIC values for the models fitted to the logarithm of the simulated time series.
| Model  | AIC        | AICc       | BIC        |
|--------|------------|------------|------------|
| cubic  | **-817.1** | **-811.2** | **-774.2** |
| quad   | -812.3     | -807.0     | -772.0     |
| linear | -805.3     | -800.8     | -767.7     |
Investigating Autocorrelation of the Random Component
Recall that if there is autocorrelation in the random component, the standard errors of the parameter estimates tend to be underestimated. We can account for this autocorrelation using generalized least squares (GLS), if needed.
Linear Trend
Figure 2 illustrates the ACF of the residuals from the model with a linear trend.
Notice that the residual correlogram indicates a positive autocorrelation in the values. This suggests that the standard errors of the regression coefficients will be underestimated, which means that some predictors can appear to be statistically significant when they are not.
Figure 3 illustrates the PACF of the residuals from the model with a linear trend.
Show the code
pacf(linear_ts$.resid, plot = TRUE, lag.max = 25)
Figure 3: PACF of the residuals from the model with a linear trend
Only the first partial autocorrelation is statistically significant. The partial autocorrelation plot indicates that an \(AR(1)\) model could adequately model the random component of the logarithm of the time series. Recall that in Chapter 5, Lesson 1, we fitted a linear regression model using the value of the partial autocorrelation function for \(k=1\). This helps account for the autocorrelation in the residuals.
The first few partial autocorrelation values are:
Show the code
pacf(linear_ts$.resid, plot = FALSE, lag.max = 10)
Partial autocorrelations of series 'linear_ts$.resid', by lag
1 2 3 4 5 6 7 8 9 10
0.395 0.217 0.012 0.004 -0.030 -0.018 -0.109 -0.011 0.057 -0.192
The partial autocorrelation when \(k=1\) is approximately 0.395. We will use this value as we recompute the regression coefficients.
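One common way to do this is a GLS fit with an AR(1) correlation structure, using the lag-1 partial autocorrelation as the correlation parameter. Here is a minimal sketch using `nlme::gls()` (the object name is illustrative, and `gls()` is one of several ways to carry this out):

```r
library(nlme)

# Refit the linear trend plus monthly indicators, modeling the random
# component as AR(1) with correlation parameter 0.395 (the lag-1 PACF)
sim_gls <- gls(
  log(x_t) ~ std_t + month,
  data = as_tibble(sim_ts),
  correlation = corAR1(0.395, form = ~ t),
  method = "ML"
)

# Coefficient table with standard errors that account for autocorrelation
summary(sim_gls)$tTable
```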
Figure 4 illustrates the original time series (in black) and the fitted model (in blue). A dotted line showing the simple least squares fit is included for reference; it helps highlight the exponential shape of the trend.
Show the code
forecast_df <- linear_lm |>
  forecast(sim_ts) |>
  as_tibble() |>
  dplyr::select(std_t, .mean) |>
  rename(pred = .mean)

sim_ts |>
  left_join(forecast_df, by = "std_t") |>
  as_tsibble(index = dates) |>
  autoplot(.vars = x_t) +
  geom_line(aes(y = pred), color = "#56B4E9", alpha = 0.75) +
  labs(
    x = "Month",
    y = "Simulated Time Series",
    title = "Time Plot of Simulated Time Series with an Exponential Trend",
    subtitle = "Predicted values based on the full linear model are given in blue"
  ) +
  theme_minimal() +
  theme(
    plot.title = element_text(hjust = 0.5),
    plot.subtitle = element_text(hjust = 0.5)
  )
Figure 4: Time plot of the time series and the fitted linear regression model
Small-Group Activity: Retail Sales (20 min)
Figure 5 gives the total sales (in millions of U.S. dollars) for the category “all other general merchandise stores (45299).”
Show the code
# Read in retail sales data for "all other general merchandise stores"
retail_ts <- rio::import("https://byuistats.github.io/timeseries/data/retail_by_business_type.parquet") |>
  filter(naics == 45299) |>
  filter(as_date(month) >= my("Jan 1998")) |>
  as_tsibble(index = month)

retail_ts |>
  autoplot(.vars = sales_millions) +
  labs(
    x = "Month",
    y = "Sales (Millions of U.S. Dollars)",
    title = paste0(retail_ts$business[1], " (", retail_ts$naics[1], ")")
  ) +
  theme_minimal() +
  theme(plot.title = element_text(hjust = 0.5))
Figure 5: Time plot of the total monthly retail sales for all other general merchandise stores (45299)
Check Your Understanding
With a partner, use Figure 5 and the retail sales data to work through the following questions and tasks.
What is the shape of the trend of this time series?
Which decomposition would be more appropriate: additive or multiplicative? Justify your answer.
Apply the appropriate transformation to the time series.
Fit appropriate models utilizing the indicator terms for seasonal components.
Determine the “best” model for these data. Justify your decision.
Check for autocorrelation in the random terms.
Use GLS to fit the model, if there is evidence of autocorrelation in the random component.
Plot the fitted values and the time series on the same figure.
Class Activity: Anti-Log Transformation and Bias Correction on Simulated Data (10 min)
Forecasts for a Simulated Time Series
We can use the forecast() function to predict future values of this time series. The code fold below displays the output of the forecast() command. Note that the column labeled x_t (i.e., \(x_t\)), which represents the time series, is populated with a normal distribution for each forecasted period. The mean and standard deviation shown are the estimated parameters for the distribution of the predicted values of \(\log(x_t)\). If you raise \(e\) to the power of the mean, you get the values in the .mean column.
Forecast of Simulated Data: Show the code
# Fit model (OLS)
sim_reduced_linear_lm1 <- sim_ts |>
  model(sim_reduced_linear1 = TSLM(log(x_t) ~ std_t + month))

# Number of years / months to forecast
n_years_forecast <- 5
n_months_forecast <- 12 * n_years_forecast

# Last observed t and date from the simulated data
last_t <- max(sim_ts$t)
last_date <- max(sim_ts$dates)

# Build new data consistent with the indicator-variable spec
new_dat <- tibble(
    # Future time index (no overlap with observed sample)
    t = last_t + seq_len(n_months_forecast),
    # Future monthly dates, starting one month after the last observed date
    dates = seq(
      from = last_date %m+% months(1),
      by = "1 month",
      length.out = n_months_forecast
    )
  ) |>
  mutate(
    # Month-of-year index and factor (same as in sim_ts)
    month_index = ((t - 1) %% 12) + 1,
    month = factor(month_index, levels = 1:12, labels = month.abb),
    # Standardized time using the *training* t from sim_ts
    std_t = (t - mean(pull(sim_ts, t))) / sd(pull(sim_ts, t))
  ) |>
  as_tsibble(index = dates)

sim_reduced_linear_lm1 |>
  forecast(new_data = new_dat)
Figure 6 illustrates the forecasted values for the time series.
Figure 6: Forecasted values of the time series with 95% confidence bands
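The 95% bands shown in Figure 6 can also be extracted explicitly with `hilo()`; a minimal sketch, reusing `new_dat` from above:

```r
# Extract point forecasts and 95% intervals on the original scale
sim_reduced_linear_lm1 |>
  forecast(new_data = new_dat) |>
  hilo(level = 95) |>
  as_tibble() |>
  dplyr::select(dates, .mean, `95%`)
```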
Bias Correction
The forecasts presented in Figure 6 were computed by raising \(e\) to the power of the predicted log-values. Unfortunately, this introduces bias in the forecasted means. This bias tends to be large if the regression model does not fit the data closely.
The textbook points out that the bias correction should only be applied to forecasted means, not to simulated values. In other words, if you simulate values on the transformed scale and back-transform them with the inverse transformation, you should not apply the correction factor to the resulting values.
When we apply the inverse transform to the predicted values, we introduce a bias. We can account for this bias by applying one of two adjustments to our mean values. The theory behind these adjustments is alluded to in the textbook, but it is not essential here.

There are two common patterns observed in the residual series: (1) Gaussian white noise, or (2) non-normal values.
We can use the skewness statistic to assess the shape of the residual series. When the skewness is less than -1 or greater than 1, we say that the distribution is highly skewed. For skewness values between -1 and -0.5 or between 0.5 and 1, we say there is moderate skewness. If skewness lies between -0.5 and 0.5, the distribution is considered roughly symmetric.
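The `skewness()` function (from the moments package, one common source) computes this statistic. As an illustrative sketch, the rule of thumb above can be wrapped in a hypothetical helper function:

```r
library(moments)

# Classify a residual series using the skewness rule of thumb above
skewness_category <- function(z) {
  s <- skewness(z)
  if (abs(s) > 1) {
    "highly skewed"
  } else if (abs(s) > 0.5) {
    "moderately skewed"
  } else {
    "roughly symmetric"
  }
}

skewness_category(rnorm(250))  # typically "roughly symmetric"
```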
Log-Normal Correction
Normally-Distributed Residual Series
If the residual series follows a normal distribution, we multiply the means of the forecasted values \(\hat x_t\) by the factor \(e^{\frac{1}{2} \sigma^2}\), where \(\sigma^2\) is the variance of the residual series:

\[
\hat x_t' = \hat x_t \, e^{\frac{1}{2} \sigma^2}
\]

where \(\left\{ \hat x_t: t = 1, \ldots, n \right\}\) gives the values of the forecasted series, and \(\left\{ \hat x_t': t = 1, \ldots, n \right\}\) gives the adjusted forecasted values.
Empirical Correction
Non-Normally Distributed Residual Series
If the residual series is not normally distributed, then we can adjust the forecasts \(\left\{ \hat x_t \right\}\) using the observed residuals \(\left\{ z_t \right\}\) as follows:

\[
\hat x_t' = \hat x_t \left( \frac{1}{n} \sum_{i=1}^{n} e^{z_i} \right)
\]
The log-normal adjustment is \(1.00025\), and the empirical adjustment is \(1.00022\). Both of these values are extremely close to 1, so they have a negligible impact on the predicted values.
This result does not generalize. In other situations, this bias can have a substantial effect on the predicted means.
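As a sketch, both adjustment factors can be computed directly from the residuals of the log-scale model fitted above (`sim_reduced_linear_lm1`); these should approximately reproduce the adjustments reported above:

```r
# Residuals of the model fitted to log(x_t)
z <- residuals(sim_reduced_linear_lm1)$.resid

# Log-normal correction factor: e^(sigma^2 / 2)
lognormal_adj <- exp(0.5 * var(z))

# Empirical correction factor: the mean of e^(z_t)
empirical_adj <- mean(exp(z))

c(lognormal = lognormal_adj, empirical = empirical_adj)
```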
Histogram of residuals
Figure 7 gives a histogram of the residuals; we also compute the skewness of the residual series.
Show the code
sim_resid_df <- sim_reduced_linear_lm1 |>
  residuals() |>
  as_tibble() |>
  dplyr::select(.resid) |>
  rename(x = .resid)

sim_resid_df |>
  mutate(density = dnorm(x, mean(sim_resid_df$x), sd(sim_resid_df$x))) |>
  ggplot(aes(x = x)) +
  geom_histogram(aes(y = after_stat(density)),
                 color = "white", fill = "#56B4E9", binwidth = 0.01) +
  geom_line(aes(x = x, y = density)) +
  theme_bw() +
  labs(
    x = "Values",
    y = "Frequency",
    title = "Histogram of Residuals from the Reduced Linear Model"
  ) +
  theme(plot.title = element_text(hjust = 0.5))
Figure 7: Histogram of the values in the residual series based on the model with a linear trend and monthly indicator variables
We can use the command skewness(sim_resid_df$x) to compute the skewness of these residuals: -0.156. This number is close to zero (specifically, between -0.5 and 0.5), so we conclude that the residual series is approximately normally distributed. We can apply the log-normal correction to our mean forecast values.
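A minimal sketch of applying this correction to the forecasted means, reusing `new_dat` from the forecasting code above (the `.mean_correction` column name mirrors the one used in the Apple example later in this lesson):

```r
# Log-normal correction factor computed from the residual variance
adj <- exp(0.5 * var(sim_resid_df$x))

# Multiply the back-transformed forecast means by the correction factor
sim_forecast_adj <- sim_reduced_linear_lm1 |>
  forecast(new_data = new_dat) |>
  as_tibble() |>
  mutate(.mean_correction = .mean * adj)

head(sim_forecast_adj)
```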
Class Activity: Apple Revenue (10 min)
We take another look at the quarterly revenue reported by Apple Inc. from Q1 of 2005 through Q1 of 2012.
Visualizing the Time Series
Figure 8 gives the time plot illustrating the quarterly revenue reported by Apple from the first quarter of 2005 through the first quarter of 2012.
Show the code
apple_raw <- rio::import("https://byuistats.github.io/timeseries/data/apple_revenue.csv") |>
  mutate(dates = round_date(mdy(date), unit = "quarter")) |>
  arrange(dates)

apple_ts <- apple_raw |>
  filter(dates <= my("Jan 2012")) |>
  dplyr::select(dates, revenue_billions) |>
  mutate(
    t = row_number(),
    std_t = (t - mean(t)) / sd(t),
    # quarter-of-year as indicator-style factor
    qtr = factor(quarter(dates), levels = 1:4, labels = paste0("Q", 1:4))
  ) |>
  as_tsibble(index = dates)

# Plots stay the same, now just using the updated apple_ts
apple_plot_regular <- apple_ts |>
  autoplot(.vars = revenue_billions) +
  stat_smooth(
    method = "lm",
    formula = y ~ x,
    geom = "smooth",
    se = FALSE,
    color = "#E69F00",
    linetype = "dotted"
  ) +
  labs(
    x = "Quarter",
    y = "Revenue (Billions USD)",
    title = "Apple Revenue (in Billions USD)"
  ) +
  theme_minimal() +
  theme(plot.title = element_text(hjust = 0.5))

apple_plot_transformed <- apple_ts |>
  autoplot(.vars = log(revenue_billions)) +
  stat_smooth(
    method = "lm",
    formula = y ~ x,
    geom = "smooth",
    se = FALSE,
    color = "#E69F00",
    linetype = "dotted"
  ) +
  labs(
    x = "Quarter",
    y = "Logarithm of Revenue",
    title = "Logarithm of Apple Revenue"
  ) +
  theme_minimal() +
  theme(plot.title = element_text(hjust = 0.5))

apple_plot_regular | apple_plot_transformed
Figure 8: Apple quarterly revenue figures (in billions of U.S. dollars) from Q1 of 2005 to Q1 of 2012; the figure on the left presents the revenue in dollars and the figure on the right gives the logarithm of the quarterly revenue; a simple linear regression line is given for reference
Finding a Suitable Model
We start by fitting a cubic trend to the logarithm of the quarterly revenues. The full model is fitted here:
Show the code
# Cubic model with standardized time variable
apple_cubic_lm <- apple_ts |>
  model(apple_cubic = TSLM(log(revenue_billions) ~ std_t + I(std_t^2) + I(std_t^3) + qtr))

apple_cubic_lm |>
  tidy() |>
  mutate(sig = p.value < 0.05)
Table 2: Comparison of the AIC, AICc, and BIC values for the models fitted to the logarithm of Apple's quarterly revenue.
| Model        | AIC       | AICc      | BIC       |
|--------------|-----------|-----------|-----------|
| apple_cubic  | -84.0     | -76.8     | -73.1     |
| apple_quad   | -82.5     | -77.2     | -73.0     |
| apple_linear | **-84.4** | **-80.6** | **-76.2** |
The apple_linear model has the lowest AIC, AICc, and BIC, so we will apply the apple_linear model.
Using the Residuals to Determine the Appropriate Correction
The residuals of this model are illustrated in Figure 9.
Show the code
# model_combined is a mable holding the apple_cubic, apple_quad,
# and apple_linear fits compared in Table 2
apple_resid_df <- model_combined |>
  dplyr::select(apple_linear) |>
  residuals() |>
  as_tibble() |>
  dplyr::select(.resid) |>
  rename(x = .resid)

apple_resid_df |>
  mutate(density = dnorm(x, mean(apple_resid_df$x), sd(apple_resid_df$x))) |>
  ggplot(aes(x = x)) +
  geom_histogram(aes(y = after_stat(density)),
                 color = "white", fill = "#56B4E9", binwidth = 0.05) +
  geom_line(aes(x = x, y = density)) +
  theme_bw() +
  labs(
    x = "Values",
    y = "Frequency",
    title = "Histogram of Residuals from the Reduced Model with a Linear Trend"
  ) +
  theme(plot.title = element_text(hjust = 0.5))
Figure 9: Histogram of the residuals from the reduced model with a linear trend component
Using the command skewness(apple_resid_df$x), we compute the skewness of these residuals as -0.711. This number is not close to zero (it is between -1 and -0.5), indicating moderate skewness. We therefore apply the empirical correction to our mean forecast values.
Applying the Correction Factor
Table 3 summarizes some of the corrected mean values. Note that in this particular case, the corrected values are very close to the uncorrected values.
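The plotting code below uses a data frame `apple_pred` that holds the corrected means. Since its construction is not shown, here is a sketch of how it could be built from the pieces defined above (this reconstruction is an assumption):

```r
# Empirical correction factor from the residuals of the linear model
empirical_adj <- mean(exp(apple_resid_df$x))

# Fitted values on the original scale, adjusted by the correction factor
apple_pred <- model_combined |>
  dplyr::select(apple_linear) |>
  forecast(apple_ts) |>
  as_tibble() |>
  mutate(.mean_correction = .mean * empirical_adj)
```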
apple_ts |>
  autoplot(.vars = revenue_billions) +
  geom_line(data = apple_pred, aes(x = dates, y = .mean_correction), color = "#56B4E9") +
  labs(
    x = "Quarter",
    y = "Revenue (Billions USD)",
    title = "Apple Revenue in Billions of U.S. Dollars"
  ) +
  theme_minimal() +
  theme(plot.title = element_text(hjust = 0.5))
Figure 10: Apple Inc.’s quarterly revenue in billions of U.S. dollars through first quarter of 2012 (in black) and the fitted regression model (in blue)
This time series was used as an example. We are obviously not interested in forecasting future values using this model. However, this is an excellent example of real-world exponential growth in a time series with a seasonal component. Limiting factors prevent exponential growth from being sustainable in the long run. After 2012, the Apple quarterly revenues follow a different, but very impressive, model. This is illustrated in Figure 11.
Figure 12 gives the total sales (in millions of U.S. dollars) for the category "all other general merchandise stores (45299)," beginning with January 1998.
Show the code
# Read in retail sales data for "all other general merchandise stores"
retail_ts <- rio::import("https://byuistats.github.io/timeseries/data/retail_by_business_type.parquet") |>
  filter(naics == 45299) |>
  mutate(
    t = 1:n(),
    month = as.factor((t - 1) %% 12 + 1),
    month = factor(month, levels = 1:12, labels = month.abb)
  ) |>
  mutate(std_t = (t - mean(t)) / sd(t)) |>
  as_tsibble(index = t)
Figure 12 shows the “All other general merchandise” retail sales data.
Show the code
plot_raw <- retail_ts |>
  autoplot(.vars = sales_millions) +
  labs(
    x = "Time",
    y = "sales_millions",
    title = "Other General Merchandise Sales"
  ) +
  theme_minimal() +
  theme(plot.title = element_text(hjust = 0.5))

plot_log <- retail_ts |>
  autoplot(.vars = log(sales_millions)) +
  labs(
    x = "Time",
    y = "log(sales_millions)",
    title = "Logarithm of Other Gen. Merch. Sales"
  ) +
  theme_minimal() +
  theme(plot.title = element_text(hjust = 0.5))

plot_raw | plot_log
Figure 12: Time plot of the time series (left) and the natural logarithm of the time series (right)
Cubic Trend
Show the code
# Cubic model with standardized time variable and monthly indicators
cubic_lm <- retail_ts |>
  model(cubic = TSLM(log(sales_millions) ~ std_t + I(std_t^2) + I(std_t^3) + month))

cubic_lm |>
  tidy() |>
  select(!`.model`) |>
  mutate(sig = p.value < 0.05)
We will now compare the models we fitted above. The table below gives the AIC, AICc, and BIC of the models fitted above. In addition, other models with a reduced number of Fourier terms are included. For example, the model labeled reduced_quadratic_fourier_i5 includes linear and quadratic trend terms but also the Fourier terms where \(i \le 5\).
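In `TSLM()`, Fourier seasonal terms can be generated with the `fourier()` special instead of monthly indicators. A sketch of how two of the compared models could be specified (model names are illustrative; the monthly period is passed explicitly because `retail_ts` is indexed by `t`):

```r
retail_models <- retail_ts |>
  model(
    full_cubic = TSLM(log(sales_millions) ~ std_t + I(std_t^2) + I(std_t^3) + month),
    reduced_quadratic_fourier_i5 =
      TSLM(log(sales_millions) ~ std_t + I(std_t^2) + fourier(period = 12, K = 5))
  )

# Compare the information criteria for the candidate models
retail_models |>
  glance() |>
  dplyr::select(.model, AIC, AICc, BIC) |>
  arrange(AICc)
```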
Figure 14: PACF of the residuals from the full model with a quadratic trend
Applying Generalized Least Squares (GLS)
Recall that in Chapter 5, Lesson 1, we fitted a linear regression model using the value of the partial autocorrelation function for \(k=1\). This helps account for the autocorrelation in the residuals.
We will use the PACF when \(k=1\) to apply the GLS algorithm. The first few partial autocorrelation values are:
Show the code
pacf(cubic_ts$.resid, plot = FALSE, lag.max = 10)
Partial autocorrelations of series 'cubic_ts$.resid', by lag
1 2 3 4 5 6 7 8 9 10
0.413 0.144 0.042 0.091 -0.035 0.028 -0.017 0.085 -0.093 0.063
The partial autocorrelation when \(k=1\) is approximately 0.413. We will use this value as we recompute the regression coefficients.
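As with the simulated series earlier, one way to recompute the coefficients is a GLS fit with an AR(1) correlation structure; a minimal sketch with `nlme::gls()`, under the same assumptions as before:

```r
library(nlme)

# Refit the cubic model, modeling the random component as AR(1)
# with correlation parameter 0.413 (the lag-1 PACF found above)
retail_gls <- gls(
  log(sales_millions) ~ std_t + I(std_t^2) + I(std_t^3) + month,
  data = as_tibble(retail_ts),
  correlation = corAR1(0.413, form = ~ t),
  method = "ML"
)

summary(retail_gls)$tTable
```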
Figure 15 illustrates the original time series (in black) and the fitted model (in blue). A dotted line showing the simple least squares fit is included for reference; it helps highlight the exponential shape of the trend.
Show the code
forecast_df <- cubic_lm |>
  # forecast() computes the anti-log of the predicted values and returns them as .mean
  forecast(retail_ts) |>
  as_tibble() |>
  dplyr::select(std_t, .mean) |>
  rename(pred = .mean)

retail_ts |>
  left_join(forecast_df, by = "std_t") |>
  as_tsibble(index = t) |>
  autoplot(.vars = sales_millions) +
  geom_line(aes(y = pred), color = "#56B4E9", alpha = 0.75) +
  labs(
    x = "Time",
    y = "Sales (Millions of U.S. Dollars)",
    title = "Time Plot of Retail Sales with a Cubic Trend",
    subtitle = "Predicted values based on the full cubic model are given in blue"
  ) +
  theme_minimal() +
  theme(
    plot.title = element_text(hjust = 0.5),
    plot.subtitle = element_text(hjust = 0.5)
  )
Figure 15: Time plot of the retail sales time series (in black) and the fitted cubic model (in blue)
Show the code
retail_resid_df <- cubic_lm |>
  residuals() |>
  as_tibble() |>
  dplyr::select(.resid) |>
  rename(x = .resid)

retail_resid_df |>
  mutate(density = dnorm(x, mean(retail_resid_df$x), sd(retail_resid_df$x))) |>
  ggplot(aes(x = x)) +
  geom_histogram(aes(y = after_stat(density)),
                 color = "white", fill = "#56B4E9", binwidth = 0.02) +
  geom_line(aes(x = x, y = density)) +
  theme_bw() +
  labs(
    x = "Values",
    y = "Frequency",
    title = "Histogram of Residuals from the Full Cubic Model"
  ) +
  theme(plot.title = element_text(hjust = 0.5))
Show the code
skewness(retail_resid_df$x)
[1] 0.1706905

The skewness is approximately 0.171, which lies between -0.5 and 0.5. The residual series is therefore roughly symmetric, and the log-normal correction is the appropriate choice for these data.
Small-Group Activity: Industrial Electricity Consumption in Texas (Solution)
Check Your Understanding
Use the Texas industrial electricity consumption data to do the following.
Select an appropriate fitted model using the AIC, AICc, or BIC criteria.
Use the residuals to determine the appropriate correction for the data.
Forecast the data for the next 5 years.
Apply the appropriate correction to the forecasted values.
Plot the fitted (forecasted) values along with the time series.
Show the code
# Cubic model with standardized time variable and monthly indicators
elec_cubic_lm <- elec_ts |>
  model(elec_cubic = TSLM(log(megawatthours) ~ std_t + I(std_t^2) + I(std_t^3) + month))

elec_cubic_lm |>
  tidy() |>
  mutate(sig = p.value < 0.05)