Forecasting a Seasonal ARMA Process
Overview
Many economic and business variables are affected by seasonal factors. For example, power usage is highest in the months when temperatures are most extreme. The most common type of seasonality is variation due to the time of year, but other types of seasonality are also found in time series data.
Seasonal models are often multiplicative rather than additive. A multiplicative model includes the product of one or more nonseasonal parameters with one or more seasonal parameters. For example, a multiplicative model with both autoregressive and moving average terms (an ARMA model) and with yearly seasonality for a monthly time series, $y_t$, can be written as:

$$(1 - \phi_1 B)(1 - \Phi_1 B^{12})(y_t - \mu) = (1 - \theta_1 B)(1 - \Theta_1 B^{12})\epsilon_t$$

where
$\mu$ is the intercept parameter.
$\phi_1$ is the nonseasonal first-order autoregressive parameter.
$\Phi_1$ is the seasonal autoregressive parameter.
$\theta_1$ is the nonseasonal first-order moving average parameter.
$\Theta_1$ is the seasonal moving average parameter.
$B$ is the backshift operator ($B^k y_t = y_{t-k}$) and $\epsilon_t$ is a white noise error series.
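Multiplying out the seasonal and nonseasonal factors makes the implied lag structure explicit. For the moving average side, for example,

$$(1 - \theta_1 B)(1 - \Theta_1 B^{12})\epsilon_t = \epsilon_t - \theta_1 \epsilon_{t-1} - \Theta_1 \epsilon_{t-12} + \theta_1 \Theta_1 \epsilon_{t-13}$$

so a multiplicative factor at lags 1 and 12 also induces a term at lag 13 whose coefficient is the product $\theta_1 \Theta_1$; the autoregressive side expands the same way. This cross-product term is what distinguishes a multiplicative model from an additive one, which would stop at the lag-12 term.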
To identify a seasonal model, you need to examine the autocorrelation function (ACF) and the inverse autocorrelation function (IACF) plots. For multiplicative MA processes, there are small spikes in the ACF plot q lags before and after the seasonal lag, where q is the number of nonseasonal MA parameters necessary to model the data. These small spikes are usually in the opposite direction of the seasonal spike. For example, a multiplicative MA(1, 12) process typically has small spikes at lags 11 and 13 on either side of, and in the opposite direction of, a large spike at lag 12.
An additive MA process typically has small spikes q lags before the seasonal lag, where q is the number of nonseasonal MA parameters necessary to model the data. For example, an additive MA(1, 12) process typically has a small spike at lag 11 and a larger spike at lag 12.
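These ACF patterns follow directly from the autocovariances. Writing the multiplicative MA(1, 12) process as $y_t - \mu = \epsilon_t - \theta_1\epsilon_{t-1} - \Theta_1\epsilon_{t-12} + \theta_1\Theta_1\epsilon_{t-13}$ with $\mathrm{Var}(\epsilon_t) = \sigma^2$, the only nonzero autocovariances beyond lag 1 are

$$\gamma(11) = \theta_1\Theta_1\sigma^2, \qquad \gamma(12) = -\Theta_1(1+\theta_1^2)\sigma^2, \qquad \gamma(13) = \theta_1\Theta_1\sigma^2$$

so when $\theta_1 > 0$, the lag-11 and lag-13 spikes are equal and opposite in sign to the lag-12 spike. For the additive MA(1, 12) process $y_t - \mu = \epsilon_t - \theta_1\epsilon_{t-1} - \Theta_1\epsilon_{t-12}$, the corresponding values are $\gamma(11) = \theta_1\Theta_1\sigma^2$ and $\gamma(12) = -\Theta_1\sigma^2$, with $\gamma(13) = 0$, which is why the additive model shows a spike at lag 11 but not at lag 13.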
To identify an AR process, look for the patterns described previously in the IACF plot rather than in the ACF plot. If a process contains both AR and MA components, the patterns may appear in both the ACF and IACF plots.
This example develops an ARMA model for steel shipments from U.S. steel mills.
Analysis
The identification and estimation of Autoregressive Integrated Moving Average (ARIMA) models are more of an art than a science. Generally, the most parsimonious model that fits the data is considered the best. This example uses steel shipments data taken from Metal Statistics 1993. The values represent monthly totals of steel products shipped from U.S. steel mills, in thousands of net tons, for the period from January 1984 to December 1991. The following statements create the data set STEEL.
data steel;
   input date:monyy5. steelshp @@;
   format date monyy5.;
   title 'U.S. Steel Shipments Data';
   title2 '(thousands of net tons)';
datalines;
JAN84 5980 FEB84 6150 MAR84 7240 APR84 6472 MAY84 6948 JUN84 6686
JUL84 5820 AUG84 6033 SEP84 5454 OCT84 6087 NOV84 5317 DEC84 4867
... more data lines ...
;
The analysis performed by the ARIMA procedure is divided into three stages, corresponding to the stages described by Box and Jenkins (1976). The IDENTIFY, ESTIMATE, and FORECAST statements perform these three stages. In the identification stage, you use the IDENTIFY statement to specify the response series and identify candidate ARIMA models for it. The IDENTIFY statement reads time series that are to be used in later statements, possibly differencing them, and computes autocorrelations, inverse autocorrelations, partial autocorrelations, and cross correlations. The analysis of this output usually suggests one or more ARIMA models that could be fit. The VAR= option specifies the variable to be identified.
proc arima data=steel;
   i var=steelshp;
run;
[PROC ARIMA output: autocorrelation (ACF) plot of the STEELSHP series]
The large spike at lag 12 in the ACF plot provides evidence that the steel shipments time series has a seasonal autoregressive component. The lack of a large spike at lag 24 indicates that the series is stationary at the seasonal level.
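If, instead, the ACF had shown large, slowly decaying spikes at lags 12, 24, 36, and so on, the series would need a seasonal difference before modeling. A minimal sketch of how that would be requested (not needed for this series, since it is seasonally stationary):

proc arima data=steel;
   /* difference the series at lag 12 to remove a seasonal unit root */
   i var=steelshp(12);
run;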
[PROC ARIMA output: inverse autocorrelation (IACF) plot and autocorrelation check for white noise]
The spikes at lags 1 and 3 in the IACF plot indicate that additional components are needed to fit an adequate model, and the autocorrelation check for white noise resoundingly rejects the null hypothesis that the series is white noise.
In the estimation and diagnostic checking stage, you use the ESTIMATE statement to specify the ARIMA model to fit to the variable specified in the previous IDENTIFY statement and to estimate the parameters of that model. The ESTIMATE statement also produces diagnostic statistics to help you judge the adequacy of the model.
Significance tests for parameter estimates indicate whether some terms in the model may be unnecessary. Goodness-of-fit statistics aid in comparing this model to others. Tests for white noise residuals indicate whether the residual series contains additional information that might be used by a more complex model. If the diagnostic tests indicate problems with the model, you try another model, then repeat the estimation and diagnostic checking stage.
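One way to compare candidates is to fit an alternative specification against the same identified series and compare the goodness-of-fit statistics (such as AIC and SBC) that each ESTIMATE statement prints. The alternative model below is purely illustrative and is not part of the original example:

proc arima data=steel;
   i var=steelshp noprint;
   /* an illustrative additive MA candidate, for comparison only */
   e q=(1 12);
run;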
The following statement fits a seasonal ARMA model to the time series. In the syntax of the ESTIMATE statement, the two multiplicative AR terms, denoted by the P= option, are enclosed in separate parentheses. The two additive MA terms, denoted by the Q= option, are separated by a space within a single set of parentheses.
   e p=(2)(12) q=(1 3);
run;
[PROC ARIMA output: autocorrelation check of residuals]
The Autocorrelation Check of Residuals shows that none of the Q-statistics are statistically significant. This indicates that the model provides an adequate fit to the data.
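The Q statistics in this table are Ljung-Box portmanteau statistics computed from the residual autocorrelations $\hat r_k$; for a check through lag $m$ with $n$ residuals,

$$Q(m) = n(n+2)\sum_{k=1}^{m}\frac{\hat r_k^2}{n-k}$$

which is compared to a chi-square distribution with degrees of freedom equal to $m$ minus the number of estimated ARMA parameters. Large p-values, as here, are consistent with white noise residuals.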
[PROC ARIMA output: parameter estimates]
All of the estimated parameters have relatively large t-statistics, which indicates that these parameters cannot be omitted from the model.
In the forecasting stage, you use the FORECAST statement to forecast future values of the time series and to generate confidence intervals for these forecasts from the ARIMA model produced by the preceding ESTIMATE statement.
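Assuming normally distributed forecast errors, the confidence limits are computed from the forecast standard errors in the usual way: for the $h$-step-ahead forecast $\hat y_{t+h}$ with standard error $\sigma(h)$, the 95% limits are

$$\hat y_{t+h} \pm 1.96\,\sigma(h)$$

These quantities appear in the output data set as the variables FORECAST, STD, L95, and U95.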
The following statements produce forecasts and upper and lower 95% confidence limits for 12 future periods and create the output data set STEEL2.
   f lead=12
     out=steel2
     id=date
     interval=month
     noprint;
run;
To prepare the output data set for plotting, set the values of the forecasts and confidence limits to missing for all dates prior to the first future forecast period, so that only the out-of-sample forecasts are plotted.
data steel3;
   set steel2;
   if date lt '01jan92'd then do;
      forecast=.;
      l95=.;
      u95=.;
   end;
run;
Use the GPLOT procedure to plot the data.
proc gplot data=steel3;
   format date year4.;
   plot steelshp*date=1
        forecast*date=2
        l95*date=3
        u95*date=3 / overlay cframe=ligr
                     haxis=axis1 vaxis=axis2
                     vminor=1 href='01jan92'd;
   title 'U.S. Steel Shipments Data';
   title2 '(thousands of net tons)';
   axis1 offset=(1 cm)
         label=('Year') minor=none
         order=('01jan84'd to '01jan93'd by year);
   axis2 label=(angle=90 'Steel Shipments')
         order=(4500 to 8500 by 1000);
   symbol1 c=blue i=join l=1 v=star;
   symbol2 c=red i=join l=1 v=F;
   symbol3 c=green i=join l=20;
run;
quit;
The values of the original steel shipments time series are plotted with the star symbol. The forecasts are plotted with the F symbol, and the upper and lower 95% confidence limits for the forecasts are plotted with dashed lines.
Because the model fit to the steel shipments data includes a seasonal component, the forecasts do not follow a simple linear trend. Instead, the forecasts show variability due to the season (month of the year).
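If you also want the forecast values in tabular form, a simple listing of the future observations in STEEL2 works; this step is an optional addition to the example:

proc print data=steel2;
   /* list only the 12 future forecast periods */
   where date ge '01jan92'd;
   var date forecast std l95 u95;
   format date monyy5.;
run;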
References
Box, G.E.P. and Jenkins, G.M. (1976), Time Series Analysis: Forecasting and Control, San Francisco: Holden-Day.
Chilton Publications (1993), Metal Statistics 1993, New York: Chilton Publications.
Hamilton, J. (1994), Time Series Analysis, Princeton, NJ: Princeton University Press.
SAS Institute Inc. (1996), Forecasting Examples for Business and Economics Using the SAS System, Cary, NC: SAS Institute Inc.
SAS Institute Inc. (1993), SAS/ETS User's Guide, Version 6, Second Edition, Cary, NC: SAS Institute Inc.