Posted 06-09-2010 02:03 PM
Hi,
I have more than 100 stores, and there is a lot of store-level and area demographic/economic information. I need to forecast sales for each store for the next year, and I only have SAS 9 (so no PROC PANEL).
I've been reading about PROC MIXED and PROC TSCSREG. Can anyone guide me on which technique/procedure would be more appropriate, and why? Is there a case study I can look up?
Thanks
8 REPLIES
Hello, I am a forecasting novice, but my first thought would be PROC AUTOREG. Have you tried it? It can only account for limited types of autocorrelation; PROC ARIMA is more powerful but requires a lot of interaction (time and know-how). PROC AUTOREG can incorporate any explanatory variables you choose to include in the model, and its automatic nature might be appealing since you need so many forecasts.
I don't see an application for PROC MIXED, and I'm not familiar with PROC TSCSREG.
Good luck!
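A minimal sketch of the PROC AUTOREG approach suggested above, fitted per store with a BY statement (the variable names price and promo are placeholders, not from the original post):

```
/* Sort so BY-group processing fits one model per store */
proc sort data=sales;
   by store date;
run;

/* Regression on explanatory variables with autoregressive
   errors; BACKSTEP prunes insignificant AR lags automatically */
proc autoreg data=sales;
   by store;
   model sales = price promo / nlag=12 backstep;
   output out=pred p=predicted;
run;
```

With BY-group processing, a separate model is estimated for each store in a single run, which addresses the "so many forecasts" concern.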
Hello -
For these kinds of large-scale forecasting exercises I would usually recommend looking into SAS Forecast Server, which can automate such tasks quite nicely.
Otherwise, you will need to build forecasting models for each store manually; of course, SAS provides procedures for this kind of task as well.
It is usually good practice to start with simpler models such as exponential smoothing (available in the FORECAST or ESM procedure) and then turn to more advanced models such as ARIMA or UCM (see the respective procedures).
Thanks,
Udo
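A hedged sketch of the exponential-smoothing starting point mentioned above, assuming monthly data with DATE, STORE, and SALES variables:

```
/* One Winters (seasonal) exponential smoothing model per store,
   forecasting 12 months ahead */
proc esm data=sales out=fcst lead=12;
   by store;
   id date interval=month;
   forecast sales / model=winters;
run;
```

MODEL=WINTERS fits a seasonal smoothing model; for series without seasonality, MODEL=LINEAR or MODEL=SIMPLE would be the simpler starting point.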
Hi,
I had the same situation, and I used SAS High-Performance Forecasting to solve the problem.
I don't have SAS Forecast Server, so that rules out a lot of options.
Can PROC ARIMA and PROC AUTOREG be used to model each store separately, all in one run?
One of the main reasons I am using PROC MIXED is that it accounts for differences across stores and gives me a different model/equation for each store in a single run.
Is there anything in SAS that will do the same thing? Also, is there a correct or most appropriate method for selecting which variables to treat as random effects?
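For reference, a sketch of the kind of PROC MIXED specification described here, where the RANDOM statement with SUBJECT=STORE yields store-specific deviations from the overall equation (lag12_sales and demo1 are placeholder variable names):

```
proc mixed data=sales;
   class store;
   /* Fixed effects shared by all stores */
   model sales = lag12_sales demo1 / solution outp=pred;
   /* Store-specific deviations: random intercept and random
      slope on the lagged-sales term; SOLUTION prints the
      predicted random effects for each store */
   random intercept lag12_sales / subject=store solution;
run;
```

OUTP= writes predictions that include the random effects, so each store effectively gets its own equation from a single run.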
Hello -
When you use procedures like ARIMA or AUTOREG, each series is modeled "independently", i.e. there are no cross-effects between stores. If you want such cross-effects, the VARMAX procedure might be of interest. However, in my experience it is usually good practice to start with simpler approaches first. If I understand your problem correctly, it is fairly large-scale, so multivariate approaches might be computationally expensive.
Thanks,
Udo
Thanks, Udo.
I am still working on PROC MIXED and have gotten reasonably good results (average MAPE of 3 to 4%).
The fixed-effects parameter estimates are quite stable across the training and validation datasets (always staying within the confidence limits), but the random-effect estimates (I have used the intercept and same-month, last-year sales as random effects) are not stable at all. Whenever I run PROC MIXED on different samples, the random estimates change a lot for the same store. Can you please explain this?
regards,
datalligence
Hello -
I have discussed your challenge with some of my colleagues.
Here are some thoughts:
It is difficult to make proper statements without seeing the data; it may simply be too sparse. In general, however, PROC MIXED does not handle non-stationary series very well. You could try de-trending the series using PROC ESM in SAS/ETS.
Even if the unstable random effects are significantly different from 0, you might consider ignoring those with a high standard error, as they are typically too unstable.
Regards,
Udo
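One hedged way to carry out the de-trending step suggested above with PROC ESM, assuming monthly data with DATE, STORE, and SALES variables (LEAD=0 fits the model without producing forecasts, and the ERROR column of the OUTFOR= dataset holds the residuals):

```
/* Fit a trend model per store; the residuals in the OUTFOR=
   dataset serve as the de-trended series */
proc esm data=sales outfor=detrended lead=0;
   by store;
   id date interval=month;
   /* Linear (Holt) smoothing captures level and trend */
   forecast sales / model=linear;
run;
```

The de-trended series could then be used as the dependent variable in the PROC MIXED model in place of raw sales.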
I am using seasonally adjusted sales as my dependent variable.
For each store, I have about 4 years of data. The accuracy looks good on the training and validation datasets, but my concern is that the random-effect estimates are unstable.
When you talk about ignoring random effects with high standard errors, how should I do that? I have lag12(sales) and the intercept as random effects. For some stores these estimates are quite stable; for others they are way off. Are you talking about dropping the variables from the model entirely, or only for certain stores? Should I ignore or use the random-effect estimates when deriving predicted sales for stores where the estimate is insignificant or has a high standard error?
Thanks,
Datalligence