Hi, My question is about selecting the best set of models when there are hundreds or thousands of time series to forecast. Should one simply rely on the automatically generated forecasts and save the time, or should each individual model be carefully investigated and then customized to achieve better and more reliable models? The latter would require investing a huge amount of time (practically infeasible), though the solution quality would be higher; with the former it is the reverse. What would be a good trade-off approach that saves time while still not compromising on quality? Any tips? Regards
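One common trade-off is a triage workflow: let automatic selection handle every series, then rank the series by holdout error and hand-review only the worst ones. This is not SAS Forecast Studio code, just a rough Python sketch of the idea on synthetic data; the candidate models, the 10% review cutoff, and all names here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100 * np.mean(np.abs((actual - forecast) / actual))

def naive_forecast(train, h):
    """Repeat the last observed value h times."""
    return np.full(h, train[-1])

def ses_forecast(train, h, alpha=0.3):
    """Simple exponential smoothing with a fixed alpha."""
    level = train[0]
    for y in train[1:]:
        level = alpha * y + (1 - alpha) * level
    return np.full(h, level)

# Synthetic portfolio: 100 positive-valued monthly series of length 60.
series = {f"s{i:03d}": 50 + rng.normal(0, 5, 60).cumsum().clip(min=1)
          for i in range(100)}

h = 12  # holdout length
results = []
for name, y in series.items():
    train, hold = y[:-h], y[-h:]
    candidates = {"naive": naive_forecast(train, h),
                  "ses": ses_forecast(train, h)}
    best_model, best_mape = min(
        ((m, mape(hold, f)) for m, f in candidates.items()),
        key=lambda t: t[1])
    results.append((name, best_model, best_mape))

# Triage: accept the automatic pick for most series,
# flag only the worst 10% for manual investigation.
results.sort(key=lambda t: t[2], reverse=True)
n_review = max(1, len(results) // 10)
to_review = results[:n_review]
print(f"{n_review} of {len(results)} series flagged for manual review")
```

The point is that analyst time is spent only where the automatic forecasts are demonstrably weak, instead of evenly across all series.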
Hi Udo, Yes, the winning models did have the x variables included. By validation data I meant the holdout sample. The new x values are significantly different but within the expected range, and look as normal as the rest of the values. The parameter estimates of the transfer function components take different combinations, sometimes with very small p-values and sometimes with very high ones. But do the p-values matter here? Thanks for your time
Hi Udo, Actually, I had some 20 time series to model and project for the next 6 months. Initially I kept 50 data points (Apr'08 to May'12) to train the model and the next 12 data points (Jun'12 to May'13) for validation, so I had 62 data points in the dataset for each of the 20 series being analyzed. After building the models I replaced the dataset with an updated one containing the next 6 future values of the x variables, with the response variable values set to missing. I chose the same option you mentioned, "Forecast: refresh the current forecast model, using the same parameter values". But I still obtained the same future forecast values as before updating the dataset with the new x values. I thought I was making some mistake. Could you please shed some light? Thanks once again.
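For intuition on what "refresh with the same parameter values" should do: if the fitted model really uses the x variables, then applying the frozen parameter estimates to different future x values must produce different forecasts; identical forecasts suggest the future x rows were not picked up. A minimal Python sketch with a synthetic one-driver regression (the coefficients and numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic history: 50 training months of one driver x and response y.
x_train = rng.uniform(10, 20, 50)
y_train = 3.0 + 1.5 * x_train + rng.normal(0, 0.5, 50)

# Estimate the parameters once on the training data (ordinary least squares).
X = np.column_stack([np.ones_like(x_train), x_train])
beta, *_ = np.linalg.lstsq(X, y_train, rcond=None)

def refresh_forecast(x_future, beta):
    """Apply the *frozen* parameter estimates to new driver values."""
    Xf = np.column_stack([np.ones_like(x_future), x_future])
    return Xf @ beta

# Two different sets of future x values must give different forecasts.
f1 = refresh_forecast(np.array([12.0, 14.0, 16.0]), beta)
f2 = refresh_forecast(np.array([18.0, 18.0, 18.0]), beta)
print(np.allclose(f1, f2))  # False: identical forecasts would mean x was ignored
```

So unchanged forecasts after updating the dataset usually point to a data-preparation issue (the future x rows not being read) rather than to the refresh option itself.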
Hi, I have built some ARIMA models based on the true drivers of the response variable. I want to generate forecasts of the response from new values of the predictors, using the same set of parameter estimates. Please help me understand how to achieve this and how to prepare the new dataset required. Thanks very much in advance. Regards
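The usual shape of such a scoring dataset is the history plus appended future rows in which the predictors are filled in and the response is left missing. A small pandas sketch of that layout (the column names, dates, and values are purely illustrative, not from any real dataset):

```python
import pandas as pd

# Historical data: response y observed, driver x1 known.
hist = pd.DataFrame({
    "date": pd.date_range("2012-01-01", periods=6, freq="MS"),
    "y":  [100, 104, 103, 108, 110, 112],
    "x1": [10, 11, 11, 12, 12, 13],
})

# Future rows: driver values supplied, response left missing (NaN).
future = pd.DataFrame({
    "date": pd.date_range("2012-07-01", periods=3, freq="MS"),
    "y":  [None, None, None],
    "x1": [13, 14, 14],
})

# One dataset: the missing y values mark the periods to be forecast.
scoring = pd.concat([hist, future], ignore_index=True)
print(scoring.tail(3))
```

The missing response values are what tell the forecasting engine which periods to score with the existing parameter estimates.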
First of all, thanks very much Udo and ets_kps for replying to my doubt. To elaborate a little more, I am adding a screenshot of a similar situation. It is an example of univariate time series analysis, keeping 20% of the data as holdout. The best model selected by automatic forecasting is an ESM, with both the Level and Trend estimates coming out with insignificant p-values. Can we still consider this the best model and use it for prediction? Do these p-values have anything to do with the quality or reliability of the forecasts? Similar cases occur when input variables are tested to enhance the model and to see the influence of the added variable on the model's predictive ability: the AR terms of the best (ARIMA) model chosen by Forecast Studio in such cases appear with insignificant p-values. Is it a matter of sample size (in training/holdout), so that we need to try a different (bigger) sample to train the model? Or can we still use the best model since it has the lowest MAPE and hence the highest forecast accuracy? Thanks very much in advance. Regards DrSharma
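On judging models by out-of-sample accuracy rather than parameter p-values: a single holdout can be a small and noisy sample, so one way to get a sturdier comparison is a rolling-origin evaluation, averaging MAPE over several forecast origins. A self-contained Python sketch with hand-rolled SES and Holt smoothers on a synthetic trending series (smoothing weights and series are invented for illustration, not Forecast Studio's estimates):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic trending monthly series, 80 points.
y = 100 + 1.0 * np.arange(80) + rng.normal(0, 2, 80)

def ses(train, h, alpha=0.3):
    """Simple exponential smoothing: flat forecasts at the last level."""
    level = train[0]
    for v in train[1:]:
        level = alpha * v + (1 - alpha) * level
    return np.full(h, level)

def holt(train, h, alpha=0.3, beta=0.1):
    """Holt's linear-trend exponential smoothing."""
    level, trend = train[0], train[1] - train[0]
    for v in train[1:]:
        prev = level
        level = alpha * v + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + trend * np.arange(1, h + 1)

def rolling_mape(y, model, h=6, n_origins=10):
    """Average MAPE over several forecast origins, not just one holdout."""
    errs = []
    for i in range(n_origins):
        cut = len(y) - h - i
        fc = model(y[:cut], h)
        actual = y[cut:cut + h]
        errs.append(100 * np.mean(np.abs((actual - fc) / actual)))
    return np.mean(errs)

print("SES  rolling MAPE:", round(rolling_mape(y, ses), 2))
print("Holt rolling MAPE:", round(rolling_mape(y, holt), 2))
```

On a trending series the trend model wins this comparison regardless of what in-sample significance tests say about its smoothing parameters, which is the sense in which holdout accuracy, not p-values, is the usual selection criterion for forecasting.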
Hi, Just a basic question on SAS Forecast Studio (SFS). If the best model selected by SFS (based on holdout MAPE) has one or more parameters that are not statistically significant, should that model be discarded in favour of the other models?
Hi Udo, I am also looking for good literature on SAS Forecast Studio. Could you please share a copy of that paper on Rolling Simulation with me (vikas10s@gmail.com) as well? Thanks very much in advance. Regards DrSharma