03-10-2014 08:04 AM
My question is about selecting the best set of models when forecasting hundreds or thousands of time series. Should one simply rely on the automatically generated forecasts and save time, or should each individual model be carefully investigated and customized to achieve a better and more reliable result? The latter would require a huge investment of time (practically infeasible) but yield higher solution quality, while the former is the reverse. What trade-off approach saves time without compromising quality? Any tips?
03-10-2014 05:23 PM
Typically I like to propose a "forecasting by exception" strategy to large-scale automatic forecasting users:
automate as much as possible, come up with a smart way to detect exceptional cases (for example, rules based on reasonable accuracy levels) or series which are of major importance to your business, and deal with those in an interactive manner.
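The exception-detection step above can be sketched in a few lines. This is a minimal illustration, not SAS Forecast Server functionality: the series names, MAPE values, importance scores, and thresholds are all hypothetical, and in practice the accuracy metric and cutoffs would come from your own forecasting process.

```python
def flag_exceptions(series_stats, mape_threshold=0.20, importance_threshold=0.9):
    """Return the series that need interactive review: those whose automatic
    forecast accuracy is poor (MAPE above a threshold) or which are
    business-critical (importance score at or above a threshold)."""
    exceptions = []
    for name, stats in series_stats.items():
        if stats["mape"] > mape_threshold or stats["importance"] >= importance_threshold:
            exceptions.append(name)
    return sorted(exceptions)

# Hypothetical per-series diagnostics from an automatic forecasting run.
stats = {
    "sku_001": {"mape": 0.08, "importance": 0.20},  # accurate, low value: automate
    "sku_002": {"mape": 0.35, "importance": 0.10},  # poor accuracy: review
    "sku_003": {"mape": 0.05, "importance": 0.95},  # business-critical: review
}

print(flag_exceptions(stats))  # ['sku_002', 'sku_003']
```

Everything that is not flagged stays on "autopilot"; the analyst's time is spent only on the short exception list.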
In a way, a large-scale forecasting system should behave like an autopilot on a plane: it should handle "ordinary" cases well - but of course we are all more than happy that pilots are still in charge of the plane and can intervene to address "extraordinary" cases.
In fact this is one of the design principles which we have in mind as we are building the next generation of SAS Forecast Server.
Please feel free to have a look at our ideas here: Analytics 2013 - Day 1 - SAS Presents - Udo Sglavo - YouTube.