Hi,
My question is about selecting the best set of models when there are hundreds or thousands of time series to forecast. Should one simply rely on the automatically generated forecasts and save time, or should each individual model be carefully investigated and customized to achieve better and more reliable results? The latter would require a huge amount of time (practically infeasible) but would yield higher solution quality, while the former is the reverse. What trade-off approach saves time without compromising quality? Any tips?
Regards
Hello -
Typically I propose a "forecasting by exception" strategy to users of large-scale automatic forecasting:
automate as much as possible, come up with a smart way to detect exceptional cases (for example, rules on reasonable accuracy levels) or series that are of major importance to your business, and deal with those interactively.
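To make that idea concrete, here is a minimal Python sketch of such an exception rule: it computes a holdout MAPE per series and flags for interactive review any series that exceeds an accuracy threshold or appears on a business-critical list. The column names, threshold, and key_series list are illustrative assumptions, not part of any SAS Forecast Server API.

```python
import pandas as pd

def flag_exceptions(results: pd.DataFrame,
                    mape_threshold: float = 0.20,
                    key_series: frozenset = frozenset()) -> pd.DataFrame:
    """Return series that need interactive review.

    `results` is expected to have one row per (series_id, period) with
    columns 'series_id', 'actual', and 'forecast' (assumed names).
    """
    # Absolute percentage error per observation (guard against zero actuals).
    nonzero = results[results["actual"] != 0].copy()
    nonzero["ape"] = (nonzero["actual"] - nonzero["forecast"]).abs() / nonzero["actual"].abs()

    # Mean APE per series over the holdout window.
    mape = nonzero.groupby("series_id")["ape"].mean().rename("mape").reset_index()

    # Exception rule: poor accuracy OR business-critical series.
    mape["exception"] = (mape["mape"] > mape_threshold) | mape["series_id"].isin(key_series)
    return mape[mape["exception"]].sort_values("mape", ascending=False)

# Example usage with toy data:
if __name__ == "__main__":
    data = pd.DataFrame({
        "series_id": ["A", "A", "B", "B", "C", "C"],
        "actual":    [100, 110, 50, 55, 200, 210],
        "forecast":  [ 98, 112, 80, 30, 205, 212],
    })
    # Series B is flagged for poor accuracy, series C because it is business-critical.
    print(flag_exceptions(data, mape_threshold=0.20, key_series=frozenset({"C"})))
```

Everything that is not flagged stays on the automatic path; only the flagged series get analyst attention.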
In a way, a large-scale forecasting system should behave like an autopilot on a plane: it should handle "ordinary" cases well - but of course we are all more than happy that pilots are still in charge of the plane and can intervene to address "extraordinary" cases.
In fact, this is one of the design principles we have in mind as we build the next generation of SAS Forecast Server.
Please feel free to have a look at our ideas here: Analytics 2013 - Day 1 - SAS Presents - Udo Sglavo - YouTube.
Thanks,
Udo