Hi,
My question is about selecting the best set of models when there are hundreds or thousands of time series to forecast. Should one simply rely on the automatically generated forecasts and save time, or should each individual model be carefully investigated and customized to achieve better, more reliable forecasts? The latter would require a huge amount of time (practically infeasible), though the solution quality would be higher; with the former it is the reverse. What is a good trade-off approach that saves time without compromising quality? Any tips?
Regards
Hello -
Typically I like to propose a "forecasting by exception" strategy to users of large-scale automatic forecasting:
automate as much as possible, come up with a smart way to detect exceptional cases (for example, rules on acceptable accuracy levels) or series that are of major importance to your business, and deal with those in an interactive manner.
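For illustration, here is a minimal sketch (in Python, since the exact tooling will vary) of what such an exception filter might look like: compute a holdout accuracy metric per series and flag only the series that breach an accuracy threshold or carry high business value for manual review. The column names, the MAPE threshold, and the revenue cutoff are assumptions for the example, not prescriptions.

import pandas as pd

# Hypothetical input: one row per series, with a holdout accuracy metric
# already computed. Column names are assumptions for this sketch.
results = pd.DataFrame({
    "series_id":      ["A", "B", "C", "D"],
    "mape":           [0.08, 0.35, 0.12, 0.60],   # holdout mean absolute percentage error
    "annual_revenue": [5_000, 250_000, 12_000, 90_000],
})

# Exception rules (thresholds are illustrative; tune them to your business):
MAPE_THRESHOLD = 0.25        # accuracy worse than this triggers a manual review
REVENUE_THRESHOLD = 100_000  # high-value series always get a manual look

exceptions = results[
    (results["mape"] > MAPE_THRESHOLD)
    | (results["annual_revenue"] > REVENUE_THRESHOLD)
]

print("Series to review manually:")
print(exceptions[["series_id", "mape", "annual_revenue"]])
# Everything not flagged keeps its automatically generated forecast.

In practice such an exception report would be refreshed each forecasting cycle, so analysts only ever touch the small fraction of series the rules surface.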
In a way, a large-scale forecasting system should behave like an autopilot on a plane: it should handle "ordinary" cases well - but of course we are all more than happy that pilots are still in charge of the plane and can intervene to address "extraordinary" cases.
In fact, this is one of the design principles we have in mind as we build the next generation of SAS Forecast Server.
Please feel free to have a look at our ideas here: Analytics 2013 - Day 1 - SAS Presents - Udo Sglavo - YouTube.
Thanks,
Udo