jjjon
Calcite | Level 5

Hi,

I am currently running a number of projects with large amounts of forecasts in Forecast Studio. As I am rather new with this software, I use a predefined package as a base, filled mainly with ARIMA models.

However, I have noticed that some of these models are never accurate for my time series and instead make my work extremely time-consuming, as I have to remove them manually for each forecast.

It is mainly my log models that appear to have a good RMSE but give extreme results in the actual forecast numbers.

I have tried to remove the models from the projects, but as they were imported from a file at project creation, they have been set as "default" models and, it appears, cannot be modified.

Does anyone have a clue how to solve this? Is there a way to remove a predefined model from the entire project, all the way down the hierarchy?

Kr.

3 REPLIES
udo_sas
SAS Employee

Hi -

When you say "the models appear to have good RMSE but give extreme results", I suspect you are referring to the fit region.

Have you tried using a holdout sample and picking the champion model based on its performance on data that was not used to fit the models?
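
For what it's worth, below is a minimal batch-code sketch of holdout-based selection with PROC HPF (in Forecast Studio itself the holdout size is set in the project settings rather than in code); the data set and variable names (work.history, date, sales) are hypothetical:

   /* Sketch: pick the champion model on holdout performance rather than fit-region RMSE. */
   proc hpf data=work.history out=work.fcst lead=12;
      id date interval=month;                  /* monthly series assumed                   */
      forecast sales / holdout=6 select=rmse;  /* hold out the last 6 points and select by */
                                               /* RMSE in the holdout region               */
   run;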

Thanks,

Udo

jjjon
Calcite | Level 5

Hi Udo,

Thanks for your reply.

Yes, I am referring to the fit region; sorry for being unclear about that.

I have tried to use a holdout sample at the higher hierarchy levels, but in some cases the extreme behavior occurs at the end points of my forecast horizon.

Another concern is that I have a limited amount of data lower down in the hierarchy, so a holdout sample is not always feasible.

Do you know if there is a way to set a threshold that filters out models whose predictions exceed it?

udo_sas
SAS Employee

Hello -

I think you will have to deal with this type of threshold as a post-process, similar to what SAS Forecast Studio does when constraining forecasts to non-negative values: first create the models, then set the forecasts to 0 if the predictions are negative. The idea is to model "unconstrained" and apply the constraints afterwards. If you want to avoid "exploding" models, you may have to come up with your own model repository. Example: if your models feature a LOG transformation, the predictions might behave in an unexpected manner, as we have to back-transform the predictions.
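
As a rough sketch of such a post-process (all names here, i.e. work.outfor, predict, and the threshold value, are hypothetical placeholders for your own forecast output data set and business rules):

   /* Post-process: enforce non-negativity and cap/flag extreme predictions. */
   %let upper_cap = 100000;                      /* threshold from your own business rules */

   data work.outfor_capped;
      set work.outfor;
      flag_extreme = (predict > &upper_cap);     /* mark "exploding" forecasts */
      if predict < 0 then predict = 0;           /* non-negativity constraint  */
      if flag_extreme then predict = &upper_cap; /* cap at the threshold       */
   run;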

If your data is limited at the lower levels, you may decide to only create statistical models at higher aggregation levels and reconcile these predictions down to the level you need. This can be done using profiles, for example.
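
As a rough illustration of the top-down idea (again with hypothetical data set and variable names), the higher-level forecasts can be allocated to the children using their historical shares:

   /* Top-down allocation: split each parent forecast across its children    */
   /* using each child's historical share (the "profile"). work.profile has  */
   /* one row per child, with shares that sum to 1 within each parent.       */
   proc sql;
      create table work.child_fcst as
      select f.parent_id,
             p.child_id,
             f.date,
             f.predict * p.share as predict
      from work.parent_fcst as f
           inner join work.profile as p
           on f.parent_id = p.parent_id;
   quit;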

Thanks,

Udo

