RichardHorgen
Fluorite | Level 6

How can I increase the precision of the Gini calculation when running model performance monitoring in SAS Model Manager on SAS Viya?

 

I see that, due to how the Gini calculation is implemented in SAS Model Manager, it can deviate quite a bit from the true Gini of a model as calculated via PROC LOGISTIC. I work with probability of default (PD) models, and these tend to have very narrow ranges for the predicted PDs. This causes the model monitoring to underestimate the performance of the model, presumably because the calculation uses too little granularity when setting the cutoffs to be evaluated.

2 REPLIES
SophiaRowland
SAS Super FREQ

Hello @RichardHorgen! Are you looking to expand the number of digits shown for Gini in the performance monitoring report? By default, four decimal places are shown in the performance monitoring report, but you can open the data underlying the report in SAS Visual Analytics via the options menu on the chart.

[screenshot: opening the chart's options menu in the performance monitoring report]

 


Within SAS Visual Analytics, you can create a list table containing the Gini values that shows more digits. 

[screenshot: list table of Gini values in SAS Visual Analytics]

 

You can also opt to view the data table directly from SAS Visual Analytics:

[screenshot: viewing the data table in SAS Visual Analytics]

And scroll over to view the complete Gini column: 

[screenshot: the complete Gini column in the data table]


Alternatively, if you are looking to change how Gini is calculated and are comfortable with SAS programming, you can create your own Key Performance Indicator on SAS Viya 4. I did this a few years ago (so I am a bit rusty), but I wrote about the experience in this article: https://communities.sas.com/t5/SAS-Communities-Library/Creating-Custom-Fraud-Monitoring-KPIs-in-SAS-... If you go this route, I also recommend reviewing the latest documentation on creating KPIs.

 

RichardHorgen
Fluorite | Level 6

Hi

The issue I have with the Gini/AUC calculation is that it is based on (and dependent upon) the choices made for where the cutoffs are placed. I have been able to reproduce the results from the Model Manager procedure by using PROC ASSESS with these settings:

proc assess data=PM.ASSESS nbins=10 ncuts=100;
   target Target_Bad / event="1" level=nominal;
   input Em_Eventprobability;
   /* Fit statistics cannot be generated because no posterior
      probability variables for non-event levels are detected.
      Assign posterior probability variables on the OPTIONS tab. */
   ods output ROCInfo=WORK._roc_temp LIFTInfo=WORK._lift_temp;
run;

Now, for most models this probably works fine. However, since the Gini/AUC calculation depends on 100 cutoff values distributed evenly across the 0-1 range (one at each percentage point), the first cutoff is made at 1% (em_eventprobability < 0.01). For a PD model, this means that the first point of inspection falls at risk grade 4 or 5, depending on the rating scale being used. It also means that the model's discriminatory power in the lower risk grades is not evaluated, and that a large portion of the portfolio ends up below the first cutoff.
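As a rough illustration of this effect (not SAS, just a small Python sketch with invented score ranges): when all predicted PDs sit below or near the 1% mark, almost none of the evenly spaced cutoffs intersect the score distribution, so a trapezoidal AUC built only from that grid comes out far below the exact rank-based AUC.

```python
import random

random.seed(0)

# Hypothetical low-default-probability model: all predicted PDs are
# squeezed near the bottom of the 0-1 range, with defaults scoring higher.
events     = [0.004 + 0.008 * random.random() for _ in range(200)]    # defaults
non_events = [0.008 * random.random() for _ in range(2000)]           # non-defaults

def exact_auc(pos, neg):
    """Exact AUC via the Mann-Whitney rank statistic (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def binned_auc(pos, neg, cuts):
    """AUC from a ROC curve evaluated only at a fixed grid of cutoffs
    (trapezoidal rule), mimicking an assessment with preset thresholds."""
    pts = sorted({(sum(n >= c for n in neg) / len(neg),   # FPR at cutoff c
                   sum(p >= c for p in pos) / len(pos))   # TPR at cutoff c
                  for c in cuts} | {(0.0, 0.0), (1.0, 1.0)})
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

grid = [i / 100 for i in range(1, 100)]   # cutoffs at every percentage point
print("exact Gini :", 2 * exact_auc(events, non_events) - 1)
print("grid Gini  :", 2 * binned_auc(events, non_events, grid) - 1)
```

In this sketch the grid-based Gini lands well below the exact value, because the only informative cutoff is the one at 1% and nearly the whole portfolio falls below it.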

 

The workaround I will probably use is to recalibrate em_eventprobability to a range better suited to the assessment procedure, but it would have been nice if the procedure had instead been implemented using deciles, or something else that adapts to low-default-probability models.
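For what it's worth, a minimal sketch of that kind of recalibration (plain Python; `rescale` is a hypothetical helper name): any strictly increasing transform of the score preserves its rank order, and therefore the model's true Gini/AUC, while spreading the scores across a range that a fixed cutoff grid can actually resolve. A simple min-max rescaling is one option:

```python
def rescale(scores, lo=0.0, hi=1.0):
    """Hypothetical helper: monotone min-max rescaling onto [lo, hi].

    Rank order is preserved, so the true Gini/AUC is unchanged; only the
    placement of the scores relative to a fixed cutoff grid moves.
    """
    mn, mx = min(scores), max(scores)
    return [lo + (hi - lo) * (s - mn) / (mx - mn) for s in scores]

# A narrow PD range spread across 0-1:
print(rescale([0.0, 0.004, 0.008]))   # [0.0, 0.5, 1.0]
```

More sophisticated monotone maps (rank or quantile transforms) would work just as well, since Gini only depends on the ordering of the scores.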

 

Thanks for the link to your custom-KPI post. I'll be sure to look into it to see whether it can also be used as a workaround.