🔒 This topic is solved and locked.
aranganayagi
Obsidian | Level 7

Hi, I am very new to SAS/STAT and running logistic regression.

I am looking for answers to two questions:

1. I am getting the warning "Model convergence status: Quasi-complete separation of data points detected." What is the implication of this warning, and how can I solve it?

2. The c-statistic of the validation data set is larger than the c-statistic of the training set. Is this possible? My expectation is that the training set should perform better than the validation set.

I have attached the report for your reference.

Could you please help answer these two questions?

ACCEPTED SOLUTION
Ksharp
Super User

No. A goodness-of-fit (GOF) test can only tell you whether the model fits the sample data (the training data set) well or not.

If you have good GOF statistics, that usually hints the model is NOT overfit and NOT underfit.

If you have SAS 9.4M6, you could try:

proc logistic ....
model ........ / GOF;
run;

If not, try:

model ........ / LACKFIT;

Another GOF check is whether the model is overdispersed:

model ........ / SCALE=NONE AGGREGATE;

Search the PROC LOGISTIC documentation or Rick's blog and you will find it.
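Filled in with hypothetical names (a data set train, response y, and predictors x1 and x2 — none of these come from the original post), the suggestions above might look like this sketch:

```sas
/* Hosmer-Lemeshow lack-of-fit test (available in all recent releases) */
proc logistic data=train;
   model y(event='1') = x1 x2 / lackfit;
run;

/* Overdispersion check: SCALE=NONE AGGREGATE reports the Pearson and
   deviance chi-square statistics without rescaling, so you can compare
   them to their degrees of freedom */
proc logistic data=train;
   model y(event='1') = x1 x2 / scale=none aggregate;
run;
```

A Pearson chi-square much larger than its degrees of freedom suggests overdispersion or lack of fit.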


7 REPLIES
PaigeMiller
Diamond | Level 26

@aranganayagi wrote:

Hi, I am very new to SAS/STAT and running logistic regression.

I am looking for answers to two questions:

1. I am getting the warning "Model convergence status: Quasi-complete separation of data points detected." What is the implication of this warning, and how can I solve it?


https://stats.idre.ucla.edu/other/mult-pkg/faq/general/faqwhat-is-complete-or-quasi-complete-separat...


2. The c-statistic of the validation data set is larger than the c-statistic of the training set. Is this possible? My expectation is that the training set should perform better than the validation set.

 

Yes, it is possible. If the difference between training and validation performance is just random noise, there is no reason the training set has to perform better; by chance, the model might fit the validation data better.

 

--
Paige Miller
Ksharp
Super User

Question 1:

You have sparse data for a categorical variable.

Example:

Y   RACE
1   white
1   white
0   white
1   black
1   black

You can see that white has both 1 and 0, but black has only 1.

You could remove this kind of variable.
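To make the diagnosis and a remedy concrete, here is a sketch using the hypothetical Y and RACE variables from the example above, in a data set I'll call train. (The FIRTH option, Firth's penalized likelihood, is an alternative to dropping the variable; it is not mentioned in the original reply.)

```sas
/* Diagnose: a zero cell for Y within some level of RACE
   signals possible quasi-complete separation */
proc freq data=train;
   tables race*y / norow nocol nopercent;
run;

/* One remedy: Firth's penalized maximum likelihood,
   which yields finite estimates under separation */
proc logistic data=train;
   class race / param=ref;
   model y(event='1') = race / firth;
run;
```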

 

 

Question 2:

Yes. Anything is possible.

Since your training and validation data are random samples, anything can happen, especially when the validation data set is smaller than the training set. (Smaller data sets tend to get higher c-statistics.)
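One way to see both c-statistics side by side is to score the validation data from the fitted model. A sketch, assuming hypothetical data sets train and valid with response y and predictors x1, x2; the SCORE statement's FITSTAT option reports fit statistics, including the AUC (c-statistic), for the scored data:

```sas
/* Fit on the training data; the c-statistic for train appears in the
   "Association of Predicted Probabilities" table */
proc logistic data=train;
   model y(event='1') = x1 x2;
   /* FITSTAT prints fit statistics (including AUC) for the scored data */
   score data=valid out=valid_scored fitstat;
run;
```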

aranganayagi
Obsidian | Level 7
Thanks Paige Miller and Ksharp for the replies. They are very helpful.

I have two more questions.
1. Based on the ROC curve and c-statistics of the training and validation sets, can we determine whether the model is performing better?

2. Is it necessary that the model converge? (I mean, should we fix the quasi-complete separation warning?) If we don't fix it, what would be the implication?
PaigeMiller
Diamond | Level 26

@aranganayagi wrote:
Thanks Paige Miller and Ksharp for the replies. They are very helpful.

I have two more questions.
1. Based on the ROC curve and c-statistics of the training and validation sets, can we determine whether the model is performing better?


Better than what?

2. Is it necessary that the model converge? (I mean, should we fix the quasi-complete separation warning?) If we don't fix it, what would be the implication?


The link I provided explains what to do in the presence of quasi-complete separation.

--
Paige Miller
Ksharp
Super User

1. Based on the ROC curve and c-statistics of the training and validation sets, can we determine whether the model is performing better?


I would not trust the ROC curve or the c-statistic; I prefer goodness-of-fit statistics like the Hosmer-Lemeshow test. @Rick_SAS has written several blog posts about it.


2. Is it necessary that the model converge? (I mean, should we fix the quasi-complete separation warning?) If we don't fix it, what would be the implication?


Yes, I think so. If the model does not converge, the output cannot be trusted.

Or @Rick_SAS might have something to say.

aranganayagi
Obsidian | Level 7

Thanks @Ksharp for the reply.

Can we tell whether the model is overfitting or underfitting from a goodness-of-fit statistic like the H-L test? I went through the materials but couldn't figure it out. Could you please help me?



