
🔒 This topic is **solved** and **locked**.


Posted 10-31-2019 07:23 AM
(1187 views)

Hi, I am very new to SAS/STAT and to running logistic regression.

I am looking for answers to two questions:

1. I am getting the warning "Model convergence status: quasi-complete separation of data points detected." What are the implications of this warning, and how do I resolve it?

2. The c-statistic of the validation data set is larger than the c-statistic of the training set. Is this possible? My expectation is that the training set should perform better than the validation set.

I have attached the report for your reference.

Could you please help me answer these two questions?

1 ACCEPTED SOLUTION



No. GOF can only tell you whether the model fits the sample data (the training dataset) well or not.

If you have good GOF statistics, that usually hints that the model neither overfits nor lacks fit.

If you have SAS 9.4M6, you could try:

proc logistic ....

model ........ / GOF ;

run;

If not, try:

model ........ / LACKFIT ;

Another GOF check is whether the model is overdispersed:

model ............ / SCALE=NONE AGGREGATE ;

Search the PROC LOGISTIC documentation or Rick's blog and you will find it.
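For readers filling in the ellipses above, a complete sketch might look like the following. The data set name `train` and variables `y`, `x1`-`x3` are placeholders, not from the thread:

```sas
/* Hosmer-Lemeshow goodness-of-fit test via the LACKFIT option */
proc logistic data=train;
   model y(event='1') = x1 x2 x3 / lackfit;
run;

/* Overdispersion check: SCALE=NONE AGGREGATE reports deviance
   and Pearson chi-square statistics and their ratios to DF;
   ratios well above 1 suggest overdispersion or lack of fit */
proc logistic data=train;
   model y(event='1') = x1 x2 x3 / scale=none aggregate;
run;
```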

7 REPLIES


@aranganayagi wrote:

Hi, I am very new to SAS/STAT and to running logistic regression.

I am looking for answers to two questions:

1. I am getting the warning "Model convergence status: quasi-complete separation of data points detected." What are the implications of this warning, and how do I resolve it?

2. The c-statistic of the validation data set is larger than the c-statistic of the training set. Is this possible?

My expectation is that the training set should perform better than the validation set.

Yes, it is possible. If the difference between training and validation is just random noise, there is no reason the training set has to perform better; by chance, the model might fit the validation set better.

--

Paige Miller


Question 1:

You have sparse data for a categorical variable.

Example:

Y  RACE
1  white
1  white
0  white
1  black
1  black

You can see that white has both 1 and 0, but black has only 1.

You could remove this kind of variable.
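A quick way to see this behavior (a hypothetical sketch; the data set name and setup are made up to match the example above) is to run the data through PROC LOGISTIC, which should report quasi-complete separation in the convergence status:

```sas
/* RACE=white has both outcomes, but RACE=black has only Y=1,
   so the likelihood for the black effect has no finite maximum */
data sep_demo;
   input y race $ @@;
   datalines;
1 white 1 white 0 white 1 black 1 black
;
run;

proc logistic data=sep_demo;
   class race / param=ref ref='white';
   model y(event='1') = race;
run;
```

Besides removing the variable, common remedies include merging sparse categories or adding the FIRTH option (penalized maximum likelihood) to the MODEL statement.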

Question 2:

Yes. Anything is possible.

Since your training and validation data are random samples, anything can happen, especially when the validation data set is smaller than the training data set. (Smaller data sets tend to yield higher c-statistics.)


Thanks Paige Miller and Ksharp for the replies. They are very helpful.

I have two more questions:

1. Based on the ROC curves and c-statistics of the training and validation sets, can we determine that the model is performing better?

2. Is it necessary for the model to converge? (I mean, must we fix the quasi-complete separation warning?) If we don't fix it, what are the implications?



@aranganayagi wrote:

Thanks Paige Miller and Ksharp for the replies. They are very helpful.

I have two more questions:

1. Based on the ROC curves and c-statistics of the training and validation sets, can we determine that the model is performing better?

Better than what?

2. Is it necessary for the model to converge? (I mean, must we fix the quasi-complete separation warning?) If we don't fix it, what are the implications?

The link I provided explains what to do in the presence of quasi-complete separation.

--

Paige Miller


1. Based on the ROC curves and c-statistics of the training and validation sets, can we determine that the model is performing better?

I would not trust the ROC curve or the c-statistic; I prefer a goodness-of-fit statistic like the Hosmer-Lemeshow (H-L) test. @Rick_SAS has written several blogs about it.

2. Is it necessary for the model to converge? (I mean, must we fix the quasi-complete separation warning?) If we don't fix it, what are the implications?

Yes, I think so. If the model does not converge, the output cannot be trusted.

Or @Rick_SAS might have something to say.


Thanks @Ksharp for the reply.

Can we tell whether the model is overfitting or underfitting from a goodness-of-fit statistic like the H-L test? I went through the materials but couldn't figure it out. Could you please help me?


