Convergence criterion

12-22-2015 05:02 AM

May I request someone to explain the meaning of 'Convergence criterion (GCONV=1E-8) satisfied' in logistic regression in simple terms?

Posted in reply to Babloo

12-22-2015 06:36 AM

GCONV= is one of the convergence criteria that you can specify in the MODEL statement, and it is the default criterion when none is specified explicitly, with the default value **GCONV=1E-8**. This value acts as a threshold for terminating the iterative search: the optimization starts from an arbitrary initial vector of parameter estimates and repeatedly improves the estimates in small steps, and the iterations stop once the criterion falls below the threshold.

The **Model Convergence Status** section of your output reports whether the model's optimization converged and with what precision. It is also important for spotting problems such as complete separation or quasi-complete separation.
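
The iterative idea can be sketched in a few lines. This is a toy illustration in Python, not SAS's actual implementation (PROC LOGISTIC uses Fisher scoring by default), and it uses an absolute gradient test where SAS's GCONV= is a relative-gradient criterion; the data are made up for the example:

```python
import numpy as np

def fit_logistic(X, y, tol=1e-8, max_iter=50):
    """Newton-type fit of a logistic model.  Stops when the gradient of
    the log likelihood is (nearly) zero -- a GCONV-style criterion."""
    beta = np.zeros(X.shape[1])
    for it in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))       # predicted probabilities
        grad = X.T @ (y - p)                       # gradient of log likelihood
        if np.max(np.abs(grad)) < tol:             # "convergence criterion satisfied"
            return beta, it
        W = p * (1.0 - p)                          # weights for the information matrix
        info = X.T @ (X * W[:, None])              # negative Hessian of log likelihood
        beta = beta + np.linalg.solve(info, grad)  # Newton step toward the maximum
    return beta, max_iter

# tiny made-up data set: intercept column plus one predictor
X = np.column_stack([np.ones(8), np.arange(8.0)])
y = np.array([0, 0, 1, 0, 1, 0, 1, 1])
beta, iters = fit_logistic(X, y)
```

At the returned `beta`, the gradient of the log likelihood is below the threshold, which is exactly what the note "Convergence criterion (GCONV=1E-8) satisfied" is reporting.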

Posted in reply to mohamed_zaki

12-22-2015 06:44 AM

Thanks for the reply.

Since I have trouble understanding the technical details, may I request you to explain in layman's terms?

Posted in reply to Babloo

12-22-2015 07:26 AM

For one-dimensional functions, the derivative is zero when the function reaches a maximum value. The "gradient" is a generalization of a derivative for multivariable functions. The GCONV= criterion says that the optimization will stop when the "derivative" is very close to zero (smaller than 1E-8). When that occurs, the log likelihood function should be very close to its maximum value.
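
Here is a one-dimensional sketch of that stopping rule. The function and step size are made up for illustration; the point is only that the iteration stops when the derivative is nearly zero, which happens near the maximum:

```python
def f(x):       # a simple function with a single peak at x = 2
    return -(x - 2.0) ** 2 + 3.0

def fprime(x):  # its derivative: zero exactly at the peak
    return -2.0 * (x - 2.0)

x = 0.0                        # arbitrary starting point
while abs(fprime(x)) >= 1e-8:  # GCONV-style test: stop when derivative ~ 0
    x += 0.25 * fprime(x)      # small step uphill (gradient ascent)
```

When the loop exits, `x` is within about 5E-9 of the peak at 2, and `f(x)` is essentially at its maximum value of 3.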

Posted in reply to Rick_SAS

12-23-2015 06:57 AM

May I request you to explain the terms 'derivative' and 'optimization' from your comment?

Posted in reply to Babloo

12-23-2015 08:08 AM

"Optimization" means that you are trying to find the largest or smallest value of a function. In statistics, you try to find the "best" function that fits your data. Mathematically, this corresponds to an optimization. For example, a "line of best fit" is the line that minimizes the size of the residuals.

A "derivative" is the slope of a function. In calculus you learn how to compute the slope and you learn that the largest and smallest values often occur when the slope is zero. Think about the top of a hill or the bottom of a valley.

Posted in reply to Rick_SAS

12-24-2015 08:42 PM

Hi Dr. Rick,

Just to confirm, is it the second derivative that should be used to determine a minimum or maximum value?

Regards,

Posted in reply to stat_sas

12-25-2015 07:09 AM

No, I never mentioned the second derivative. The hills and valleys occur where the FIRST derivative is zero.

However, you are correct that you look at the second derivative in order to determine whether you have a hill or a valley.

For functions of more than one variable, the first derivative is replaced by the gradient, which is the vector of first partial derivatives. The second derivative is replaced by the Hessian matrix, which is the matrix of second partial derivatives. In addition to hills and valleys, multivariate functions have "saddle points" where the function is a hill in some directions, but a valley in others.
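
The standard textbook saddle example (chosen here for illustration, not taken from the thread) makes this concrete: for f(x, y) = x² − y², the gradient vanishes at the origin, but the Hessian eigenvalues have mixed signs, so the point is a hill in one direction and a valley in the other:

```python
import numpy as np

def grad(x, y):
    """Gradient of f(x, y) = x**2 - y**2: the vector of first partials."""
    return np.array([2.0 * x, -2.0 * y])

def hessian(x, y):
    """Hessian of f: the matrix of second partial derivatives (constant here)."""
    return np.array([[2.0, 0.0],
                     [0.0, -2.0]])

# Gradient is zero at the origin, so it is a critical point...
# ...but one Hessian eigenvalue is positive (valley along x) and one is
# negative (hill along y), so the origin is a saddle point.
eigvals = np.linalg.eigvalsh(hessian(0.0, 0.0))
```

This is why optimizers report the convergence criterion (gradient near zero) separately from checks on the Hessian: a zero gradient alone does not guarantee a maximum.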

Posted in reply to Rick_SAS

12-25-2015 12:03 PM

Thanks Dr. Rick - this is very helpful.