root finding, proc optmodel


09-14-2016 05:15 AM

I need to solve for the 3 parameters of a system of 3 nonlinear equations that arises from analytic entropy maximization with constraints.

The only root-finding tool I know of is the POLYROOT function in IML, and that works only for polynomials. Searching the internet, I found a discussion that recommended converting this problem into a minimization problem: minimize the sum of the squares of these functions with PROC OPTMODEL and the statement "solve with nlp".
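Schematically, my setup looks like the sketch below. I have replaced my actual entropy equations with placeholder residuals that have a known root at (1, 1, 1):

```sas
proc optmodel;
   var x1 init 2, x2 init 0.5, x3 init 1.5;

   /* placeholder residuals; the true entropy equations go here */
   impvar f1 = x1^2 + x2^2 + x3^2 - 3;
   impvar f2 = x1*x2 - x3;
   impvar f3 = exp(x1 - 1) - x2;

   /* a root of the system is a global minimizer with objective 0 */
   min ssq = f1^2 + f2^2 + f3^2;

   solve with nlp;
   print x1 x2 x3 ssq;
quit;
```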

When I do this, the solver stops after 5000 steps with "Solution Status: Iteration Limit Reached", but with the objective function at 3.6E-21, "Optimality Error: 0.00768", and "Infeasibility: 0", so it is pretty close to a solution.

Now, by chance, I also tried optimizing a constant function with my three equations as constraints (sketched below).

This time the routine converged in only 10 iterations, with parameter estimates equal to those obtained with the first method.
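Here is a sketch of that constant-objective variant on the same placeholder system:

```sas
proc optmodel;
   var x1 init 2, x2 init 0.5, x3 init 1.5;

   min z = 0;                        /* constant objective */

   /* the three equations imposed as equality constraints */
   con c1: x1^2 + x2^2 + x3^2 = 3;
   con c2: x1*x2 = x3;
   con c3: exp(x1 - 1) = x2;

   solve with nlp;
   print x1 x2 x3;
quit;
```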

While this seems to be a very convenient way to find the roots of a set of equations, I am a little bit uneasy about its theoretical foundation.

Is this method recommendable or are there easier and more sound solutions?

Thank you,

Florian

Accepted Solution

Posted in reply to Lohan

09-14-2016 10:43 AM - edited 09-14-2016 10:58 AM

Do you have linear constraints or nonlinear constraints? I assume linear.

The optimization approach is usually a good method. For high-dimensional roots, I like to use Newton's method in PROC IML to find the zeros, but the standard examples of Newton's method do not account for constraints. However, you can solve for the roots by recasting the problem as an optimization, as you've said. This is a standard way to solve for the roots of nonlinear systems, known as the least-squares minimization problem.
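For reference, here is a bare-bones Newton iteration in PROC IML. The system and its analytic Jacobian are illustrative placeholders with a root at (1, 1, 1), not your entropy equations:

```sas
proc iml;
/* placeholder system with root at (1,1,1) */
start Fun(x);
   f = j(1, 3, .);
   f[1] = x[1]##2 + x[2]##2 + x[3]##2 - 3;
   f[2] = x[1]*x[2] - x[3];
   f[3] = exp(x[1] - 1) - x[2];
   return( f );
finish;

start Jac(x);                       /* analytic Jacobian of Fun */
   J = j(3, 3, .);
   J[1,] = 2*x;                     /* gradient of f1 */
   J[2,] = x[2] || x[1] || (-1);    /* gradient of f2 */
   J[3,] = exp(x[1]-1) || (-1) || 0;/* gradient of f3 */
   return( J );
finish;

x = {2 0.5 1.5};                    /* initial guess */
do iter = 1 to 25;
   f = Fun(x);
   if max(abs(f)) < 1e-10 then leave;
   delta = solve(Jac(x), f`);       /* solve J*delta = f */
   x = x - delta`;                  /* Newton update */
end;
print iter x;
quit;
```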

PROC OPTMODEL is a good choice if you have it. For users who have SAS/IML but not SAS/OR, I recommend the SAS/IML NLPLM subroutine, which enables you to solve least-squares minimization problems like

min [ f_1^2(x) + f_2^2(x) + f_3^2(x) ]

where f_i(x) is the i-th component function. (For NLPLM, you set opt[1]=3 to specify that the objective is the sum of squares of 3 component functions.)

Example of using NLPLM to solve a least squares problem
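In outline, the call looks something like this on the same placeholder system; opt[1]=3 tells NLPLM that the objective is the sum of squares of 3 component functions:

```sas
proc iml;
start F_resid(x);                   /* returns the vector of 3 residuals */
   f = j(1, 3, .);
   f[1] = x[1]##2 + x[2]##2 + x[3]##2 - 3;
   f[2] = x[1]*x[2] - x[3];
   f[3] = exp(x[1] - 1) - x[2];
   return( f );
finish;

x0  = {2 0.5 1.5};                  /* initial guess */
opt = {3 1};                        /* opt[1]=3 component functions; opt[2]=1 prints some output */
call nlplm(rc, xres, "F_resid", x0, opt);
print rc, xres;
quit;
```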

In the second part of your question, you say

*>> "I tried also to optimize a constant function with my three equations as constraints"*

I don't have any practical experience doing this, but I think you are right to feel "uneasy" about this technique.

I would rather do an optimization with linear constraints (the first way?) than try to enforce nonlinear constraints (the second way). Linear constraints are pretty easy to enforce. Furthermore, optimization routines have many parameters that you can use to control the convergence of the algorithms: absolute error, relative error, magnitude of the gradient, etc. All of these quantities are exactly zero for a constant function, and most software provides fewer parameters to control how strictly you want to enforce nonlinear constraints. I would probably choose the first method over the second, if only because I understand its numerical properties better.

I'd be interested to hear whether @RobPratt has any experience or thoughts to contribute.


Posted in reply to Rick_SAS

09-15-2016 09:38 AM

Thank you all for your answers! I wasn't aware of the nlpxx solvers in IML. This is very helpful.

Rick, my constraints are nonlinear. Optimizing a constant function subject to linear constraints would be far easier.

I have now managed to solve the least-squares optimization in as few steps as the constant-function-with-constraints approach, simply by scaling some variables. Presumably this was a precision problem, so I shall stick with the least-squares optimization.
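To illustrate what I mean by scaling (with made-up numbers, not my real model): if a parameter is expected to be of order 1e4, I optimize a rescaled variable near 1 and substitute back:

```sas
proc optmodel;
   num scale = 1e4;                 /* assumed order of magnitude of x1 */
   var x1s init 1.2, x2 init 1;     /* x1s is the rescaled unknown, O(1) */
   impvar x1 = scale * x1s;         /* recover the original parameter */

   /* placeholder residuals with badly scaled coefficients */
   impvar f1 = x1 - 12500*x2;
   impvar f2 = x2 - 0.8;

   min ssq = f1^2 + f2^2;
   solve with nlp;
   print x1 x2;
quit;
```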

Do you also know of a way to solve this kind of problem in JMP?


Posted in reply to Rick_SAS

09-21-2016 03:23 PM

From my perspective, the main appeal of the least-squares approach is that it returns a nearly feasible solution even if the constraints cannot be satisfied. If you instead use constraints and the problem is infeasible, the solver returns only the conclusion "Problem may be infeasible." Whether this distinction is meaningful depends on whether a nearly feasible solution is useful to you.

Also, you might see better performance by explicitly introducing slack variables to be squared in the objective.
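For example, on a placeholder system, the slack-variable reformulation looks like this; the solver then sees simple nonlinear equalities plus a convex quadratic objective:

```sas
proc optmodel;
   var x1 init 2, x2 init 0.5, x3 init 1.5;
   var s{1..3};                     /* free slack variables */

   /* each equation is written as f_i(x) = s_i ... */
   con c1: x1^2 + x2^2 + x3^2 - 3 = s[1];
   con c2: x1*x2 - x3 = s[2];
   con c3: exp(x1 - 1) - x2 = s[3];

   /* ... and the objective penalizes the squared slacks */
   min ssq = sum{i in 1..3} s[i]^2;

   solve with nlp;
   print x1 x2 x3;
quit;
```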

Can you please share the PROC OPTMODEL code and data?