Lohan
Calcite | Level 5

I have to solve for 3 parameters from a system of 3 nonlinear equations, which results from analytic entropy maximization with constraints.

The only root-finding routine I know of is POLYROOT in IML, and it works only for polynomials. Searching the internet, I found a discussion recommending that this problem be converted into a minimization problem by minimizing the sum of the squares of these functions with PROC OPTMODEL and the option "solve with nlp".

When I do this, the solver stops after 5000 steps with "Solution Status: Iteration Limit Reached", but with the objective function at 3.6E-21, "Optimality Error: 0.00768", and "Infeasibility: 0", so it is pretty close to a solution.
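A sketch of this formulation, with placeholder equations standing in for my actual entropy conditions, looks like this:

proc optmodel;
   var x1, x2, x3;
   /* placeholder equations; the actual f1, f2, f3 from the
      entropy maximization go here */
   min ssq = (x1 + x2*x3 - 1)^2
           + (x1*x2 + x3 - 2)^2
           + (x1 + x2 + x3 - 3)^2;
   solve with nlp;
   print x1 x2 x3;
quit;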

Now, by chance, I also tried optimizing a constant function with my three equations as constraints.

This time the routine converged in only 10 iterations, with parameter estimates equal to those obtained with the first method.
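In OPTMODEL terms, this second approach looks roughly like this (placeholder equations again):

proc optmodel;
   var x1, x2, x3;
   min zero = 0;                  /* constant objective */
   /* the three equations, imposed as nonlinear constraints */
   con c1: x1 + x2*x3 = 1;
   con c2: x1*x2 + x3 = 2;
   con c3: x1 + x2 + x3 = 3;
   solve with nlp;
   print x1 x2 x3;
quit;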

While this seems to be a very convenient way to find the roots of a set of equations, I am a little bit uneasy about its theoretical foundation. 

Is this method recommendable or are there easier and more sound solutions?

 

Thank you,

Florian 

1 ACCEPTED SOLUTION

Accepted Solutions
Ksharp
Super User
In IML, there are many functions, such as FROOT() and the NLPxxx routines, that can solve nonlinear optimization problems.
You could also post this in the IML forum.
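For a single equation in one unknown, FROOT() is the simplest option. A minimal sketch, using the same example function as the GA code below (the intervals are assumed to bracket the roots; note the function has a pole at x = -1):

proc iml;
start f(x);
   return( (0.3-x)##2/(1+x)##2 - 48 );
finish;
/* each row is an interval on which f changes sign */
z = froot("f", {-10 -1.05, -0.95 0});
print z;
quit;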

For harder problems, the following is genetic algorithm (GA) code I wrote to find a root of a function.


proc iml;
/* Define the objective function. NOTE: wrap f(x) in ABS() so that
   minimizing the objective drives f(x) toward zero */
start func(x);
   v = abs( (0.3-x)##2/(1+x)##2 - 48 );
   return (v);
finish;

/* 1) Encoding */
/* The first 1 means fixed-length real encoding: there is only one
   parameter X (fixed length) in the objective function, and the search
   interval for X is continuous (real) */
/* The second 1 is the size of the encoding, since there is only one
   parameter X in the objective function */
/* 1234 is the initial random seed */
id = gasetup(1,1,1234);

/* 2) Objective */
/* id is the identifier returned by gasetup() */
/* 0 means minimize the user module */
/* "func" is the user module defined above */
call gasetobj(id,0,"func");

/* 3) Selection */
/* 100 means the best 100 members are carried into the next generation */
/* 1 selects the dual-tournament selection method */
/* 0.95 is the selective pressure: the member with the better objective
   value is chosen with probability 0.95 */
call gasetsel(id,100,1,0.95);

/* 4) Crossover */
/* 0.05 is the crossover probability. For real encoding I recommend a
   small value, to avoid losing the best member */
/* 4 selects the heuristic crossover method */
call gasetcro(id,0.05,4);

/* 5) Mutation */
/* 0.05 is the mutation probability, using the default mutation
   method (uniform) */
call gasetmut(id,0.05);

/* Initial population is 10000 members; the search range is [-10,10] */
call gainit(id,10000,{-10,10});

niter = 100;                        /* number of generations */
summary = j(niter,2);
mattrib summary colname={"Min Value" "Avg Value"} label="";

/* 6) Repeat */
do i = 1 to niter;
   call garegen(id);                /* produce the next generation */
   call gagetval(value, id);        /* objective values of all members */
   summary[i,1] = value[1];         /* best objective value this generation */
   summary[i,2] = value[:];         /* average objective value this generation */
end;

/* Get the best member X and the best objective value */
call gagetmem(mem, value, id, 1);
print mem[ l="ROOT:"],
      "Min Value: " value[l = ""] ;
iteration = t(1:niter);
print iteration summary;
call gaend(id);
quit;



4 REPLIES 4
Rick_SAS
SAS Super FREQ

Do you have linear constraints or nonlinear constraints? I assume linear.

 

The optimization approach is usually a good method. For high-dimensional roots, I like to use Newton's method in PROC IML to find the zeros, but the standard examples of using Newton's method do not account for constraints. However, you can solve for the roots by recasting the problem as an optimization, as you've said. This is a standard way to solve for roots of nonlinear systems. It is called the "least squares minimization problem."
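For instance, here is a minimal Newton sketch in PROC IML for a hypothetical (unconstrained) 3x3 system; the equations and the Jacobian are placeholders, not your actual system:

proc iml;
start F(x);                          /* hypothetical system F(x) = 0 */
   f = j(3,1,.);
   f[1] = x[1] + x[2]*x[3] - 1;
   f[2] = x[1]*x[2] + x[3] - 2;
   f[3] = x[1] + x[2] + x[3] - 3;
   return( f );
finish;
start jac(x);                        /* analytic Jacobian of F */
   return( (1    || x[3] || x[2]) //
           (x[2] || x[1] || 1   ) //
           (1    || 1    || 1   ) );
finish;
x = {0.5, 0.5, 0.5};                 /* initial guess */
do iter = 1 to 25 while( max(abs(F(x))) > 1e-12 );
   x = x - solve( jac(x), F(x) );    /* Newton step: solve J*dx = F(x), then x = x - dx */
end;
print x;
quit;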

 

PROC OPTMODEL is a good choice if you have it. For users who have SAS/IML but not SAS/OR, I recommend the SAS/IML NLPLM subroutine, which enables you to solve least-squares minimization problems like

min [ f_1^2(x) + f_2^2(x) + f_3^2(x) ]

where f_i(x) is the i-th component function. (You would set opt[1]=3 to specify that the least squares problem has three component functions.)

Example of using NLPLM to solve a least squares problem
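A minimal sketch of the NLPLM call for a hypothetical 3x3 system (the equations are placeholders):

proc iml;
start F(x);                      /* returns the vector of residuals f_i(x) */
   f = j(3,1,.);
   f[1] = x[1] + x[2]*x[3] - 1;
   f[2] = x[1]*x[2] + x[3] - 2;
   f[3] = x[1] + x[2] + x[3] - 3;
   return( f );
finish;
x0  = {0 0 0};                   /* initial guess */
opt = {3 1};                     /* opt[1]=3: three least squares functions */
call nlplm(rc, xres, "F", x0, opt);
print xres;
quit;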

 

In the second part of your question, you say

>> "I tried also to optimize a constant function with my three equations as constraints"

I don't have any practical experience doing this, but I think you are right to feel "uneasy" about this technique.

I would rather do an optimization with linear constraints (the first way?) than try to enforce nonlinear constraints (the second way). Linear constraints are pretty easy to enforce. Furthermore, optimization routines have many parameters that you can use to control the convergence of the algorithms: absolute error, relative errors, magnitude of gradient, etc.  All of these quantities are exactly zero for a constant function, and most software provides fewer parameters that control how strictly you want to enforce nonlinear constraints. I would probably choose the first method over the second, if only because I understand the numerical properties better.

 

I'd be interested to hear if @RobPratt has any experience or thoughts to contribute.

Lohan
Calcite | Level 5

Thank you all for your answers! I wasn't aware of the nlpxx solvers in IML. This is very helpful. 

 

Rick, my constraints are nonlinear. Optimizing a constant function subject to linear constraints would be far easier.

I have now managed to solve the least squares optimization in as few steps as the constant-function-with-constraints approach, simply by scaling some variables. Presumably this was a precision problem, so I shall stick with the least squares optimization.

 

Do you also know of a way to solve this kind of problem in JMP?

RobPratt
SAS Super FREQ

From my perspective, the main appeal of the least-squares approach is that it returns a nearly feasible solution even if the constraints cannot be satisfied.  If you instead use constraints and the problem is infeasible, the solver returns only the conclusion "Problem may be infeasible."  Whether this distinction is meaningful depends on whether a nearly feasible solution is useful to you.

 

Also, you might see better performance by explicitly introducing slack variables to be squared in the objective.
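In OPTMODEL terms, that might look like this (with placeholder equations):

proc optmodel;
   var x1, x2, x3;
   var s{1..3};                   /* slack (residual) variables */
   min ssq = sum{i in 1..3} s[i]^2;
   /* each equation holds exactly when its slack is zero */
   con c1: x1 + x2*x3 + s[1] = 1;
   con c2: x1*x2 + x3 + s[2] = 2;
   con c3: x1 + x2 + x3 + s[3] = 3;
   solve with nlp;
   print x1 x2 x3;
quit;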

 

Can you please share the PROC OPTMODEL code and data?

