I am trying to minimize a function f(x, y), where the ranges of x and y are given. f(x, y) is the ratio of g(x, y) to the minimum of h(x, y, z), where h is minimized over z for each (x, y) and the range of z is given.
Example:
f(x, y) = g(x, y) / min_z h(x, y, z);
g(x, y) = 1/x + 1/y;
h(x, y, z) = x/z + y;
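To make the two-level structure concrete, here is a small sketch in Python (an analogy using SciPy, not SAS/IML; the setup mirrors the problem above): for each (x, y) the inner problem minimizes h over z in [1, 10], and the outer problem minimizes f = g / min_z h over the box for (x, y).

```python
# Nested optimization sketch (illustrative, assumes SciPy is available).
from scipy.optimize import minimize, minimize_scalar

def f(xy):
    x, y = xy
    # inner problem: minimize h(x, y, z) = x/z + y over z in [1, 10]
    inner = minimize_scalar(lambda z: x / z + y, bounds=(1, 10), method="bounded")
    g = 1 / x + 1 / y
    return g / inner.fun

# outer problem: minimize f over x in [0.01, 0.05], y in [0.02, 0.05]
res = minimize(f, x0=[0.01, 0.02], bounds=[(0.01, 0.05), (0.02, 0.05)])
print(res.x)  # both coordinates end at their upper bound 0.05
```

Because f is decreasing in both x and y on this box, the outer solver is pushed to the upper-bound corner, which matches the degenerate behavior discussed later in this thread.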
The code below is definitely not working, but I'd like to show the general picture:
proc iml;
start funch(x) global(var1, var2);
   /* var1 and var2 are meant to be x[1] and x[2] from funcf */
   f = var1/x[1] + var2;
   return (f);
finish funch;

start funcf(x);
   con = {1, 10};
   x0 = {1};
   optn = {1, 1};
   call nlphqn(rc, xres, "funch", x0, optn) blc=con;
   h = x[1]/xres + x[2];
   g = 1/x[1] + 1/x[2];
   f = g/h;
   return (f);
finish funcf;

con = {0.01 0.02,
       0.05 0.05};
x0 = {0.01 0.02};
optn = {1, 1};
call nlphqn(rc, xres, "funcf", x0, optn) blc=con;
print xres;
Your help would be appreciated!
Calling @Rick_SAS
Meanwhile, I recommend starting by exploring visually what the function looks like. Mathematically I cannot help you, and I lack knowledge of a feasible data range for x, y, and z. Here is some code to get you started.
proc iml;
x = do(0.1, 10, 0.1);
y = do(0.1, 10, 0.1);
xy = expandgrid(x, y);

start fxy(xx, yy);
   gxy = 1/xx + 1/yy;
   z = rand('uniform', 1, 10);
   /* z = 10; */
   hxyz = xx/z + yy;
   f = z || gxy/hxyz;
   return (f);
finish fxy;

res = j(nrow(xy), 4, .);
do i = 1 to nrow(xy);
   res[i, 1:2] = xy[i, ];
   res[i, 3:4] = fxy(xy[i, 1], xy[i, 2]);
end;

create outs from res [colname={'x' 'y' 'z' 'res'}];
append from res;
close outs;
quit;
proc template;
   define statgraph surfaceplotparm;
      begingraph;
         entrytitle "Surface Plot of task";
         layout overlay3d / cube=false;
            surfaceplotparm x=x y=y z=res /
               reversecolormodel=true
               colorresponse=z
               colormodel=(red yellow green);
         endlayout;
      endgraph;
   end;
run;

proc sgrender data=outs template=surfaceplotparm;
run;
When you say, "h(x, y, z) is minimized for each (x, y) and the range z is provided," do you mean that you are treating h as a 1-D function? For each (x,y) value, you want to minimize h for z in the range [1, 10]?
If so, the minimizer is always z = z_max = 10 when x > 0, as appears to be the case. The minimum of h(z; x, y) = x/z + y is achieved when z is as large as possible, and you have constrained z to [1, 10].
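The monotonicity step can be written out explicitly (same symbols as above):

```latex
\frac{\partial h}{\partial z}
  = \frac{\partial}{\partial z}\left(\frac{x}{z} + y\right)
  = -\frac{x}{z^2} < 0 \quad \text{for } x > 0,
```

so h is strictly decreasing in z, the inner minimum on [1, 10] is attained at z = 10, and min_z h(x, y, z) = x/10 + y.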
The problem therefore reduces to minimizing the function
f(x,y) = (1/x + 1/y) / (x/10 + y)
Both partial derivatives are negative when x>0 and y>0, so the minimum occurs when x and y are as large as possible. Therefore, on the domain [0.01, 0.05] x [0.02, 0.05], the solution is at (x,y) = (0.05, 0.05).
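Since f is decreasing in both variables on the box, it suffices to compare the corner values; here is a quick spot-check in plain Python (illustrative only):

```python
# Evaluate the reduced objective f(x, y) = (1/x + 1/y) / (x/10 + y)
# at the four corners of the constraint box.
def f(x, y):
    return (1/x + 1/y) / (x/10 + y)

corners = [(x, y) for x in (0.01, 0.05) for y in (0.02, 0.05)]
best = min(corners, key=lambda p: f(*p))
print(best, f(*best))  # smallest value is at (0.05, 0.05), f = 40/0.055 ~ 727.3
```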
Is this the correct interpretation of your question?
Thank you for the response. Yes, h(x, y, z) is a 1-dimensional function of z for each (x, y). The example I presented is simple, since both functions are monotonic. The real ones are actually very complicated ☹️
OK. Here is the IML solution to the simple problem you posted. The problem is degenerate (no minima), so you get lots of NOTEs in the log.
proc iml;
start funch(z) global(g_xy);
   x = g_xy[1]; y = g_xy[2];
   f = x / z + y;
   return (f);
finish funch;

/* first, test that we get the correct answer when we optimize funch()
   for fixed values of (x,y) */
g_xy = {0.1 0.15};
con = {1, 10};
x0 = {1};
optn = {1, 0};      /* no printing! */
call nlphqn(rc, xres, "funch", x0, optn) blc=con;
print xres;

/* now set up the real objective function problem */
start funcf(x) global(g_xy);
   g_xy = x;        /* copy x -> g_xy */
   con = {1, 10};
   x0 = {1};
   optn = {1, 0};   /* no printing! */
   call nlphqn(rc, z, "funch", x0, optn) blc=con;
   h = x[1]/z + x[2];
   g = 1/x[1] + 1/x[2];
   f = g/h;
   return (f);
finish funcf;

con = {0.01 0.02,
       0.05 0.05};
x0 = {0.01 0.02};
optn = {1, 1};
call nlphqn(rc, xres, "funcf", x0, optn) blc=con;
print xres;
QUIT;
Thank you for the code. It really helps!
The log shows either "NOTE: ABSGCONV convergence criterion satisfied." or "NOTE: All parameters are actively constrained. Optimization cannot proceed." Is something wrong with the second note? It sounds as if no optimization was performed. Any input here?
> "NOTE: All parameters are actively constrained. Optimization cannot proceed."
The note means that the algorithm wants to take a step in the "downhill" direction, but the constraints prevent it from doing so. As I said, your toy problem is degenerate. Every subproblem has the solution on the boundary of the constraint region.
The messages in the log are NOTEs. They are not WARNINGs or ERRORs. So there is something unusual happening, but nothing is wrong.
One more question about optn = {1}: does it mean the call solves a maximization problem? Since the aim is to minimize the function, shouldn't optn[1] be 0 instead of 1? When I replace 1 with 0, the error messages below are shown.
ERROR: NLPHQN call: The number of observations must be set.
Use the first element of the OPT vector.
ERROR: (execution) Matrices do not conform to the operation.
You stated that you wanted to use the NLPHQN subroutine. The documentation states "The NLPHQN subroutine uses a hybrid quasi-Newton least squares method [emphasis added] to compute an optimum value of a function." Later on the same page, it states "Note: In least squares subroutines, you must set the first element of the opt vector to m, the number of functions."
Since the objective function in this problem is scalar-valued, you can use a different optimizer (such as NLPNRA or NLPQN) to solve the problem. If you use a different optimizer, then you can set opt[1]=0, which specifies that the optimizer should find a minimum of a scalar-valued function.
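For readers more familiar with other toolkits, the same distinction exists in SciPy (an analogy only, not the IML API): least squares routines take a vector of m residuals, much as NLPHQN expects opt[1] = m, while general-purpose optimizers take a single scalar value, like NLPQN with opt[1] = 0.

```python
# Least-squares vs. scalar optimization of the same toy problem
# (illustrative analogy; assumes SciPy is available).
from scipy.optimize import least_squares, minimize

def residuals(p):
    # m = 2 residual functions; the solver minimizes their sum of squares
    return [p[0] - 1, p[1] - 2]

def scalar_obj(p):
    # one scalar value: the same sum of squares written explicitly
    return (p[0] - 1)**2 + (p[1] - 2)**2

r1 = least_squares(residuals, x0=[0, 0])
r2 = minimize(scalar_obj, x0=[0, 0])
print(r1.x, r2.x)  # both converge near (1, 2)
```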
Got it! Thank you so much!