## A Problem with NLPNMS when using Modules with random generations

Hello,

I am running a PROC IML (SAS 9.0) optimization program in which the objective function and the constraint are computed by two modules. The modules return values that depend on draws from the normal distribution. My questions are:

1) When I run the optimization (NLPNMS), does it compute the objective function value using different random draws at different iterations? I need to fix the random draws across optimization iterations, so that at each iteration the only thing that varies is the parameter estimates, not the generated x values. Can this be achieved?

2) Is there any way to reduce the run time? The program currently takes more than a day.

Here is the constraint module (note that the objective function module is very similar):

```sas
start AYA(hry) global(hr);
   hr = hry;
   sss = 0;
   bb0 = j(1,1,0.);
   p = 3;
   nruns = 100000;
   vvv = 0;
   do reps = 1 to nruns;
      rl = 0;
      z = 0 // j(p-1,1,0);
      ti = ssq(z);
      do until (ti > hr);
         x = normal(repeat(-1,p,1));
         sh = sss // j(p-1,1,0);
         x = x + sh;
         ei = sqrt(ssq(x-z));
         if ei > hr then w = (ei-(1-hr)*hr)/ei;
         else w = hr;
         z = (1-w)*z + w*x;
         ti = ssq(z);
         rl = rl + 1;
      end;
      vvv = vvv + rl;
   end;
   rrr = vvv/nruns;
   bb0 = rrr - 200;
   return(bb0);
finish;

con = {0 0 0,
       . 1 .};
hr = {1, 0.1, 2};
optn = j(1,11,.); optn=5; optn=1; optn=0;
CALL NLPNMS(rc, hrres, "OBJECTIVE", hr) blc=con opt=optn nlc="AYA";
quit;
```

Thanks,

A. Emara


## Re: A Problem with NLPNMS when using Modules with random generations

In answer to 1): as it stands, each time the optimization algorithm evaluates the objective or the constraint, the NORMAL function returns different values. I highly doubt that you can get a valid optimization this way. You need to put x in the GLOBAL clause and compute it once, before you start the optimization. I assume you will then need to run the optimization over and over for different samples from the normal distribution?

In answer to 2): you may find that the optimization converges faster after doing 1).
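The idea behind 1) can be sketched outside of SAS. Here is a minimal Python illustration (the objective and sample are toy stand-ins, not the poster's ARL simulation) of why freezing the random draws makes the objective deterministic:

```python
import numpy as np

# Illustration (a Python stand-in, not the poster's simulation): freeze the
# random draws by generating them ONCE, outside the objective, so every
# evaluation sees the same sample ("common random numbers").
rng = np.random.default_rng(1000)        # fixed seed; names are illustrative
SAMPLE = rng.standard_normal((100, 2))   # generated once, reused on every call

def objective(h):
    # Toy objective that depends on the stored sample and the parameter h.
    return float(np.mean((SAMPLE + 0.5 - h) ** 2))

# Repeated evaluations at the same point now agree exactly, which is what a
# deterministic optimizer such as Nelder-Mead requires.
assert objective(0.3) == objective(0.3)
```

Because the sample never changes between calls, the only thing that varies across optimizer iterations is the parameter value, which is exactly what the poster asked for.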

## Re: A Problem with NLPNMS when using Modules with random generations

I cannot answer 1 or 2 because I don't know what your program is supposed to do and the program contains no comments.

At each iteration, z starts at the origin. The inner loop does sort of a random walk in 3D until z drifts sufficiently far from the origin.

The variable rl counts how many steps it took to drift away.

The rest of the program repeats that process many times and records the average time for z to drift away.

You are trying to find the parameters that minimize this average time, where the parameters are:

hr[1] determines how far z can drift before you decide that the drift is "far enough"

hr[2] is some kind of mixing parameter that determines how z drifts (?)

hr[3] is some scaling factor (?)

Is any of this close to the truth?  Could you provide a high-level summary that describes what the program is trying to accomplish?
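For concreteness, here is my Python reading of that inner loop (hr is treated as the scalar drift threshold; this is a sketch of my interpretation, not a verified port of the IML code):

```python
import numpy as np

# A sketch of one replication: z starts at the origin and takes weighted steps
# toward random draws until its squared norm exceeds hr; rl counts the steps.
def run_length(hr, shift, p=2, rng=None):
    """Count the steps until the squared norm of z exceeds hr."""
    if rng is None:
        rng = np.random.default_rng(1)
    z = np.zeros(p)
    rl = 0
    while z @ z <= hr:                     # ti = ssq(z)
        x = rng.standard_normal(p)         # x = normal(repeat(-1,p,1))
        x[0] += shift                      # shift only the first coordinate
        ei = np.sqrt((x - z) @ (x - z))    # ei = sqrt(ssq(x-z))
        w = (ei - (1 - hr) * hr) / ei if ei > hr else hr
        z = (1 - w) * z + w * x
        rl += 1
    return rl

# The objective is the average of rl over many replications (the "ARL").
arl = np.mean([run_length(0.8856, 0.5, rng=np.random.default_rng(i))
               for i in range(200)])
```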

## Re: A Problem with NLPNMS when using Modules with random generations

Yes, this is very close to the truth. I am trying to find the optimal parameter vector (hr), where the objective function represents the average of a certain function over 100,000 runs. That function counts the number of generated values until a certain metric, ti, exceeds the parameter hr. I have one constraint whose form is very close to the objective function's. I have added comments to the program. I really appreciate your help.

A. Emara

```sas
proc iml;

start ARL(hry) global(hr);
   hr = hry;
   shift = 0.5;
   p = 2;                              /* p is the number of variables */
   nruns = 100000;                     /* nruns is the number of simulation runs */
   arlv = 0;
   do reps = 1 to nruns;               /* start the simulation */
      rl = 0;
      z = 0 // j(p-1,1,0);             /* initially z is a px1 vector of zeros */
      ti = ssq(z);                     /* ti is the sum of squares of z */
      do until (ti > hr);
         x = normal(repeat(-1,p,1));   /* p univariate standard normal variables */
         sh = shift // j(p-1,1,0);     /* add the shift for the out-of-control case */
         x = x + sh;
         ei = sqrt(ssq(x-z));          /* the norm of the error */
         if ei > hr then w = (ei-(1-hr)*hr)/ei;
         else w = hr;
         z = (1-w)*z + w*x;
         ti = ssq(z);
         rl = rl + 1;
      end;
      arlv = arlv + rl;
   end;
   arlr = arlv/nruns;
   bb = arlr;
   return(bb);
finish;

start ARL0(hry) global(hr);
   hr = hry;
   shift = 0;
   bb0 = j(1,1,0.);
   p = 2;                              /* p is the number of variables */
   nruns = 100000;                     /* nruns is the number of simulation runs */
   arlv = 0;
   do reps = 1 to nruns;               /* start the simulation */
      rl = 0;
      z = 0 // j(p-1,1,0);             /* initially z is a px1 vector of zeros */
      ti = ssq(z);                     /* ti is the sum of squares of z */
      do until (ti > hr);
         x = normal(repeat(-1,p,1));   /* p univariate standard normal variables */
         sh = shift // j(p-1,1,0);     /* add the shift for the out-of-control case */
         x = x + sh;
         ei = sqrt(ssq(x-z));          /* the norm of the error */
         if ei > hr then w = (ei-(1-hr)*hr)/ei;
         else w = hr;
         z = (1-w)*z + w*x;
         ti = ssq(z);
         rl = rl + 1;
      end;
      arlv = arlv + rl;
   end;
   arlr = arlv/nruns;
   bb0 = arlr - 200;
   return(bb0);
finish;

con = {0 0 0,
       . 1 .};
hr = {0.8856 0.17 3.85};               /* initial vector */
optn = j(1,11,.); optn=5; optn=1; optn=0;
CALL NLPNMS(rc, hrres, "ARL", hr) blc=con opt=optn nlc="ARL0";
quit;
```

## Re: A Problem with NLPNMS when using Modules with random generations

OK. I think I can solve your problem without running any simulations.  An optimal solution is hr = {0, anything, 0}.

I might have some of the details wrong, but the basic idea is that you are trying to minimize the number of random motions required for a particle to leave a certain neighborhood. hr describes the width of the neighborhood, so the smaller hr is, the sooner the particle will leave. If hr=0, then ei=norm(x)>0, which forces w=1.

If you plug in those values, the constraint module returns its minimum value of -199, which means it takes exactly one step to leave the neighborhood. The middle value of hr doesn't matter, which probably explains why you are having convergence problems: the objective function does not have a unique minimum where the gradient is zero in the three-dimensional parameter space.
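That arithmetic can be checked numerically. The sketch below is my Python reading of the loop with hr treated as a scalar threshold (an illustration, not a port of the IML code):

```python
import numpy as np

# Check: with hr = 0, w = (ei - (1-0)*0)/ei = 1, so z jumps straight to x and
# ti = ssq(x) > 0 ends the loop after exactly one step, every time.
rng = np.random.default_rng(7)
hr = 0.0
for _ in range(100):
    z = np.zeros(2)
    rl = 0
    while z @ z <= hr:
        x = rng.standard_normal(2) + np.array([0.5, 0.0])   # shifted draw
        ei = np.sqrt((x - z) @ (x - z))
        w = (ei - (1 - hr) * hr) / ei if ei > hr else hr
        z = (1 - w) * z + w * x
        rl += 1
    assert rl == 1     # one step, so ARL = 1 and the constraint is 1 - 200 = -199
```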

This information might cause you to reformulate your model of the process.  To aid in future attempts, here are some tips:

1) Start with x and z being scalar quantities. When you can solve the 1D problem, then let x and z be 2D vectors.

2) Always debug the objective and constraint modules before moving on to optimization.

3) Evaluate the objective function on a grid of parameter values to try to get a feel for the objective function and to obtain a good initial guess to feed to the NLP procedures.
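Tip 3 can be sketched as follows (Python for illustration; f is a cheap stand-in for the real simulation-based objective):

```python
import numpy as np

# Evaluate a stand-in objective on a coarse grid to locate a promising
# starting point before handing the problem to an optimizer.
def f(h):
    return (h - 0.9) ** 2 + 1.0       # toy objective with a minimum at 0.9

grid = np.linspace(0.0, 2.0, 21)      # step size 0.1
values = np.array([f(h) for h in grid])
best = grid[np.argmin(values)]        # a good initial guess to feed to NLPNMS
```

Plotting `values` against `grid` also reveals whether the objective is flat in some direction, which is exactly the pathology diagnosed above.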

## Re: A Problem with NLPNMS when using Modules with random generations

Thank you so much, but actually I have a constraint that restricts my minimum value; I am not free to choose any values for the parameters just so that the objective function is minimized. I need ARL0 >= 200. The syntax gives answers that seem reasonable, but it takes a very long time. Hutch suggested putting x in a GLOBAL clause. Would this help, and how could it be done? Could I fix the random draws across optimization iterations, so that at every guess of the solution the same generated data are used? I hope I have clarified my point.

## Re: A Problem with NLPNMS when using Modules with random generations

I have given you several tips and suggestions. I need to attend to some pressing work-related issues, so I'll let others offer further advice. Good luck.

## Re: A Problem with NLPNMS when using Modules with random generations

```sas
proc iml;

start ARL(hr);
   call randseed(1000, 1);             /* re-initialize the stream on every call */
   shift = 0.5;
   p = 2;                              /* p is the number of variables */
   nruns = 100000;                     /* nruns is the number of simulation runs */
   arlv = 0;
   sh = shift // j(p-1,1,0);           /* sh unchanged throughout the simulation */
   x = j(p,1);                         /* set up x with proper dimensions, just once */
   do reps = 1 to nruns;               /* start the simulation */
      rl = 0;
      z = 0 // j(p-1,1,0);             /* initially z is a px1 vector of zeros */
      ti = ssq(z);                     /* ti is the sum of squares of z */
      do until (ti > hr);
         call randgen(x, 'normal');    /* p univariate standard normal variables */
         x = x + sh;                   /* add the shift for the out-of-control case */
         ei = sqrt(ssq(x-z));          /* the norm of the error */
         if ei > hr then w = (ei-(1-hr)*hr)/ei;
         else w = hr;
         z = (1-w)*z + w*x;
         ti = ssq(z);
         rl = rl + 1;
      end;
      arlv = arlv + rl;
   end;
   arlr = arlv/nruns;
   bb = arlr;
   return(bb);
finish;

start ARL0(hry) global(hr);
   hr = hry;
   call randseed(1000, 1);             /* re-initialize the stream on every call */
   shift = 0;
   bb0 = j(1,1,0.);
   p = 2;
   sh = shift // j(p-1,1,0);           /* sh unchanged throughout the simulation */
   x = j(p,1);                         /* set up x with proper dimensions, just once */
   nruns = 100000;                     /* nruns is the number of simulation runs */
   arlv = 0;
   do reps = 1 to nruns;               /* start the simulation */
      rl = 0;
      z = 0 // j(p-1,1,0);             /* initially z is a px1 vector of zeros */
      ti = ssq(z);                     /* ti is the sum of squares of z */
      do until (ti > hr);
         call randgen(x, 'normal');    /* p univariate standard normal variables */
         x = x + sh;                   /* add the shift for the out-of-control case */
         ei = sqrt(ssq(x-z));          /* the norm of the error */
         if ei > hr then w = (ei-(1-hr)*hr)/ei;
         else w = hr;
         z = (1-w)*z + w*x;
         ti = ssq(z);
         rl = rl + 1;
      end;
      arlv = arlv + rl;
   end;
   arlr = arlv/nruns;
   bb0 = arlr - 200;
   return(bb0);
finish;

con = {0 0 0,
       . 1 .};
hr = {0.8856 0.17 3.85};               /* initial vector */
optn = j(1,11,.); optn=5; optn=1; optn=0;
CALL NLPNMS(rc, hrres, "ARL", hr) blc=con opt=optn nlc="ARL0";
print hrres;
quit;
```

## Re: A Problem with NLPNMS when using Modules with random generations

Thank you so much, but are these functions available in SAS 9.0?

A. Emara

## Re: A Problem with NLPNMS when using Modules with random generations

Oops, I missed the fact that you are still on 9.0. RANDGEN and RANDSEED were introduced in SAS 9.1. I don't believe the old random number generation functions allow arbitrary re-initialization of the random number stream. To get the same sampling, you will have to create a table of random variates at the beginning of your program and re-use that stream on each call. If you need more samples than can be stored in memory, you might have to store them in a data set and read them in as needed.
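The table-and-replay workaround looks like this in sketch form (Python for illustration; the file name and the statistic are stand-ins, with the saved file playing the role of the SAS data set):

```python
import numpy as np

# Generate the variate table once, write it out, and have every evaluation
# read the SAME stored stream back from the beginning.
np.save("variates.npy",
        np.random.default_rng(1000).standard_normal((1000, 2)))

def evaluate(h):
    draws = np.load("variates.npy")    # replay the identical stream each call
    return float(np.mean((draws[:, 0] + h) ** 2))

# Two evaluations at the same parameter value now return identical results,
# so the optimizer sees a deterministic function.
assert evaluate(0.1) == evaluate(0.1)
```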

From The DO Loop