
Posted 04-17-2014 05:03 AM
(1236 views)

Hello,

I am running a PROC IML (SAS 9.0) optimization program where the objective function and constraint are computed through two modules. The modules return values that are based on certain draws from the normal distribution. **My questions are:**

1. When I run the optimization (NLPNMS), does the optimizer compute the objective function value using different random draws at different iterations? I need to fix the random draws across optimization iterations, so that at each iteration the only thing that varies is the parameter estimates, not the generated x values. Can this be achieved?

2. Is there any way to reduce the run time? The program currently takes more than a day to produce results!

Here is the constraint module (note that the objective function module is very similar to it):

start AYA(hry) global(hr);
   hr = hry;
   sss = 0;
   bb0 = j(1, 1, 0.);
   p = 3;
   nruns = 100000;
   vvv = 0;
   do reps = 1 to nruns;
      rl = 0;
      z = 0 // j(p-1, 1, 0);
      ti = ssq(z);
      do until (ti > hr[1]);
         * the following is the random part I am asking about;
         x = normal(repeat(-1, p, 1));
         sh = sss // j(p-1, 1, 0);
         x = x + sh;
         ei = sqrt(ssq(x - z));
         if ei > hr[3] then w = (ei - (1 - hr[2])*hr[3]) / ei;
         else w = hr[2];
         z = (1 - w)*z + w*x;
         ti = ssq(z);
         rl = rl + 1;
      end;
      vvv = vvv + rl;
   end;
   rrr = vvv / nruns;
   bb0[1] = rrr - 200;
   return(bb0);
finish;

con = { 0 0 0,
        . 1 . };
hr = {1, 0.1, 2};
optn = j(1, 11, .); optn[2] = 5; optn[10] = 1; optn[11] = 0;
CALL NLPNMS(rc, hrres, "OBJECTIVE", hr) blc=con opt=optn nlc="AYA";
quit;

Thanks,

A. Emara

9 REPLIES


In answer to 1): as it stands, each time the optimization algorithm evaluates the objective or constraint, the NORMAL function will return different values. I highly doubt that you can get a valid optimization under these conditions. You need to put x in the GLOBAL clause and compute it once, before you start the optimization. I assume you will then need to run the optimization over and over again for different samples from the normal distribution?

In answer to 2): you may find the optimization converges faster after doing 1).
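The fix described above amounts to using "common random numbers": fixing the random stream so each objective evaluation becomes a deterministic function of the parameters. A minimal sketch of the idea, in Python for illustration only (the thread's code is SAS/IML, and the quadratic objective here is made up):

```python
import random

def noisy_objective(theta, seed=None):
    """Monte Carlo estimate of E[(X - theta)^2] for X ~ N(0, 1).

    Passing a fixed seed replays the same random stream on every
    call, so the estimate is a deterministic function of theta."""
    rng = random.Random(seed)  # seed=None gives a fresh stream each call
    draws = [rng.gauss(0.0, 1.0) for _ in range(1000)]
    return sum((x - theta) ** 2 for x in draws) / len(draws)

# With a fixed seed, repeated evaluations at the same theta agree exactly,
# which is what a derivative-free optimizer like Nelder-Mead needs.
f1 = noisy_objective(1.0, seed=42)
f2 = noisy_objective(1.0, seed=42)
assert f1 == f2
```

Without the fixed seed, each evaluation returns a different noisy estimate and the simplex chases noise instead of the underlying surface.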


I cannot answer 1 or 2 because I don't know what your program is supposed to do and the program contains no comments.

Here's how I read your program:

At each iteration, z starts at the origin. The inner loop does sort of a random walk in 3D until z drifts sufficiently far from the origin.

The variable rl counts how many steps it took to drift away.

The rest of the program repeats that process many times and records the average time for z to drift away.

You are trying to find the parameters that minimize this average time, where the parameters are:

hr[1] determines how far z can drift before you determine that the drift is "far enough"

hr[2] is some kind of mixing parameter that determines how z drifts (?)

hr[3] is some scaling factor (?)

Is any of this close to the truth? Could you provide a high-level summary that describes what the program is trying to accomplish?
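Under the reading above, one replicate of the process can be sketched in Python (a hypothetical translation for illustration only; the actual code is SAS/IML, and hr is 0-indexed here):

```python
import random

def run_length(hr, shift=0.5, p=3, rng=None):
    """Count the random steps until z, repeatedly pulled toward
    random points x, drifts far from the origin (ssq(z) > hr[0])."""
    rng = rng or random.Random(1)
    z = [0.0] * p
    steps = 0
    while sum(v * v for v in z) <= hr[0]:
        x = [rng.gauss(0.0, 1.0) for _ in range(p)]
        x[0] += shift                            # shift in the first coordinate
        ei = sum((a - b) ** 2 for a, b in zip(x, z)) ** 0.5
        if ei > hr[2]:
            w = (ei - (1 - hr[1]) * hr[2]) / ei  # partial pull toward x
        else:
            w = hr[1]
        z = [(1 - w) * zi + w * xi for zi, xi in zip(z, x)]
        steps += 1
    return steps

# Averaging run_length over many replicates gives the quantity being optimized.
```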


Yes, this is very close to the truth. I am trying to find the optimal parameter vector (hr), where the objective function represents the average of a certain function over 100,000 runs. This function counts the number of generated values until a certain metric, ti, exceeds the parameter hr[1]. I have one constraint that takes a form very close to the objective function. I have added comments to the program. I really appreciate your help.

A. Emara

proc iml;

start ARL global(hr);
   shift = 0.5;
   * p is the number of variables;
   p = 2;
   * nruns is the number of simulation runs;
   nruns = 100000;
   arlv = 0;
   * start the simulation;
   do reps = 1 to nruns;
      rl = 0;
      * initially z is set as a p x 1 vector of zeros;
      z = 0 // j(p-1, 1, 0);
      * ti carries the sum of squares of z, i.e., each element of z is squared and then summed;
      ti = ssq(z);
      do until (ti > hr[1]);
         * generate p univariate standard normal variables;
         x = normal(repeat(-1, p, 1));
         * add the shift for the out-of-control case;
         sh = shift // j(p-1, 1, 0);
         x = x + sh;
         * compute the norm of the error;
         ei = sqrt(ssq(x - z));
         if ei > hr[3] then w = (ei - (1 - hr[2])*hr[3]) / ei;
         else w = hr[2];
         z = (1 - w)*z + w*x;
         ti = ssq(z);
         rl = rl + 1;
      end;
      arlv = arlv + rl;
   end;
   arlr = arlv / nruns;
   bb = arlr;
   return(bb);
finish;

start ARL0(hry) global(hr);
   hr = hry;
   shift = 0;
   bb0 = j(1, 1, 0.);
   p = 2;
   * nruns is the number of simulation runs;
   nruns = 100000;
   arlv = 0;
   * start the simulation;
   do reps = 1 to nruns;
      rl = 0;
      * initially z is set as a p x 1 vector of zeros;
      z = 0 // j(p-1, 1, 0);
      * ti carries the sum of squares of z, i.e., each element of z is squared and then summed;
      ti = ssq(z);
      do until (ti > hr[1]);
         * generate p univariate standard normal variables;
         x = normal(repeat(-1, p, 1));
         * add the shift for the out-of-control case;
         sh = shift // j(p-1, 1, 0);
         x = x + sh;
         * compute the norm of the error;
         ei = sqrt(ssq(x - z));
         if ei > hr[3] then w = (ei - (1 - hr[2])*hr[3]) / ei;
         else w = hr[2];
         z = (1 - w)*z + w*x;
         ti = ssq(z);
         rl = rl + 1;
      end;
      arlv = arlv + rl;
   end;
   arlr = arlv / nruns;
   bb0[1] = arlr - 200;
   return(bb0);
finish;

con = { 0 0 0,
        . 1 . };
* initial vector;
hr = {0.8856 0.17 3.85};
optn = j(1, 11, .); optn[2] = 5; optn[10] = 1; optn[11] = 0;
CALL NLPNMS(rc, hrres, "ARL", hr) blc=con opt=optn nlc="ARL0";
quit;


OK. I think I can solve your problem without running any simulations. An optimal solution is hr = {0, anything, 0}.

I might have some of the details wrong, but the basic idea is that you are trying to minimize the number of random motions required for a particle to leave a certain neighborhood. hr[1] describes the width of the neighborhood, so the smaller hr[1] is, the sooner the particle will leave. If hr[3]=0 then ei=norm(x)>0 always exceeds hr[3], which forces w=1, so z jumps straight to x.

If you plug in those values, the constraint module attains the minimum value of -199, which means it takes exactly one step to leave the neighborhood. The value of hr[2] doesn't matter, which probably explains why you are having convergence problems: the objective function does not have a unique minimum where the gradient is zero in the three-dimensional parameter space.

This information might cause you to reformulate your model of the process. To aid in future attempts, here are some tips:

1) Start with x and z being scalar quantities. When you can solve the 1D problem, then let x and z be 2D vectors.

2) Always debug the objective and constraint modules before moving on to optimization.

3) Evaluate the objective function on a grid of parameter values to try to get a feel for the objective function and to obtain a good initial guess to feed to the NLP procedures.
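Tip 3 can be sketched as follows, in Python for illustration (a stand-in simulation objective with a fixed seed; the grid values are arbitrary, not recommendations):

```python
import itertools
import random

def objective(hr, nruns=100, seed=1):
    """Average run length of a toy version of the simulated process
    (p=2, shift=0.5), with the random stream fixed so the surface is
    deterministic across the grid."""
    rng = random.Random(seed)
    total = 0
    for _ in range(nruns):
        z = [0.0, 0.0]
        steps = 0
        while z[0] * z[0] + z[1] * z[1] <= hr[0]:
            x = [rng.gauss(0.0, 1.0) + 0.5, rng.gauss(0.0, 1.0)]
            ei = ((x[0] - z[0]) ** 2 + (x[1] - z[1]) ** 2) ** 0.5
            w = (ei - (1 - hr[1]) * hr[2]) / ei if ei > hr[2] else hr[1]
            z = [(1 - w) * z[0] + w * x[0], (1 - w) * z[1] + w * x[1]]
            steps += 1
        total += steps
    return total / nruns

# Coarse grid over hr[0] and hr[2], with hr[1] held at 0.17.
grid = list(itertools.product([0.25, 0.5, 1.0], [1.0, 2.0, 3.0]))
surface = {(a, c): objective([a, 0.17, c]) for a, c in grid}
best = min(surface, key=surface.get)  # promising initial guess for the NLP call
```

Inspecting `surface` also reveals flat directions (like the hr[2] indifference noted above) before any optimizer is run.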


After you posted your code, I tried running your objective function a few times. As one would expect, it gives slightly different answers when run with the same input multiple times, because the random number stream is different each time the objective function or nonlinear constraint function is evaluated.

This is easily fixed. Instead of calling the NORMAL function in your objective and constraint functions, use the newer RANDGEN call. You can then reset the random number stream to the same state by calling RANDSEED at the beginning of each objective/constraint module call, guaranteeing that the same random number stream is used on every evaluation. I also noticed that a couple of operations can be hoisted out of your inner loop to marginally improve performance. I made these changes to your program, and it ran in about 3 hours on my PC; the revised code is pasted below. Give it a try and see if you get better results. I also recommend trying a few different seeds in the RANDSEED call to see how sensitive your results are to the choice of random seed.

proc iml;

start ARL(hr);
   call randseed(1000, 1);   /* reset the stream so every call sees the same random numbers */
   shift = 0.5;
   * p is the number of variables;
   p = 2;
   * nruns is the number of simulation runs;
   nruns = 100000;
   arlv = 0;
   sh = shift // j(p-1, 1, 0);   /* sh unchanged throughout the simulation */
   x = j(p, 1);                  /* set up x with proper dimensions, just once */
   * start the simulation;
   do reps = 1 to nruns;
      rl = 0;
      * initially z is set as a p x 1 vector of zeros;
      z = 0 // j(p-1, 1, 0);
      * ti carries the sum of squares of z, i.e., each element of z is squared and then summed;
      ti = ssq(z);
      do until (ti > hr[1]);
         * generate p univariate standard normal variables;
         call randgen(x, 'normal');
         * add the shift for the out-of-control case;
         x = x + sh;
         * compute the norm of the error;
         ei = sqrt(ssq(x - z));
         if ei > hr[3] then w = (ei - (1 - hr[2])*hr[3]) / ei;
         else w = hr[2];
         z = (1 - w)*z + w*x;
         ti = ssq(z);
         rl = rl + 1;
      end;
      arlv = arlv + rl;
   end;
   arlr = arlv / nruns;
   bb = arlr;
   return(bb);
finish;

start ARL0(hry) global(hr);
   hr = hry;
   call randseed(1000, 1);
   shift = 0;
   bb0 = j(1, 1, 0.);
   p = 2;
   sh = shift // j(p-1, 1, 0);   /* sh unchanged throughout the simulation */
   x = j(p, 1);                  /* set up x with proper dimensions, just once */
   * nruns is the number of simulation runs;
   nruns = 100000;
   arlv = 0;
   * start the simulation;
   do reps = 1 to nruns;
      rl = 0;
      * initially z is set as a p x 1 vector of zeros;
      z = 0 // j(p-1, 1, 0);
      * ti carries the sum of squares of z, i.e., each element of z is squared and then summed;
      ti = ssq(z);
      do until (ti > hr[1]);
         * generate p univariate standard normal variables;
         call randgen(x, 'normal');
         * add the shift for the out-of-control case;
         x = x + sh;
         * compute the norm of the error;
         ei = sqrt(ssq(x - z));
         if ei > hr[3] then w = (ei - (1 - hr[2])*hr[3]) / ei;
         else w = hr[2];
         z = (1 - w)*z + w*x;
         ti = ssq(z);
         rl = rl + 1;
      end;
      arlv = arlv + rl;
   end;
   arlr = arlv / nruns;
   bb0[1] = arlr - 200;
   return(bb0);
finish;

con = { 0 0 0,
        . 1 . };
* initial vector;
hr = {0.8856 0.17 3.85};
optn = j(1, 11, .); optn[2] = 5; optn[10] = 1; optn[11] = 0;
CALL NLPNMS(rc, hrres, "ARL", hr) blc=con opt=optn nlc="ARL0";
print hrres;
quit;
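The suggested seed-sensitivity check can be illustrated in Python with a toy Monte Carlo estimate (E[X^2] = 1 for X ~ N(0,1), not the SAS program above); the spread of the estimate across seeds shows how much simulation noise remains at a given number of runs:

```python
import random
import statistics

def mc_estimate(seed, nruns=2000):
    """Monte Carlo estimate of E[X^2] = 1 for X ~ N(0, 1)."""
    rng = random.Random(seed)
    return statistics.fmean(rng.gauss(0.0, 1.0) ** 2 for _ in range(nruns))

estimates = [mc_estimate(s) for s in (1, 2, 3, 4, 5)]
spread = max(estimates) - min(estimates)
# If the spread is large relative to the optimizer's convergence
# tolerance, increase nruns before trusting the reported optimum.
```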


Thank you so much, but are these functions available in SAS 9.0?

A. Emara
