04-17-2014 05:03 AM

Hello,

I am running a PROC IML (SAS 9.0) optimization program where the objective function and constraint are computed through two modules. The modules return values that are based on certain draws from the normal distribution. **My questions are:**

1- When I run the optimization (NLPNMS), does it compute the objective function value using different random draws at different iterations? I need to fix the random draws across optimization iterations, so that at each iteration of my optimization the only thing that varies is the parameter estimates, not the generated x values. Can this be achieved?

2- Is there any way to reduce the time the program takes to give results? Currently it takes more than one day!

Here is the constraint module (note that the objective function module is very similar to the constraint module):

```sas
start AYA(hry) global(hr);
   hr = hry;
   sss = 0;
   bb0 = j(1, 1, 0.);
   p = 3;
   nruns = 100000;
   vvv = 0;
   do reps = 1 to nruns;
      rl = 0;
      z = 0 // j(p-1, 1, 0);
      ti = ssq(z);
      do until (ti > hr[1]);
         * the following is the random part I am asking about;
         x = normal(repeat(-1, p, 1));
         sh = sss // j(p-1, 1, 0);
         x = x + sh;
         ei = sqrt(ssq(x - z));
         if ei > hr[3] then w = (ei - (1 - hr[2])*hr[3]) / ei;
         else w = hr[2];
         z = (1 - w)*z + w*x;
         ti = ssq(z);
         rl = rl + 1;
      end;
      vvv = vvv + rl;
   end;
   rrr = vvv / nruns;
   bb0[1] = rrr - 200;
   return(bb0);
finish;

con = { 0 0 0,
        . 1 . };
hr = {1, 0.1, 2};
optn = j(1, 11, .);  optn[2] = 5;  optn[10] = 1;  optn[11] = 0;
CALL NLPNMS(rc, hrres, "OBJECTIVE", hr) blc=con opt=optn nlc="AYA";
quit;
```

Thanks,

A. Emara

Posted in reply to Emara

04-17-2014 10:59 AM

In answer to 1): as it stands, each time the optimization algorithm evaluates the objective or constraint, the NORMAL function will return a different value. I highly doubt that you can get a valid optimization this way. You need to put x in the GLOBAL clause and compute it once, before you start the optimization. I assume you will need to run the optimization over and over again for different samples from the normal distribution?

In answer to 2, you may find the optimization converges faster after doing 1).
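A minimal sketch of that idea in IML (the `maxsteps` cap, the `col` bookkeeping variable, and the small sizes are illustrative assumptions, not part of the original program):

```sas
proc iml;
* generate the entire table of normal variates ONCE, before the optimization;
* maxsteps is a hypothetical upper bound on the draws needed per run;
p = 3;  nruns = 1000;  maxsteps = 500;
xall = normal( j(p, nruns*maxsteps, -1) );   /* fixed for the whole optimization */

start AYA(hry) global(hr, xall);
   hr = hry;
   col = 1;                      /* restart the fixed stream on every call */
   /* ... inside the inner loop, replace  x = normal(repeat(-1,p,1));  with:
      x = xall[ , col];  col = col + 1;                                     */
finish;
quit;
```

Note that the full table may not fit in memory for 100,000 runs; you may need to scale the stream down or page it from a data set.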

Posted in reply to Emara

04-17-2014 11:00 AM

I cannot answer 1 or 2 because I don't know what your program is supposed to do and the program contains no comments.

Here's how I read your program:

At each iteration, z starts at the origin. The inner loop does sort of a random walk in 3D until z drifts sufficiently far from the origin.

The variable rl counts how many steps it took to drift away.

The rest of the program repeats that process many times and records the average time for z to drift away.

You are trying to find the parameters that minimize this average time, where the parameters are:

hr[1] determines how far z can drift before you determine that the drift is "far enough"

hr[2] is some kind of mixing parameter that determines how z drifts (?)

hr[3] is some scaling factor (?)

Is any of this close to the truth? Could you provide a high-level summary that describes what the program is trying to accomplish?

Posted in reply to Rick_SAS

04-17-2014 06:05 PM

Yes, this is very close to the truth. I am trying to find the optimal parameter vector (hr), where the objective function represents the average of a certain function over 100,000 runs. This function counts the number of generated values until a certain metric "ti" exceeds the parameter hr[1]. I have one constraint which takes a form very close to the objective function's form. I have added comments to the program. I really appreciate your help.

A. Emara

```sas
proc iml;

start ARL global(hr);
   shift = 0.5;
   * p is the number of variables;
   p = 2;
   * nruns is the number of simulation runs;
   nruns = 100000;
   arlv = 0;
   * starting the simulation;
   do reps = 1 to nruns;
      rl = 0;
      * initially z is set as a p x 1 vector of zeros;
      z = 0 // j(p-1, 1, 0);
      * ti carries the sum of squares of z, i.e., each element of z is squared and then we take the sum;
      ti = ssq(z);
      do until (ti > hr[1]);
         * generating p univariate standard normal variables;
         x = normal(repeat(-1, p, 1));
         * adding the shift for the out-of-control case;
         sh = shift // j(p-1, 1, 0);
         x = x + sh;
         * computing the norm of the error;
         ei = sqrt(ssq(x - z));
         if ei > hr[3] then w = (ei - (1 - hr[2])*hr[3]) / ei;
         else w = hr[2];
         z = (1 - w)*z + w*x;
         ti = ssq(z);
         rl = rl + 1;
      end;
      arlv = arlv + rl;
   end;
   arlr = arlv / nruns;
   bb = arlr;
   return(bb);
finish;

start ARL0(hry) global(hr);
   hr = hry;
   shift = 0;
   bb0 = j(1, 1, 0.);
   p = 2;
   * nruns is the number of simulation runs;
   nruns = 100000;
   arlv = 0;
   * starting the simulation;
   do reps = 1 to nruns;
      rl = 0;
      * initially z is set as a p x 1 vector of zeros;
      z = 0 // j(p-1, 1, 0);
      * ti carries the sum of squares of z, i.e., each element of z is squared and then we take the sum;
      ti = ssq(z);
      do until (ti > hr[1]);
         * generating p univariate standard normal variables;
         x = normal(repeat(-1, p, 1));
         * adding the shift for the out-of-control case;
         sh = shift // j(p-1, 1, 0);
         x = x + sh;
         * computing the norm of the error;
         ei = sqrt(ssq(x - z));
         if ei > hr[3] then w = (ei - (1 - hr[2])*hr[3]) / ei;
         else w = hr[2];
         z = (1 - w)*z + w*x;
         ti = ssq(z);
         rl = rl + 1;
      end;
      arlv = arlv + rl;
   end;
   arlr = arlv / nruns;
   bb0[1] = arlr - 200;
   return(bb0);
finish;

con = { 0 0 0,
        . 1 . };
* initial vector;
hr = {0.8856 0.17 3.85};
optn = j(1, 11, .);  optn[2] = 5;  optn[10] = 1;  optn[11] = 0;
CALL NLPNMS(rc, hrres, "ARL", hr) blc=con opt=optn nlc="ARL0";
quit;
```

Posted in reply to Emara

04-18-2014 09:31 AM

OK. I think I can solve your problem without running any simulations. An optimal solution is hr = {0, anything, 0}.

I might have some of the details wrong, but the basic idea is that you are trying to minimize the number of random motions required for a particle to leave a certain neighborhood. hr[1] describes the width of the neighborhood, so the smaller hr[1] is, the sooner the particle will leave. If hr[3]=0 then ei=norm(x)>0, which forces w=1, so z=x after a single step.

If you plug in those values, then the minimum value of -199 is achieved, which means it takes exactly one step to leave the neighborhood. The value of hr[2] doesn't matter, which probably explains why you are having convergence problems. The objective function does not have a unique minimum where the gradient is zero in the three-dimensional parameter space.

This information might cause you to reformulate your model of the process. To aid in future attempts, here are some tips:

1) Start with x and z being scalar quantities. When you can solve the 1D problem, then let x and z be 2D vectors.

2) Always debug the objective and constraint modules before moving on to optimization.

3) Evaluate the objective function on a grid of parameter values to try to get a feel for the objective function and to obtain a good initial guess to feed to the NLP procedures.
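For tip 3, a grid evaluation might look something like this (the grid ranges are arbitrary illustrations, and it assumes an objective module of the form ARL(hr) is already defined in the session):

```sas
* sketch: coarse grid search for a good initial guess;
h1 = do(0.5, 1.5, 0.25);      * candidate hr[1] values;
h2 = do(0.05, 0.25, 0.05);    * candidate hr[2] values;
h3 = do(2, 5, 1);             * candidate hr[3] values;
best = .;
do i = 1 to ncol(h1);
   do j = 1 to ncol(h2);
      do k = 1 to ncol(h3);
         f = ARL( h1[i] || h2[j] || h3[k] );
         if best = . | f < best then do;
            best = f;
            bestHr = h1[i] || h2[j] || h3[k];
         end;
      end;
   end;
end;
print bestHr best;
```

With nruns=100000 this is expensive, so reduce nruns while exploring the grid and increase it again for the final optimization.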

Posted in reply to Rick_SAS

04-18-2014 12:16 PM

Thank you so much, but actually I have a constraint which restricts my minimum value; I am not free to choose any parameter values that minimize the objective function. I need ARL0 >= 200. The syntax gives answers that seem reasonable, but it takes a very long time. Hutch suggested putting x in a GLOBAL clause. Would this help, and how could it be done? Could I fix the random draws across optimization iterations, so that at every guess of the solution the same generated data is used? I hope I have clarified my point.

Posted in reply to Emara

04-18-2014 12:56 PM

I have given you several tips and suggestions. I need to attend to some pressing work-related issues, so I'll let others offer further advice. Good luck.

Posted in reply to Emara

04-20-2014 12:30 PM

After you posted your code, I tried running your objective function a few times. As one would expect, it gives slightly different answers if you run it with the same input multiple times, because the random number stream is different each time the objective function or non-linear constraint function is evaluated. This can be easily fixed. Instead of calling the NORMAL function each time for your objective and constraint functions, use the newer RANDGEN call. You can then re-set the random number seed to the same value by calling RANDSEED at the beginning of each objective/constraint module call, guaranteeing that you use the same random number stream each time. I also noticed that you could hoist a couple of operations outside your inner loop to marginally improve performance. I made these changes in your program, and it ran in about 3 hrs on my PC. I pasted it in below. Give it a try and see if you get better results. I also recommend that you try some different seeds in the RANDSEED call to see how sensitive your results are to the choice of random seed.

```sas
proc iml;

start ARL(hr);
   call randseed(1000, 1);
   shift = 0.5;
   * p is the number of variables;
   p = 2;
   * nruns is the number of simulation runs;
   nruns = 100000;
   arlv = 0;
   sh = shift // j(p-1, 1, 0);   /* sh unchanged throughout the simulation */
   x = j(p, 1);                  /* set up x with proper dimensions, just once */
   * starting the simulation;
   do reps = 1 to nruns;
      rl = 0;
      * initially z is set as a p x 1 vector of zeros;
      z = 0 // j(p-1, 1, 0);
      * ti carries the sum of squares of z, i.e., each element of z is squared and then we take the sum;
      ti = ssq(z);
      do until (ti > hr[1]);
         * generating p univariate standard normal variables;
         call randgen(x, 'normal');
         * adding the shift for the out-of-control case;
         x = x + sh;
         * computing the norm of the error;
         ei = sqrt(ssq(x - z));
         if ei > hr[3] then w = (ei - (1 - hr[2])*hr[3]) / ei;
         else w = hr[2];
         z = (1 - w)*z + w*x;
         ti = ssq(z);
         rl = rl + 1;
      end;
      arlv = arlv + rl;
   end;
   arlr = arlv / nruns;
   bb = arlr;
   return(bb);
finish;

start ARL0(hry) global(hr);
   hr = hry;
   call randseed(1000, 1);
   shift = 0;
   bb0 = j(1, 1, 0.);
   p = 2;
   sh = shift // j(p-1, 1, 0);   /* sh unchanged throughout the simulation */
   x = j(p, 1);                  /* set up x with proper dimensions, just once */
   * nruns is the number of simulation runs;
   nruns = 100000;
   arlv = 0;
   * starting the simulation;
   do reps = 1 to nruns;
      rl = 0;
      * initially z is set as a p x 1 vector of zeros;
      z = 0 // j(p-1, 1, 0);
      * ti carries the sum of squares of z, i.e., each element of z is squared and then we take the sum;
      ti = ssq(z);
      do until (ti > hr[1]);
         * generating p univariate standard normal variables;
         call randgen(x, 'normal');
         * adding the shift for the out-of-control case;
         x = x + sh;
         * computing the norm of the error;
         ei = sqrt(ssq(x - z));
         if ei > hr[3] then w = (ei - (1 - hr[2])*hr[3]) / ei;
         else w = hr[2];
         z = (1 - w)*z + w*x;
         ti = ssq(z);
         rl = rl + 1;
      end;
      arlv = arlv + rl;
   end;
   arlr = arlv / nruns;
   bb0[1] = arlr - 200;
   return(bb0);
finish;

con = { 0 0 0,
        . 1 . };
* initial vector;
hr = {0.8856 0.17 3.85};
optn = j(1, 11, .);  optn[2] = 5;  optn[10] = 1;  optn[11] = 0;
CALL NLPNMS(rc, hrres, "ARL", hr) blc=con opt=optn nlc="ARL0";
print hrres;
quit;
```

Posted in reply to Hutch_sas

04-20-2014 04:42 PM

Thank you so much, but are these functions available in SAS 9.0?

A. Emara

Posted in reply to Emara

04-21-2014 09:38 AM

Oops, I missed the fact that you are still on 9.0. RANDGEN and RANDSEED were introduced in SAS 9.1. I don't believe the old random number generation functions allow arbitrary re-initialization of the random number stream. To get the same sampling, you will have to create a table of random variates at the beginning of your program and re-use that stream on each call. If you need more samples than can be stored in memory, you might have to store them in a data set and read them in as needed.
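For example (the data set name, seed, and stream length here are all illustrative):

```sas
* 9.0-compatible sketch: write a fixed stream of normals to a data set once;
data randtab;
   seed = 1000;                 * fixed seed gives a reproducible stream;
   do i = 1 to 500000;          * illustrative cap on the total draws needed;
      x = rannor(seed);
      output;
   end;
   keep x;
run;

proc iml;
use randtab;  read all var {x} into xstream;  close randtab;

start ARL(hr) global(xstream);
   pos = 1;                     * restart the fixed stream on every call;
   /* ... in the inner loop, replace the NORMAL call with:
      x = xstream[pos:pos+p-1];  pos = pos + p;                  */
finish;
quit;
```

This gives the same effect as RANDSEED/RANDGEN: every evaluation of the objective or constraint sees exactly the same random numbers.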