A problem in PROC NLMIXED

05-28-2015 10:13 AM

Hello Everybody,

I have a problem with my code in PROC NLMIXED. I have seen similar questions online and their answers, but none of them worked for me!

My dataset is simulated, not real. The data have a one-level structure identified by cluster_id. The outcome is a time-to-event variable with a Weibull distribution (here called survival_t), and there are 4 independent covariates.
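For context, clustered Weibull survival data of this kind can be generated along the following lines. This is only an illustrative sketch in Python/NumPy, not the poster's actual generator: the variable names (cluster_id, censorship, survival_t) mirror the SAS code below, the true parameter values are the ones quoted later in the thread, and the sample sizes and censoring mechanism are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_clusters, per_cluster = 100, 10

# True parameter values quoted later in the thread:
# rho=5, lambda=2 (so b0 = log(2)), b1=2, b2=1.5, b3=.4, b4=-1, sigma2=3.
rho = 5.0                                    # Weibull shape
sigma = np.sqrt(3.0)                         # SD of the normal frailty
b0, b = np.log(2.0), np.array([2.0, 1.5, 0.4, -1.0])

W = rng.normal(0.0, sigma, n_clusters)               # cluster-level random effect
cluster_id = np.repeat(np.arange(n_clusters), per_cluster)
X = rng.normal(size=(n_clusters * per_cluster, 4))   # 4 independent covariates

eta = b0 + X @ b + W[cluster_id]
lam = np.exp(eta)

# Inverse-CDF draw for S(t) = exp(-(lam*t)**rho):
# with E ~ Exponential(1), t = E**(1/rho) / lam has exactly this survivor function.
E = rng.exponential(1.0, size=lam.shape)
survival_t = E ** (1.0 / rho) / lam

# Simple (assumed) independent censoring mechanism.
censor_time = rng.exponential(2.0 * np.median(survival_t), size=lam.shape)
censorship = (survival_t <= censor_time).astype(int)  # 1 = event observed
survival_t = np.minimum(survival_t, censor_time)
```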

The code is:

```sas
proc nlmixed data=data qpoints=30 noad;
   bounds rho > 0;
   parms rho=5 b1=0 b2=0 b3=0 b4=0 logsig=1;
   eta = b0 + b1*covariate1 + b2*covariate2 + b3*covariate3 + b4*covariate4 + W;
   lambda = exp(eta);
   G_t = exp(-(lambda*survival_t)**rho);
   g = rho*lambda_c*((lambda*survival_t)**(rho-1))*G_t;
   ll = (censorship=1)*log(g) + (censorship=0)*log(G_t);
   model survival_t ~ general(ll);
   random W ~ normal(0, exp(2*logsig)) subject=cluster_id;
run;
```

There is no error, but I get this NOTE:

NOTE: Execution error for observation 1.

The answers I have found online are:

1) Convert the code to a DATA step to see what the problem is: I did so and found no problem.

2) Check whether the outcome contains any zeros: it does not contain any zero values (you can check the attached data).

3) Make sure the log is never taken of a zero or negative value: I checked, and it isn't.

I really appreciate any help.

Accepted Solutions


05-28-2015 11:06 AM

Probably one of the calls to the EXP function is overflowing. For example, if ETA or the expression for G_t is moderately large (greater than constant('logbig')=709), then the EXP function overflows during the first pass through the data.

It is also a good idea to give initial values for ALL parameters. You've omitted b0 and lambda_c, which get set to 1 by the PROC.

Looking at the expression in the G_t assignment, I suggest making the coefficients for ETA small, and the power RHO small. Probably RHO is the culprit. The following initial conditions seem to work:

```sas
parms rho=2 b1=0 b2=0 b3=0 b4=0 logsig=1 b0=0 lambda_c=0.01;
```
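As a side note, the constant('logbig') bound mentioned above is just the log of the largest representable double, so the overflow behavior is easy to reproduce outside SAS. A minimal Python sketch (both languages use IEEE-754 doubles):

```python
import math
import sys

# The largest finite double is about 1.797e308, so exp() overflows once its
# argument exceeds log(DBL_MAX) ~ 709.78 -- the value SAS reports as
# constant('logbig').
log_big = math.log(sys.float_info.max)
print(round(log_big, 2))       # 709.78

print(math.exp(709.0))         # still finite
try:
    math.exp(710.0)            # past the threshold: overflow
except OverflowError as exc:
    print("overflow:", exc)
```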

All Replies



05-28-2015 11:40 AM

Thanks Rick. You're wonderful and your answers are always correct.


05-30-2015 02:18 PM

Hi Rick. I have a question related to this code. Your answer was correct, by the way, and I can now see results (previously I couldn't get estimates). However, the parameter estimates are wrong (especially the regression parameters). Note that I'm doing a simulation study, so I know the exact values of the parameters. I've generated one-level survival data with a Weibull baseline and normal random effects.

The true values of parameters are:

rho=5 lambda=2 b1=2 b2=1.5 b3=.4 b4=-1 sigma2=3;

But the estimates are: rho=4.1388, lambda=exp(-0.1536), b1= , b2=0.3353, b3=-0.06213, b4=-0.2256, sigma2=exp(2*(-0.8578)).

The strange part is that the code works for SAS datasets and another online dataset (so the code is correct). On the other hand, a semiparametric frailty model and other models work satisfactorily on my simulated data (so the simulation is done correctly). BUT this code (a parametric frailty model with a Weibull baseline and normal frailties) does not work on my simulated data. What do you think? I'm sure I'm missing something here, but whatever I try, I can't see it...

The code is copied below and the data is attached.

```sas
proc nlmixed data=Survival_MultiLevel qpoints=5 noad;
   bounds rho > 0;
   parms rho=1 b1=0 b2=0 b3=.4 b4=-1 logsig=.6 b0=1;
   eta = b0 + b1*covariate1 + b2*covariate2 + b3*covariate3 + b4*covariate4 + W; /* b0 is the log of the Weibull scale parameter */
   lambda = exp(eta);
   S_t = exp(-(lambda*survival_t)**rho);
   h = rho*lambda*((lambda*survival_t)**(rho-1))*S_t;
   ll = (censorship=1)*log(h) + (censorship=0)*log(S_t);
   model survival_t ~ general(ll);
   random W ~ normal(0, exp(2*logsig)) subject=cluster_id;
run;
```


05-31-2015 02:00 AM

Dear Rick,

Regarding the question I posted a few hours ago, I think I have solved the issue myself. When I write the log-likelihood in another way, I get precise results. The formula I'm using is L = h(t)^censorship * S(t). It seems this way of writing the likelihood reduces the computational burden...

```sas
proc nlmixed data=data qpoints=5 noad;
   bounds rho > 0;
   parms rho=1 b1=0 b2=0 b3=.4 b4=-1 logsig=.6 b0=1;
   eta = b0 + b1*covariate1 + b2*covariate2 + b3*covariate3 + b4*covariate4 + W; /* b0 is the log of the Weibull scale parameter */
   lambda = exp(eta);
   log_S_t = -lambda*(survival_t**rho);
   log_h = (censorship=1)*(eta + log(rho) + (rho-1)*log(survival_t));
   ll = log_h + log_S_t;
   model survival_t ~ general(ll);
   random W ~ normal(0, exp(2*logsig)) subject=cluster_id;
run;
```
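The numerical point here can be illustrated outside SAS. The sketch below (in Python; the function names are mine) compares taking logs after forming h and S against accumulating everything on the log scale, which is what the revised NLMIXED code does. It uses the parameterization of this last post, S(t) = exp(-lambda*t**rho), which differs slightly from the (lambda*t)**rho form in the earlier code.

```python
import math

def loglik_naive(eta, rho, t, censored):
    """Form h and S first, then take logs -- like the earlier NLMIXED code."""
    lam = math.exp(eta)
    S = math.exp(-lam * t**rho)        # underflows to 0.0 when lam*t**rho is large
    h = rho * lam * t**(rho - 1)
    return censored * math.log(h) + math.log(S)   # log(0.0) raises an error

def loglik_stable(eta, rho, t, censored):
    """Stay on the log scale throughout -- like the revised NLMIXED code."""
    log_S = -math.exp(eta) * t**rho
    log_h = censored * (eta + math.log(rho) + (rho - 1) * math.log(t))
    return log_h + log_S

# Moderate values: both versions agree to machine precision.
print(loglik_naive(0.5, 2.0, 1.5, 1))
print(loglik_stable(0.5, 2.0, 1.5, 1))

# Large linear predictor: S underflows, so the naive version dies on log(0),
# while the log-scale version still returns a finite (very negative) value.
try:
    loglik_naive(8.0, 2.0, 1.5, 1)
except ValueError as exc:
    print("naive version fails:", exc)
print(loglik_stable(8.0, 2.0, 1.5, 1))
```

During optimization NLMIXED evaluates the likelihood at many extreme trial values of the parameters, so even when the final estimates are moderate, an intermediate step can hit exactly this failure mode.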


06-01-2015 09:30 AM

I am going to wager on sample size as the driver here. While the point estimates don't look very good, what do the interval estimates look like? Are the true values you simulated from covered by the intervals? If so, then it may be just a matter of increasing the number of simulated cases to get adequate estimates.

It could well be something else that is not apparent right away to me, but this would be my first cut at the behavior.

Steve Denham


06-01-2015 09:59 AM

Thanks