PROC NLMIXED GENERAL Statement Issues

02-06-2017 02:36 AM

Hi all,

I'm fitting a nonlinear mixed-effects model in NLMIXED to a set of longitudinal birth weight data. Because the data is strictly positive, I wanted to model the response with a log-normal distribution. As you may be aware, the log-normal distribution is not a default option in the MODEL statement. So I went ahead and computed the log-likelihood and used the GENERAL statement. Below is the code I used to fit the model:

```
proc nlmixed data=weight gconv=0;
  parms b0=1 b1=.6 b2=1 var0=.02 cov10=-.01 var1=.02
        cov20=-.001 cov21=-.04 var2=.4 s2e=.001;
  beta0 = b0 + u0;
  beta1 = b1 + u1;
  beta2 = b2 + u2;
  predv = beta0 + beta1*(1 - exp(-beta2*Years));
  pi = arcos(-1);
  ll = (-1/2)*(log(2*pi) + log(s2e) + (log(Years - predv)**2)/s2e) + Years;
  model Weightkg ~ general(ll);
  random u0 u1 u2 ~ normal([0,0,0], [var0,cov10,var1,cov20,cov21,var2]) subject=ID;
run;
```

When I run the code, this is the error message I receive:

NOTE: Execution error for observation 1

I've looked around, and a possible reason for this error message is that my starting values may be off. I used starting values from fitting the model with Bayesian estimation in PROC MCMC, but to no avail. Has anyone dealt with this issue and can offer a solution? Thanks!


Posted in reply to tbanh

02-06-2017 06:51 AM

I think you have not specified your log-likelihood function correctly. For example, the sign of the quadratic term should be negative. An easier way would be to create a variable inside NLMIXED that is the log of your observations, and then use the built-in likelihood for normally distributed data. The two approaches (log-normal with a GENERAL log-likelihood, and the built-in normal likelihood) are equivalent, as I can illustrate with this example:

```
data simulation;
  do i=1 to 100;
    y=exp(rand('normal',0,1));
    output;
  end;
run;

*method 1: built-in normal likelihood on log(y);
proc nlmixed data=simulation;
  parm mu 0.5;
  z=log(y);
  model z ~ normal(mu,1);
run;

*method 2: hand-coded log-normal log-likelihood;
proc nlmixed data=simulation;
  parm mu 0;
  pi = arcos(-1);
  s=1;
  ll = -log(2*pi)/2 - log(s) - ((log(y) - mu)/(2*s))**2 - log(y);
  model y ~ general(ll);
run;
```


Posted in reply to JacobSimonsen

02-06-2017 10:30 AM

Hi Jacob,

Well, they both converge to the same point estimate, but the standard errors and log-likelihood values don't match up. My brain is sort of fried today, so let me know if you have any ideas why the error estimates differ so drastically.

See, I would pick method 2 based on the information criteria, but would end up trading that off for wider confidence intervals. That makes me think there may be something missing somewhere.

Steve Denham


Posted in reply to SteveDenham

02-06-2017 10:45 AM

I made a mistake, sorry. I scaled by 1/2 inside the squared term; that should be done outside:

```
proc nlmixed data=simulation;
  parm mu 0;
  pi = arcos(-1);
  s = 1;
  ll = -log(2*pi)/2 - log(s) - ((log(y) - mu)/s)**2/2 - log(y);
  model y ~ general(ll);
run;
```
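The corrected formula can be sanity-checked numerically. This minimal Python sketch (assuming NumPy and SciPy are available; the parameter values are arbitrary, not from the thread's data) compares the hand-coded log-likelihood to a library log-normal density, and to the normal density of log(y) plus the change-of-variables term -log(y):

```python
import numpy as np
from scipy.stats import lognorm, norm

mu, s = 0.3, 1.2
y = np.array([0.5, 1.0, 2.7, 10.0])

# Hand-coded log-normal log-likelihood, as in the corrected GENERAL statement
ll_hand = (-np.log(2 * np.pi) / 2 - np.log(s)
           - ((np.log(y) - mu) / s) ** 2 / 2 - np.log(y))

# Library log-normal density: SciPy parameterizes lognorm by shape s and scale exp(mu)
ll_lib = lognorm.logpdf(y, s=s, scale=np.exp(mu))

# Equivalent view: normal density of log(y) plus the Jacobian term -log(y)
ll_norm = norm.logpdf(np.log(y), loc=mu, scale=s) - np.log(y)

assert np.allclose(ll_hand, ll_lib)
assert np.allclose(ll_hand, ll_norm)
```

The `-log(y)` term is exactly what distinguishes the GENERAL formulation (method 2) from fitting log(y) with the built-in normal likelihood (method 1): it shifts the log-likelihood and information criteria by a constant but leaves the point estimates unchanged.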


Posted in reply to JacobSimonsen

02-06-2017 10:47 AM

Such a mistake just emphasizes why it's better to use the built-in functionality whenever possible :-)


Posted in reply to JacobSimonsen

02-06-2017 12:47 PM

Which is why I don't use NLMIXED as much as I should.

Anybody have a spare Gompertz likelihood lying around...?

Steve Denham


Posted in reply to JacobSimonsen

02-06-2017 01:36 PM

Hi Jacob,

Thanks for the reply. I tried your code but I still received this message:

NOTE: Execution error for observation 1

It seems that the likelihood isn't the problem?


Posted in reply to tbanh

02-07-2017 09:16 AM

Please show the code you ran.

I am confused about why Years is used in the definition of the predictor (predv), while in the likelihood function Years seems to be the observation you are trying to predict.


Posted in reply to JacobSimonsen

02-07-2017 02:46 PM

Hi Jacob,

Thanks for pointing out that egregious error. Yes, I mistakenly put Years as my dependent variable when I should have put Weight. The likelihood function works now.

However, I'm running into a new problem: the model will not converge because I'm providing poor starting values. Do you have any advice on how to fix this?


Posted in reply to tbanh

02-08-2017 03:28 AM

It can be quite difficult to find good starting values. I cannot give a single answer that always works, but here are a few suggestions:

First estimate the parameters in a fixed-effects model, then use those estimates as starting values for the mean parameters in the random-effects model.

A Bayesian version of the random-effects model can also give you good starting values. You can set up PROC MCMC almost exactly as you have set up PROC NLMIXED; MCMC methods are often very robust to starting values, and the posterior means (or modes) can afterwards be used as starting values in PROC NLMIXED.

It can also help to parameterize the model so that the parameters are expected to have the same order of magnitude; otherwise convergence problems can easily occur.
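The first suggestion can be sketched outside SAS as well. This minimal Python example (using simulated data and hypothetical parameter values, not the thread's birth-weight data) fits the fixed-effects mean curve b0 + b1*(1 - exp(-b2*t)) by nonlinear least squares; the resulting estimates would then seed the PARMS statement of the random-effects model:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Hypothetical growth-curve parameters (for illustration only)
b0_true, b1_true, b2_true = 3.0, 2.0, 0.8

def mean_curve(t, b0, b1, b2):
    # Fixed-effects mean: asymptotic growth toward b0 + b1
    return b0 + b1 * (1 - np.exp(-b2 * t))

t = np.tile(np.linspace(0.1, 5, 20), 30)      # 30 subjects, 20 visits each
y = mean_curve(t, b0_true, b1_true, b2_true) + rng.normal(0, 0.1, t.size)

# Step 1: fit the fixed-effects model only, ignoring random effects;
# these estimates become the starting values for the mixed model
start, _ = curve_fit(mean_curve, t, y, p0=[1.0, 1.0, 0.5])
print(start)  # estimates near the true (3.0, 2.0, 0.8)
```

The same two-step idea applies in SAS: run NLMIXED once without the RANDOM statement, then pass the resulting estimates into the PARMS statement of the full model.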