Contributor
Posts: 21

# Maximum Likelihood Estimation: NLP/ NLMIXED/ IML

I am running the same simulation in SAS/IML and NLMIXED to estimate model parameters for a specified log-likelihood function. While NLMIXED runs without issues, SAS/IML sometimes stops with an error. Can anyone state/list the main differences between NLP/NLMIXED/IML in finding the MLE? Ordinarily, I would assume there are none, but there must be some differences that would cause SAS/IML to stop while NLMIXED runs without error or warning. For security reasons I am not at liberty to post the code. However, I am hoping a general discussion may help to answer this question. I would really prefer to use SAS/IML, as it is easier to code in a matrix language.

Regards,

Raphael

SAS Super FREQ
Posts: 4,174

## Re: Maximum Likelihood Estimation: NLP/ NLMIXED/ IML

When IML "stops in error," what does the Log say is the cause of the error?  Also, what NLP algorithm are you using for IML, and what algorithm and options are you using for NLMIXED?

Contributor
Posts: 21

## Re: Maximum Likelihood Estimation: NLP/ NLMIXED/ IML

Here is a snippet of the code:

I know why the error occurs, but why doesn't it also occur in NLMIXED? I don't want to constrain my coefficients. How can I solve this issue?

SAS/IML error:

```
WARNING: Invalid argument resulted in missing value result.
 count     : number of occurrences is 1000
 operation : ## at line 1295 column 5
 operands  : y, u

   y   1000 rows      1 col     (numeric)
   u      1 row       1 col     (numeric)
         -0.180567

 statement : ASSIGN at line 1295 column 5
 traceback : module BC at line 1295 column 5
             module OBJFUN at line 1322 column 5
```

```sas
/* Box-Cox transformation */
start BC(y,u);
   f = (y##u - 1)/u;           /* <-- line 1295 */
   return(f);
finish BC;

/* Bickel-Doksum transformation */
start BD(y,u);
   f = (sign(y)#abs(y)##u - 1)/u;
   return(f);
finish BD;

/* log-likelihood */
start OBJFUN(parm) global(yy, xx);
   b  = parm[1:2];
   u1 = parm[3];
   u2 = parm[4];
   v  = parm[5];
   xb = xx*b;
   w  = BD(BC(yy,u1),u2);
   mu = BD(BC(xb,u1),u2);      /* <-- line 1322 */
   n  = nrow(yy);
   f  = -n/2*log(v) - 1/2/v*sum((w-mu)##2)
        + (u2-1)*sum(log(abs(BC(yy,u1)))) + u1*sum(log(yy));
   return(f);
finish OBJFUN;

xx = xx || J(nrow(xx),1);      /* add intercept term */

/* b1-b0-u1-u2-v constraint matrix */
con = { .  .  .  1e-6  1e-6,   /* lower bounds */
        .  .  .  .     . };    /* upper bounds */
x0  = {1 6.5 0.1 1 1};         /* starting values */
opt = {1 0};                   /* maximization - suppress results */

call nlpnra(rc, mle, "OBJFUN", x0, opt, con);
```

PROC NLMIXED:

```sas
proc nlmixed data=sample tech=newrap maxit=1000;
   parms b1=6.5 b2=1 v=1 u1=0.1 u2=1;
   xb  = b1 + b2*x;
   y0  = (y**u1 - 1)/u1;                   /* Box-Cox */
   xb0 = (xb**u1 - 1)/u1;
   y_  = (sign(y0)*abs(y0)**u2 - 1)/u2;    /* Bickel-Doksum */
   xb_ = (sign(xb0)*abs(xb0)**u2 - 1)/u2;
   llik = -1/2*log(v) - 1/2/v*(y_ - xb_)**2
          + (u2 - 1)*log(abs(y0)) + u1*log(y);
   model zzz ~ general(llik);
run;
```

SAS Super FREQ
Posts: 4,174

## Re: Maximum Likelihood Estimation: NLP/ NLMIXED/ IML

I suspect that the error also occurs for PROC NLMIXED, but that the procedure just silently continues, whereas PROC IML is stopping after it records 1000 instances of the error.

Obviously I don't know your problem as well as you do, but shouldn't u1 be constrained to be positive, the way u2 and v are?

The error is that xb has negative values, so in the Box-Cox transformation the expression xb##u1 is not mathematically valid: a negative number raised to a non-integer power is undefined over the reals. If you don't constrain u1, then you need to modify the BC function so that it can handle negative values for the local variable y.
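The failure mode is easy to reproduce outside SAS. Here is a small illustrative sketch (in Python/NumPy as a stand-in for IML; the data values are made up, except for the exponent taken from the error log) showing that the naive power form produces a missing value (NaN) for a negative base, while the signed-power form used in the BD module does not:

```python
import numpy as np

def bc(y, u):
    """Naive Box-Cox: (y**u - 1)/u. Fails (NaN) for y < 0 with non-integer u."""
    return (np.power(y, u) - 1.0) / u

def bc_signed(y, u):
    """Signed-power (Bickel-Doksum-style) variant that tolerates y < 0."""
    return (np.sign(y) * np.abs(y) ** u - 1.0) / u

y = np.array([2.0, -0.5, 3.0])   # xb can go negative during optimization
u = -0.180567                    # the exponent value shown in the IML log

with np.errstate(invalid="ignore"):
    print(bc(y, u))        # second entry is nan -> IML reports a missing value
print(bc_signed(y, u))     # finite for all entries
```

This is only a sketch of the arithmetic, not the SAS internals, but it explains the WARNING: the optimizer visits a negative xb, the power operation returns a missing value, and IML counts an invalid-argument occurrence.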

Contributor
Posts: 21

## Re: Maximum Likelihood Estimation: NLP/ NLMIXED/ IML

Modifying the BC function to handle negative values is the approach I took, since putting a constraint u1>0 limits the types of transformations available. However, I soon realized that this approach has drawbacks, because this is a transform-both-sides model: we must apply the same transformation to both the response and the regression function to preserve the relationship. I began by wanting to simultaneously estimate c, u1, and u2. That is, mu = BD(BC(xb + c,u1),u2).

It turns out that applying the two-parameter Box-Cox transformation to the transform-both-sides (TBS) regression model is not straightforward. The main reason is that the constraint min(xb) + c > 0 is data dependent. This in itself is not an issue until you realize that at each iteration of the algorithm the coefficients can take on positive as well as negative values. Hence I see no way to specify the constraint min(xb) + c > 0, since the sign of xb may change at each iteration.

It does seem easier to fix c > -min(x) and then estimate u1 and u2. As you rightly suggested some time ago in another discussion, we can fix c such that c > -min(xb). It is well known that as c moves away from -min(xb) the transformation becomes less effective.
