ZMX
Fluorite | Level 6

Hello SAS community,

 

I asked this question a few weeks ago in a different SAS community and could not get any help. I'll try to be more specific here and hope someone can help me out.

 

I set up a mixed logit model with random coefficients in PROC IML. A simplified version of the model is:

Y = b1*X1 + b2*X2 + eps, where

b1 = c*Z1 + E1

b2 = d*Z2 + E2

Y is latent; E1 and E2 are errors drawn from a bivariate normal distribution with zero mean and an unknown covariance matrix. I used a Cholesky decomposition such that:

E1 = t*e1

E2 = r*e1 + m*e2

where e1 and e2 each come from a standard normal distribution that I generate outside the optimization procedure. I then use simulated maximum likelihood (simulated over R draws of e1, e2) to estimate the parameters c, d, t, r, m. At the end, using the parameters of the lower-triangular factor (i.e. t, r, m), I can recover the covariance matrix of the bivariate normal distribution of E1 and E2.
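For context, here is a quick numerical check of that setup (a sketch in Python rather than IML, with made-up values of t, r, m): the lower-triangular parameters reproduce the covariance matrix of (E1, E2), and flipping signs leaves it unchanged.

```python
# Build Sigma = L * L` from the lower-triangular parameters t, r, m
# (illustrative values, not estimates from the model above).
def sigma_from_cholesky(t, r, m):
    # L = [[t, 0], [r, m]]  =>  L L` = [[t^2, t*r], [t*r, r^2 + m^2]]
    return [[t * t, t * r], [t * r, r * r + m * m]]

sigma_pos = sigma_from_cholesky(0.8, -0.3, 0.5)
sigma_neg = sigma_from_cholesky(-0.8, 0.3, 0.5)  # signs of t and r flipped

# Both parameter vectors imply the identical covariance matrix,
# which is exactly the identification problem described above.
print(sigma_pos == sigma_neg)  # True
```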

 

However, there are issues with the Cholesky factorization. 1. The parameters are not identified: t, r, and m can each flip sign and still give the same covariance matrix. 2. In fact, I get a negative estimate for t (and sometimes for m or r), even though t is effectively a standard deviation and should be positive (so I'm not sure whether this result is acceptable?). I tried adding constraints (i.e. t > 0 and m > 0), but when the unconstrained estimate is negative this does not work: the estimate just hits the bound and equals zero. I could also impose arbitrary restrictions such as t = 1 and m = 1, but I would like to try other approaches before adding more restrictions to the model.

 

Therefore, I would like to know whether there is a different way to estimate c, d, and the covariance-matrix parameters (the variances and the correlation coefficient of E1 and E2). More specifically, is it possible to generate the bivariate normal errors (E1 and E2, with R draws) inside the optimization procedure, so that at every step the program draws a new set of bivariate normal errors using the current/updated covariance-matrix parameters? I would also appreciate any comments and suggestions about the Cholesky-factorization issues.
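One standard workaround for estimating the covariance directly (not SAS-specific; sketched here in Python with hypothetical parameter names a1, a2, g) is to optimize over unconstrained parameters and map them to a valid covariance matrix, e.g. log standard deviations plus a tanh-transformed correlation. Every unconstrained parameter vector then corresponds to exactly one valid (variance, correlation) triple, so the sign and identification problems disappear:

```python
import math

def cov_from_unconstrained(a1, a2, g):
    """Map unconstrained reals (a1, a2, g) to a valid 2x2 covariance matrix.
    s1, s2 are forced positive via exp(); rho is forced into (-1, 1) via tanh()."""
    s1, s2 = math.exp(a1), math.exp(a2)
    rho = math.tanh(g)
    return [[s1 * s1, rho * s1 * s2],
            [rho * s1 * s2, s2 * s2]]

sigma = cov_from_unconstrained(-0.2, 0.1, 0.7)

# The implied matrix is always symmetric positive definite,
# so the optimizer can search over (a1, a2, g) without any bounds.
det = sigma[0][0] * sigma[1][1] - sigma[0][1] * sigma[1][0]
print(det > 0)  # True
```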

 

 

 

4 REPLIES
Rick_SAS
SAS Super FREQ

You originally posted this in the "New SAS User" Community, so welcome to SAS.

 

In your message, you say you have a mixed logistic model, but the formula you post looks closer to a linear mixed model. The general formulation of a mixed model is 

Y = X*beta + Z*gamma + eps

where gamma ~ MVN(0, G) and eps ~ MVN(0, R).
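As a concrete illustration of that formulation (a Python sketch with made-up dimensions and coefficients, not the poster's model), one can simulate a response from Y = X*beta + Z*gamma + eps:

```python
import random

random.seed(0)
n, p, q = 5, 2, 2                      # observations, fixed effects, random effects
beta = [1.0, -0.5]                     # fixed-effect coefficients (hypothetical)
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
Z = [[random.gauss(0, 1) for _ in range(q)] for _ in range(n)]
gamma = [random.gauss(0, 0.3) for _ in range(q)]   # gamma ~ MVN(0, G), here G = 0.09*I
eps = [random.gauss(0, 1.0) for _ in range(n)]     # eps ~ MVN(0, R), here R = I

# Y = X*beta + Z*gamma + eps, computed row by row
Y = [sum(x * b for x, b in zip(X[i], beta))
     + sum(z * g for z, g in zip(Z[i], gamma))
     + eps[i]
     for i in range(n)]
print(len(Y) == n)  # one response per observation
```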

 

What is the goal of your program? If your goal is to fit the fixed and random effects, you can use PROC MIXED for linear models or PROC GLIMMIX for a logit mixed model. If your goal is to fit some more exotic mixed model, you can use PROC NLMIXED to define your own model.

 

If you are writing the program for educational purposes (e.g., a school project), then please post your IML code. Regarding your questions about the Cholesky factor, my comments are:

1. The estimation for a mixed model is typically parameterized in terms of the variance-covariance matrix, not the Cholesky factor. You can constrain the optimization so that the diagonal elements (the variances) are positive.

2. By definition, the Cholesky factor of a positive semidefinite matrix is the (unique!) lower-triangular matrix with positive elements on the diagonal for which L*L` = Sigma. In your scheme (solving for t, r, m), if you obtain negative values for t or m, then I think you can just change a few signs to get the usual Cholesky factor. For example, if t < 0 then the Cholesky factor is {-t 0, -r m}. If m < 0 then the factor is {t 0, r -m}.
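To make the sign flips concrete (a small Python sketch, not part of the original IML program): given any estimated (t, r, m), you can normalize to the unique Cholesky factor with a positive diagonal, and the product L*L` is unchanged:

```python
def normalize_cholesky(t, r, m):
    """Flip signs so the diagonal of L = [[t, 0], [r, m]] is positive.
    If t < 0, negate the first column (t and r); if m < 0, negate m."""
    if t < 0:
        t, r = -t, -r
    if m < 0:
        m = -m
    return t, r, m

def lltrans(t, r, m):
    # L * L` for the 2x2 lower-triangular factor
    return [[t * t, t * r], [t * r, r * r + m * m]]

t, r, m = -0.8, 0.3, -0.5                # estimates with "wrong" signs
tn, rn, mn = normalize_cholesky(t, r, m)
print((tn, rn, mn))                      # (0.8, -0.3, 0.5)
print(lltrans(t, r, m) == lltrans(tn, rn, mn))  # True: same covariance matrix
```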

 

Again, unless you have a compelling reason to program the estimation in SAS/IML, I would recommend using one of the specialized SAS procedures for mixed models.

ZMX
Fluorite | Level 6

Thanks Rick for the quick reply. I'm unfamiliar with PROC NLMIXED. I used IML for a similar model (without the simulation part) because it allows me to build my model in matrix language with all the details I need to include. It might not be the only way to do it, though. I've put the main parts of the program here, with certain details omitted, to briefly show what I'm trying to do. I hope it is clear.

 

I have a multinomial logistic structural model. I estimate the model parameters and then use them for predictions. In this model, each individual faces a set of choices (say 10). The data are cross-sectional.

 

* Program within Proc IML, to set up the likelihood ;

 

start DM(b) global ( X1, X2, e1, e2, e3, D, … );

 

 * rows are number of individuals;

* D is the number of draws for the random component;

 

do d=1 to D;

 

* Using Cholesky decomposition;

E1[,d]=t#e1[,d] ;

E2[,d]=r#e1[,d] + m#e2[,d]; 

 

* e1 and e2 are standard normal;

 

* Random coefficients are;

b1[,1]=Z1*c + E1[,d] ;

b2[,1]=Z2*d + E2[,d];

 

* where E1 and E2~MVN(0,G) and are the random component;

 

* The main model is: 

Yk[,1]= b1#log(X1k) + b2#log(X2k) + b3#log(X1k)##2 + … + epsk;   /* k=1,…,10, one equation per choice */

 *epsk~type I extreme value therefore changes in eps across choices ~ logistic;

 

* The logistic probability for each individual and each choice is ;

prob(choice 1)=exp(Y1) / sum[exp(Yk)];

.

.

.

prob(choice 10)=exp(Y10) / sum[exp(Yk)];
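Those choice probabilities are just a softmax over the latent utilities. A quick Python sketch (with made-up utility values; subtracting the max utility before exponentiating is a common guard against overflow in exp()):

```python
import math

def choice_probs(utilities):
    """Multinomial logit probabilities: exp(Yk) / sum_j exp(Yj).
    Subtracting the max utility first avoids overflow in exp()."""
    mx = max(utilities)
    expu = [math.exp(u - mx) for u in utilities]
    total = sum(expu)
    return [e / total for e in expu]

# Hypothetical latent utilities for one individual and 10 choices.
Y = [1.2, 0.4, -0.5, 2.1, 0.0, 1.7, -1.0, 0.3, 0.9, 0.6]
p = choice_probs(Y)
print(abs(sum(p) - 1.0) < 1e-12)  # True: the probabilities sum to 1
```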

 

 

* then the likelihood for each individual in the matrix is, 

L[,1] = s # prob(choice k) ;

 

* s is the observed choice for each individual;

 * L is conditional on a given random draw;

 

/* L from each round (total of D rounds) are all added together (sum_L) to eventually get the approximate integral over the random component (SL) */

 

sum_L [ ,1]=sum_L [ , 1]+ L [ ,1];

 

* end the D loop;

end;

 

SL=sum_L / D;

 

* The log-simulated likelihood is;

SLL=sum(log(SL));

 return (SLL);

 

finish DM;
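The overall structure of the simulated-likelihood loop above can be sketched as follows (Python, a toy one-parameter example with hypothetical data, and with the draws generated once outside the objective, mirroring the IML setup):

```python
import math, random

random.seed(1)
D = 200                                             # number of simulation draws
draws = [random.gauss(0.0, 1.0) for _ in range(D)]  # generated ONCE, outside the objective
data = [0.3, -0.1, 0.5, 0.2]                        # toy observations (hypothetical)

def simulated_loglik(sigma):
    """Average the conditional likelihood over the fixed draws, then sum
    the logs across individuals: SLL = sum_i log( mean_d L_i(d) )."""
    sll = 0.0
    for y in data:
        acc = 0.0
        for e in draws:
            mu = sigma * e                           # random component for this draw
            # conditional likelihood of y given the draw (normal density, sd = 1)
            acc += math.exp(-0.5 * (y - mu) ** 2) / math.sqrt(2 * math.pi)
        sll += math.log(acc / D)                     # SL = sum_L / D, then log
    return sll

print(simulated_loglik(0.5) < 0.0)  # True: a log of densities below 1 is negative
```

Keeping `draws` fixed across objective evaluations is what makes the simulated likelihood a smooth function of the parameters, so a gradient-based optimizer like NLPNRR can work with it.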

 

Parameters to estimate are c, d, b3, t, r, m (G). I used NLPNRR for optimization. As I mentioned earlier, instead of using the Cholesky factor, I would like to use a different method that directly estimates the covariance matrix of E1 and E2 along with the other parameters. Is there a way to do this within this program (IML), or should it be transferred to PROC NLMIXED?

About your second comment on the Cholesky estimates: as you mentioned, the resulting covariance matrix is by definition positive semidefinite, regardless of the signs of t, r, m. But what I don't quite understand is how I should justify changing the signs of the estimates.

ZMX
Fluorite | Level 6
Oh, I think I see your point about changing the signs of the Cholesky factor.
Rick_SAS
SAS Super FREQ

If I understand what you are doing, you are using simulation from the MVN distribution to generate D instances of the random effects. Then you are using NLPNRR to fit an MLE model for each instance and averaging the result.

 

That's not how mixed models are fit. As far as I know, your attempt to evaluate the "log-simulated likelihood" is not a valid estimation method. (And if you are re-generating the random effects between consecutive fits, this process probably won't converge.)

 

To answer your programming question, you can use the BLC option in the NLPNRR function to set the Boundary and Linear Constraints (BLC) for the parameters. In particular, you can specify that certain parameters must be positive.  For an example of using BLC, see "Step 2: Set up constraints" in the article "Maximum likelihood estimation in SAS/IML."

 

For an example that compares the PROC IML approach to MLE with the PROC NLMIXED approach, see "Two ways to compute maximum likelihood estimates in SAS."  I believe you can actually use PROC GLIMMIX for this problem, but I am not an expert on GLIMMIX and I am not 100% clear about what model you are trying to fit.

 

I suggest you read about "Types of logistic (or logit) models that can be fit using SAS" and focus on the models that include random effects. You might also want to browse the excellent papers by Robin High, in case they are relevant for you.
