MahdiA110
Obsidian | Level 7

Hello Community,

I am fitting a joint marginalized random effects model (JMREM) and I have received these messages:

NOTE: Convergence criterion (GCONV=1E-8) satisfied.
NOTE: At least one element of the gradient is greater than 1e-3.

The results for the second response in my joint model do NOT seem reasonable. All of its estimated p-values are larger than 0.9!

I do not know what is causing this outcome.

I ran the following code:

proc nlmixed data=original noad;
   parms a1=-2.2133 a2=-1.2087 Time_a=-0.0178 group_a=0.0885 time_ICU_a=0.0551
         LASIXICU_a=-0.0046 CVP_a=0.0313 BE_a=-0.0764 Hgb_24h_after_a=-0.0004
         b1=-4.5876 b2=-2.1619 Time_b=-0.0286 group_b=0.4285 time_ICU_b=0.1637
         LASIXICU_b=0.0324 CVP_b=0.1275 BE_b=-0.1555 Hgb_24h_after_b=-0.0008
         s1=0.2 s2=1.96 s12=-0.107;

   * y1 = PH_ordinal;
   if repeat in (1,3,5,7,9,11,13) then do;
      eta1 = Time_a*Time + group_a*group + time_ICU_a*time_ICU + LASIXICU_a*LASIXICU
             + CVP_a*CVP + BE_a*BE + Hgb_24h_after_a*Hgb_24h_after;
      delta11 = sqrt(1+s1**2) * probit( exp(a1+eta1)/(1+exp(a1+eta1)) );
      delta12 = sqrt(1+s1**2) * probit( exp(a2+eta1)/(1+exp(a2+eta1)) );
      if ordinal=0 then lik = probnorm(delta11+u1);
      if ordinal=1 then lik = 1 - probnorm(delta12+u1);
      if (lik > 1e-10) then loglik = log(lik);
      else loglik = 1e100;
   end;

   * y2 = Lactate_ordinal;
   if repeat in (2,4,6,8,10,12,14) then do;
      eta2 = Time_b*Time + group_b*group + time_ICU_b*time_ICU + LASIXICU_b*LASIXICU
             + CVP_b*CVP + BE_b*BE + Hgb_24h_after_b*Hgb_24h_after;
      delta21 = sqrt(1+s2**2) * probit( exp(b1+eta2)/(1+exp(b1+eta2)) );
      delta22 = sqrt(1+s2**2) * probit( exp(b2+eta2)/(1+exp(b2+eta2)) );
      if ordinal=0 then lik = probnorm(delta21+u2);
      if ordinal=1 then lik = 1 - probnorm(delta22+u2);
      if (lik > 1e-10) then loglik = log(lik);
      else loglik = 1e100;
   end;

   model ordinal ~ general(loglik);
   random u1 u2 ~ normal([0,0], [s1**2, s12, s2**2]) subject=id;
   estimate "Rho" s12/(s1*s2);
run;

The parameter estimates from this code are in the attachment.

I hope someone can help me out.

Thanks!

 

Accepted Solution
Rick_SAS
SAS Super FREQ

Please consider that it might not be a software issue but a model issue; see my previous comments. Reasons a model might not fit the data include rounded data values or a misspecified model.

 

It is also possible that the situation can be improved by a better choice of initial parameter values (or maybe you need to BOUND a parameter). If you are convinced that the model is correct and fits the data, you can create a grid of candidate initial values.
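For example, when the PARMS statement lists several values for a parameter, NLMIXED evaluates the objective function over the whole grid and starts the optimization from the best point. A minimal sketch (the ranges here are arbitrary; in practice you would center them on your GEE estimates):

proc nlmixed data=original;
   parms a1=-3 to -1 by 0.5,   /* each list defines candidate grid values */
         a2=-2 to 0 by 0.5,    /* NLMIXED starts at the best combination  */
         s1=0.1 0.2 0.5,
         s2=1 2 3,
         s12=0;                /* list the remaining parameters here too  */
   /* ...same likelihood, MODEL, and RANDOM statements as in the original program... */
run;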

 

Lastly, I will mention that your model, with more than 20 parameters, is very complicated. Try a simpler model with fewer parameters. Sometimes you can use a series of "reduced" models to help understand how to fit the data. See the article on using a reduced (fixed-effect) model.
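To make the "reduced model" idea concrete, here is a rough sketch of a fixed-effect-only fit of the first response (the two covariates kept are an arbitrary subset; with no random effect, the u1 term and the sqrt(1+s1**2) scaling drop out):

proc nlmixed data=original;
   parms a1=-2.2 a2=-1.2 Time_a=0 group_a=0;   /* reduced parameter set */
   where repeat in (1,3,5,7,9,11,13);          /* first response only   */
   eta1 = Time_a*Time + group_a*group;
   delta11 = probit( exp(a1+eta1)/(1+exp(a1+eta1)) );
   delta12 = probit( exp(a2+eta1)/(1+exp(a2+eta1)) );
   if ordinal=0 then lik = probnorm(delta11);
   if ordinal=1 then lik = 1 - probnorm(delta12);
   loglik = log(lik);     /* any ordinal=2 records would still need a likelihood */
   model ordinal ~ general(loglik);
run;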


16 Replies
ballardw
Super User

Reason: the values in your data and the code you submitted.

 

We don't have your data, or any description of your data that might point to issues.

 

Why, specifically, is it hard to believe that any, all, or some of those variables are not significant? You have not provided any reason for that belief.

MahdiA110
Obsidian | Level 7

Dear ballardw,

Thank you for your response. I put a sample of my data in the attachment. Please take a look at it.

And to answer your question: it is hard to believe because all 9 parameters have p-values higher than 0.9, without exception!

Please let me know what is wrong with the submitted code.

 

 

ballardw
Super User

Data step code is the preferred way to share data. Then we don't get into conversion issues, and some questions can be answered from the variable properties without examining the actual data.

 

The instructions here: https://communities.sas.com/t5/SAS-Communities-Library/How-to-create-a-data-step-version-of-your-dat... show how to turn an existing SAS data set into data step code that can be pasted into a forum code box using the <> icon, or attached as text, so that we can see exactly what you have and test code against it.
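The result is a short program like this sketch (the variable names and values here are only placeholders for whatever your data set contains):

data have;
   input id1 id repeat Hgb_24h_after LASIXICU group Time BE CVP Time_num Index1 :$16. ordinal;
   datalines;
28 4 13 10.9 10 1 7 3.8 12 24 Lactate_ordinal 1
24 4 5 10.9 10 1 3 -5.8 8 4 Lactate_ordinal 2
;
run;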

 

What your result may be telling you is that some of the things you thought were predictors aren't good ones. Or it may be that the scale of some of the values is incorrect for the code you used.

MahdiA110
Obsidian | Level 7

Thanks for your guidance.

I changed the format of my data as you asked. The .txt version is also available in the attachment.

I chose the variables using GEE tests, then added them to my JMREM model in every possible way.

I mean: one variable, two variables, and so on. But the second response in the joint model was still meaningless.

Also, the two tables "Covariance Matrix of Parameter Estimates" and "Correlation Matrix of Parameter Estimates" are not displayed.

Please let me know if you have any ideas.

 

Obs  id1  id  repeat  Hgb_24h_after  LASIXICU  group  Time    BE  CVP  Time_num  Index1           ordinal
 45   28   4      13           10.9        10      1     7   3.8   12        24  Lactate_ordinal        1
 46   24   4       5           10.9        10      1     3  -5.8    8         4  Lactate_ordinal        2
 47   23   4       3           10.9        10      1     2  -5.1   10         2  Lactate_ordinal        2
 48   25   4       7           10.9        10      1     4  -4.5   11         6  Lactate_ordinal        2
 49   22   4       1           10.9        10      1     1  -2.6   12         0  Lactate_ordinal        2
 50   28   4      14           10.9        10      1     7   3.8   12        24  PH_ordinal             0

SteveDenham
Jade | Level 19

@MahdiA110 

I suggest the following:

First, break the joint estimation into two separate NLMIXED runs. See if the results for the second part are still unreasonable. If so, there are likely data issues of some kind: either some of the data are misspecified, or the response is so flat that a simple mean provides just as much information. If instead the results change drastically, so that you start to see results similar to your GEE models, then use the parameter estimates obtained as starting values in your PARMS statement, and set s12 to zero there. This should make the first iteration of the joint NLMIXED more closely resemble the separate fits.
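For the second response, that separate fit would reuse the statements from your program. A sketch (s1 and s12 drop out, u2 gets its own RANDOM statement, and the ordinal=2 gap discussed below is left as-is):

proc nlmixed data=original;
   parms b1=-4.5876 b2=-2.1619 Time_b=-0.0286 group_b=0.4285 time_ICU_b=0.1637
         LASIXICU_b=0.0324 CVP_b=0.1275 BE_b=-0.1555 Hgb_24h_after_b=-0.0008
         s2=1.96;
   where repeat in (2,4,6,8,10,12,14);   /* second response only */
   eta2 = Time_b*Time + group_b*group + time_ICU_b*time_ICU + LASIXICU_b*LASIXICU
          + CVP_b*CVP + BE_b*BE + Hgb_24h_after_b*Hgb_24h_after;
   delta21 = sqrt(1+s2**2) * probit( exp(b1+eta2)/(1+exp(b1+eta2)) );
   delta22 = sqrt(1+s2**2) * probit( exp(b2+eta2)/(1+exp(b2+eta2)) );
   if ordinal=0 then lik = probnorm(delta21+u2);
   if ordinal=1 then lik = 1 - probnorm(delta22+u2);
   loglik = log(lik);
   model ordinal ~ general(loglik);
   random u2 ~ normal(0, s2**2) subject=id;
run;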

 

One thing I noticed is that the 'b' parameters (those with the high p-values) have not moved from the initial values provided in the PARMS statement. That is a pretty sure sign that the optimization is doing almost nothing to change those values. I also see that the random effects (u1 and u2) enter through the PROBNORM function, and that the variance associated with u2 and the covariance estimate are likewise unchanged from their initial values. It seems that everything associated with the b parameters stays fixed, including the random effects. Consider making the b parameters functions of the a parameters, such as

b1 = a1 + deva1, providing a starting value for deva1 that does not exactly add to a1 to give your current starting value for b1. That pushes the optimization for the a section into the b section as well. You would need to do this for all of the b parameters. You'll also need to write some ESTIMATE statements to get all of the 'b'-associated estimates, standard errors, etc., but this will let you see what might be going on with the deviation variables.
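A sketch of that reparameterization for the two intercepts (the deva1 and deva2 starting values are arbitrary, chosen so that a1 + deva1 lands near, but not exactly on, the old b1):

/* in the PARMS statement, replace b1 and b2 with deviation parameters */
parms a1=-2.2133 a2=-1.2087 deva1=-2.3 deva2=-0.9 /* ...other parameters... */ ;
b1 = a1 + deva1;            /* b cutpoints as deviations from the a cutpoints */
b2 = a2 + deva2;
/* ...the rest of the likelihood code is unchanged... */
estimate "b1" a1 + deva1;   /* recover b-scale estimates and standard errors  */
estimate "b2" a2 + deva2;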

 

Now for a possible coding issue: you have records where ordinal=2 but never calculate a lik or loglik value for them. I don't know how those records contributed to your GEE-obtained starting values, but if they did, that should be considered. I am also not sure if (or how) those records are even incorporated into the optimization.
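If ordinal really takes the values 0, 1, and 2, then one possibility (an assumption about what you intended, not something I can verify from here) is the usual cumulative form, with the middle category as a difference of two normal probabilities. For the first response it would look like:

if ordinal=0 then lik = probnorm(delta11+u1);
else if ordinal=1 then lik = probnorm(delta12+u1) - probnorm(delta11+u1);  /* middle category */
else if ordinal=2 then lik = 1 - probnorm(delta12+u1);                     /* top category    */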

 

And a final caveat: I can't guarantee that any of these will work; they are just things I have tried. Oh, and try a different optimization technique via the TECH= option (for example, TECH=NEWRAP, since the NLMIXED default is already the dual quasi-Newton QUANEW) if it looks like things are almost working.
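The switch is just an option on the PROC statement:

proc nlmixed data=original tech=newrap;   /* TECH= selects the optimizer */
   /* ...same statements as before... */
run;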

 

SteveDenham

MahdiA110
Obsidian | Level 7
 
Dear Dr SteveDenham,
I really appreciate your response.
I reviewed your opinion and the opinions of some other statistics professors. Now I have doubts about my data format, and I have started to restructure it again.
There is one column, ordinal, which stacks the two ordinal responses; Time, which shows the measurement number; repeat, which is similar to Time; and id, the patient ID.
Am I right?
Could you send me a sample data set, or a link to one? I'm really confused about it.
Any help in this regard would be very kind of you!
 
SteveDenham
Jade | Level 19

I think we have some communication issues, as I cannot figure out what you are doing with the new dataset you are describing. My comments were directed at an analysis that would use the dataset you shared. If that has now changed, please share the new dataset. I can't provide a dataset that meets your request, as I don't know what was measured, what the variables mean, or, most importantly, what question you are trying to answer.

 

SteveDenham

MahdiA110
Obsidian | Level 7

@SteveDenham 

@Rick_SAS 

@ballardw 

Dear Dr SteveDenham,

Thank you for your help and attention.

I sent you the new data, code, and results in the attachment.

As you can see, I estimated the parameters using PROC GEE. Then I used PROC NLMIXED to fit the joint model of the two responses.

It is clear that something is wrong! I do NOT know what it is.

Is it the parameter estimates? The structure of the data? Or something else?

I need your experience to solve this problem. Any help in this regard would be very kind of you!

 

SteveDenham
Jade | Level 19

I am not certain, but it is possible that your problem arises from these lines of code:

 

if (lik > 1e-10) then loglik = log(lik);

else loglik = 1e100;

 

This looks like a situation where, if the likelihood is very small, the log likelihood is defined as a large positive value. For likelihoods less than 1, the log likelihood is negative; this code certainly violates that. As a result, the values of -2 log likelihood in the Fit Statistics are all on the order of -2 * 1e103. That implies to me that the model should be giving you something like -2000 as the -2 log likelihood, but you have blown it up by a factor of a googol. Consider running the NLMIXED code without this redefinition of the log likelihood.

 

SteveDenham

Rick_SAS
SAS Super FREQ

The article uses

else loglik = -1e100;

Your code is using positive 1E100.

 

Regardless, listen to Steve. Not everything you see in print is a good idea. This trick can be useful when trying to bound a parameter (instead of using the BOUNDS stmt), but I don't recommend it for the log likelihood. You are trying to maximize the LL (NLMIXED minimizes the negative LL). If the LL is unbounded below, which is what the code is trying to address, that tells you something about the model: namely, that it doesn't fit the data or was implemented incorrectly. By artificially giving it a floor, you can run into convergence problems.
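For reference, a BOUNDS statement for the two standard-deviation parameters in this program might look like the following (whether you need it at all is a judgment call):

bounds s1 >= 1e-6, s2 >= 1e-6;   /* keep the SD parameters away from zero */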

 

On a larger note, we don't know whether that ELSE statement has any impact on the analysis. The ELSE stmt might never get executed. You ought to be able to look at the Iteration History table to see what the LL is during the last iteration.

SteveDenham
Jade | Level 19

Hi @Rick_SAS 

 

I am pretty sure that the piece of code we are discussing does pass through the ELSE branch - it is probably the only way the overall -2 loglikelihood could be something like -2*1e103.

 

I will wager a small sum of money that if the ELSE branch is changed to loglik = -1e100, the code will give reasonable results. Still, a BOUNDS statement would be more appropriate, if needed.

 

SteveDenham

Rick_SAS
SAS Super FREQ

No need to wager: The Iteration History table should show the answer. 

 

AFAIK, the BOUNDS statement is only valid for parameters, not for the LL.

MahdiA110
Obsidian | Level 7

@Rick_SAS 

@SteveDenham 

I tried "else loglik = -1e100;" but I received this error:

Quadrature accuracy of 0.000100 could not be achieved with 61 points. The achieved accuracy was 1.000000

Then I changed the starting estimates many times and still received the same error.

So I could NOT run the model!

