06-12-2012 03:37 AM
Now I'm trying to analyze repeated-measures data with PROC MIXED.
The SAS code is as follows; X denotes a laboratory variable such as blood pressure.
PROC MIXED DATA=DATA;
CLASS ID TIME;
MODEL X=TIME / DDFM=KR;
REPEATED TIME / SUBJECT=ID TYPE=UN ;
LSMEANS TIME / DIFF=CONTROL('1');
RUN;
Though the code seems to work correctly with DDFM=KR and TYPE=UN,
the warning message “Warning: Stopped because of infinite likelihood” occurs with DDFM=KR and TYPE=CS.
I know such a message can occur when there are some records with the same ID and same time value in the dataset.
But the data I want to analyze doesn't have such records.
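For reference, here is the kind of quick check I ran for duplicates (a minimal sketch; the dataset and variable names are as in the code above):
PROC SORT DATA=DATA OUT=_CHECK NODUPKEY DUPOUT=_DUPS;
BY ID TIME;
RUN;
/* ANY RECORDS SHARING THE SAME ID AND TIME WOULD LAND IN _DUPS */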
Why does the message occur?
Should I set DDFM=KR and TYPE=UN rather than DDFM=KR and TYPE=CS?
I'd really appreciate it if someone would help me.
Thanks in advance.
06-12-2012 07:42 AM
This is a difficult question. Could you share the covariance estimates under the TYPE=UN specification? I am thinking that there may be some pathological values.
Other things to try: fit a variety of other covariance structures. The first that comes to mind is CSH. If the time points are evenly spaced, or very nearly evenly spaced, try AR(1) and ARH(1). If all of these lead to the infinite likelihood and UN does not, then there is something very unusual about the data.
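For example, the CSH fit is just your original code with a different TYPE= value (a sketch):
PROC MIXED DATA=DATA;
CLASS ID TIME;
MODEL X=TIME / DDFM=KR;
REPEATED TIME / SUBJECT=ID TYPE=CSH;
LSMEANS TIME / DIFF=CONTROL('1');
RUN;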
Try using PROC GLIMMIX with a different optimization method using the NLOPTIONS statement.
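Something along these lines (a sketch; in GLIMMIX the repeated-measures structure goes on a RANDOM ... / RESIDUAL statement, and TECH=NRRIDG is just one of several techniques you could try):
PROC GLIMMIX DATA=DATA;
CLASS ID TIME;
MODEL X=TIME / DDFM=KR;
RANDOM TIME / SUBJECT=ID TYPE=CS RESIDUAL; /* R-SIDE (REPEATED) STRUCTURE */
NLOPTIONS TECH=NRRIDG MAXITER=200; /* SWITCH TO A DIFFERENT OPTIMIZATION TECHNIQUE */
RUN;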
Using the results from the TYPE=UN fit, see if the estimates can be put into a form that would give starting values under the TYPE=CS specification. Use the median of the diagonals and off-diagonals as starting parameters in the PARMS statement. This probably would only work if the infinite likelihood occurs on the first iteration--if it shows up after the procedure has done several iterations then it is almost surely data pathology.
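As a sketch of that last idea: under REPEATED ... TYPE=CS the two covariance parameters are the common covariance (CS) first and the residual variance second, so the PARMS statement would look like the following. The numbers here are hypothetical placeholders, to be replaced by the medians from your UN fit:
PROC MIXED DATA=DATA;
CLASS ID TIME;
MODEL X=TIME / DDFM=KR;
REPEATED TIME / SUBJECT=ID TYPE=CS;
PARMS (300) (80); /* HYPOTHETICAL: MEDIAN OFF-DIAGONAL, THEN MEDIAN DIAGONAL MINUS MEDIAN OFF-DIAGONAL */
RUN;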
06-26-2012 01:55 AM
Thank you for your continued support.
The covariance estimates under the TYPE=UN specification are as follows:
Estimated R Matrix for SUBJID 0001-001
Row Col1 Col2 Col3 Col4 Col5
1 331.08 301.34 278.17 269.69 260.96
2 301.34 330.16 302.41 285.32 270.93
3 278.17 302.41 419.88 356.43 283.12
4 269.69 285.32 356.43 473.49 320.51
5 260.96 270.93 283.12 320.51 400.03
Covariance Parameter Estimates
Cov Parm Subject Estimate
UN(1,1) SUBJID 331.08
UN(2,1) SUBJID 301.34
UN(2,2) SUBJID 330.16
UN(3,1) SUBJID 278.17
UN(3,2) SUBJID 302.41
UN(3,3) SUBJID 419.88
UN(4,1) SUBJID 269.69
UN(4,2) SUBJID 285.32
UN(4,3) SUBJID 356.43
UN(4,4) SUBJID 473.49
UN(5,1) SUBJID 260.96
UN(5,2) SUBJID 270.93
UN(5,3) SUBJID 283.12
UN(5,4) SUBJID 320.51
UN(5,5) SUBJID 400.03
UN(6,1) SUBJID 255.04
UN(6,2) SUBJID 264.57
UN(6,3) SUBJID 276.21
UN(6,4) SUBJID 294.30
UN(6,5) SUBJID 323.57
UN(6,6) SUBJID 373.06
UN(7,1) SUBJID 248.73
UN(7,2) SUBJID 256.85
UN(7,3) SUBJID 274.29
UN(7,4) SUBJID 284.68
UN(7,5) SUBJID 295.96
UN(7,6) SUBJID 310.99
UN(7,7) SUBJID 351.47
UN(8,1) SUBJID 249.61
UN(8,2) SUBJID 257.29
UN(8,3) SUBJID 275.29
UN(8,4) SUBJID 285.32
UN(8,5) SUBJID 296.04
UN(8,6) SUBJID 298.96
UN(8,7) SUBJID 329.12
UN(8,8) SUBJID 360.03
Looking at the covariance estimates under the TYPE=UN specification, the compound symmetry assumption no longer seems valid: the diagonal variances are not constant, and the covariances decrease as the time lag increases.
I'd really appreciate any further input.
06-26-2012 07:21 AM
There looks to be a relatively constant correlation from time(i) to time(i+1), which suggests to me that an autoregressive error structure may be appropriate. Given that the diagonal entries seem relatively constant, consider type=AR(1) if your time points are equally or very nearly equally spaced, or type=sp(pow)(time1) if they are not. You will need to construct time1 as a continuous variable in a prior DATA step (time1 = time;), since time is specified as a categorical variable in the CLASS statement; see the sketch below.
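A minimal sketch of the sp(pow) version, assuming the dataset and variable names from your original code:
DATA DATA2;
SET DATA;
TIME1 = TIME; /* CONTINUOUS COPY OF TIME; DO NOT PUT TIME1 IN THE CLASS STATEMENT */
RUN;
PROC MIXED DATA=DATA2;
CLASS ID TIME;
MODEL X=TIME / DDFM=KR;
REPEATED TIME / SUBJECT=ID TYPE=SP(POW)(TIME1);
LSMEANS TIME / DIFF=CONTROL('1');
RUN;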
I still fear that there may be some data pathology that is causing the infinite likelihood.
See how this works.
06-27-2012 12:27 AM
The time points are equally spaced for each subject, so I tried type=AR(1) and type=sp(pow)(time1).
But the "infinite likelihood" error message occurred with both type specifications.
I'm not sure what is causing the infinite likelihood error yet...
06-27-2012 07:32 AM
Yuck. I had hoped that would work.
OK--when does the infinite likelihood error occur? Is it at the initial iteration, or does it occur after several iterations? If it is the first case, it is almost certainly a problem with a duplicate record for one of the subjects at one of the timepoints. If it is the second, what is going on in the iteration history? Does it look like there is a relatively smooth history for the objective function up until something happens and it jumps off the tracks? Or is the history erratic?
Can't say I have an answer yet, but knowing the answers to these questions might help.
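One thing that can help with the iteration-history question: the ITDETAILS option on the PROC MIXED statement prints the parameter values at each iteration (a sketch using your CS model):
PROC MIXED DATA=DATA ITDETAILS;
CLASS ID TIME;
MODEL X=TIME / DDFM=KR;
REPEATED TIME / SUBJECT=ID TYPE=CS;
RUN;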
Also, it might be time to open a ticket with tech support, especially if you can share the data with them.
06-28-2012 01:07 AM
I really appreciate your continued support.
Looking at the log, it seems to occur at the initial iteration. And I looked into the data once again, but there is no such record.
In addition, I have another question: if there were duplicate records, why does the infinite likelihood not occur with the type=UN specification?
06-28-2012 07:43 AM
I jury-rigged some data and fit it with type=UN. Duplicates also lead to the infinite likelihood there, with the message about a nonpositive definite R matrix. If I try GLIMMIX, the warning is that it failed to obtain MIVQUE starting values. For HPMIXED, I get a specific error message that duplicate measures have been detected.
So, that is not the problem. Have you tried PROC HPMIXED? Something like:
PROC HPMIXED DATA=DATA;
CLASS ID TIME;
MODEL X=TIME ;
REPEATED TIME / SUBJECT=ID TYPE=AR(1) ;
LSMEANS TIME / DIFF=CONTROL('1') /* ADJUST=DUNNETT */ ;
RUN;
Adjustments are not available in HPMIXED (at least according to my documentation), hence the commented-out ADJUST=DUNNETT above. So if you need adjusted comparisons, it would probably be necessary to capture the differences with an ODS OUTPUT statement and then post-process with PROC MULTTEST. And then only if HPMIXED actually works.
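A sketch of that post-processing idea. The ODS table name Diffs and the p-value column Probt are my assumptions based on the usual MIXED-family tables, and since MULTTEST has no Dunnett adjustment, Holm is shown as a stand-in:
PROC HPMIXED DATA=DATA;
CLASS ID TIME;
MODEL X=TIME;
REPEATED TIME / SUBJECT=ID TYPE=AR(1);
LSMEANS TIME / DIFF=CONTROL('1');
ODS OUTPUT DIFFS=DIFFS; /* CAPTURE THE UNADJUSTED COMPARISONS */
RUN;
PROC MULTTEST INPVALUES(PROBT)=DIFFS HOLM; /* MULTIPLICITY-ADJUST THE RAW P-VALUES */
RUN;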
I am thinking that you REALLY, REALLY need to open a ticket with tech support.