SteveDenham
Jade | Level 19

Here's one that has recently popped up for us.  Given two nearly identical datasets (I'll get into how close to identical they are down below), we get radically different behavior in two programs that use PROC GLIMMIX to analyze a repeated measures design.  The two programs both sort the data prior to analysis, and sort in identical ways (the BY variables are the same and in the same order).  The PROC GLIMMIX code for the two differ only in that one explicitly lists variables in the class, lsmeans and random statements, while the other uses macro variables to fill these statements.  On resolution of the macro variables, the PROC GLIMMIX code for the two programs is the same.  I don't think this difference is critical for reasons given below.
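
For orientation, here is a minimal sketch of the kind of setup involved. The dataset name, the BY variables, and the effects (subject, trt, timept, qt) are placeholders rather than the actual study code, and the AR(1) structure just stands in for whichever covariance structures we actually fit.

proc sort data=cardiac;
   by study phase;                              /* same BY variables, same order, in both programs */
run;

proc glimmix data=cardiac;
   by study phase;
   class subject trt timept;
   model qt = trt timept trt*timept;
   random timept / subject=subject type=ar(1) residual;   /* R-side repeated measures structure */
   lsmeans trt*timept / diff;
run;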

Datasets: The data are cardiac measurements such as QT interval.  Using PROC COMPARE, the differences in the dependent variable are on the order of 1E-14 (yep, out at the 14th decimal place).
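
The comparison was along these lines (a sketch; the dataset and variable names are placeholders, and the CRITERION value is only there so PROC COMPARE will flag differences that small):

proc compare base=cardiac_1 compare=cardiac_2 method=absolute criterion=1e-16;
   var qt;   /* differences in qt show up around the 14th decimal place */
run;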

What happens:  The two programs behave identically to each other, but depending on the dataset used they give radically different "results."  For one dataset, the procedure converges and gives all the tests, lsmeans, differences, etc. that we need.  For the other dataset, there is no convergence (but only for some covariance structures), usually with the "Failed to improve the objective function" message.

What we have done: For the dataset that does not converge, we have gone into the NLOPTIONS and made changes, such as changing the technique to the trust region method, or changing the update option for the quasi-Newton algorithm. 

Result: You get the same critical values (log likelihood, F values, etc.) as are obtained for the "convergent" dataset.
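
The NLOPTIONS changes were along these lines (a sketch only; the surrounding GLIMMIX statements are the same placeholders as above):

proc glimmix data=cardiac;
   by study phase;
   class subject trt timept;
   model qt = trt timept trt*timept;
   random timept / subject=subject type=ar(1) residual;
   nloptions technique=trureg;                  /* trust region instead of QUANEW */
   /* or keep quasi-Newton but change the update:
      nloptions technique=quanew update=dbfgs;  */
run;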

What else we have done: Round the dependent variable to eight decimal places prior to analysis.

Result 2: Everything converges, with identical results for both programs and both datasets.
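
The rounding is just a data step, something like this (assuming a dependent variable named qt; the second argument to ROUND gives eight decimal places):

data cardiac_r;
   set cardiac;
   qt = round(qt, 1e-8);   /* round the dependent variable to eight decimal places */
run;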

Still a third thing we have done: Using the not-quite-the-same datasets, specify starting parameters in a PARMS statement.  The values are nothing exceptionally close to the final estimates, just different from the (I assume) MIVQUE0 values that GLIMMIX defaults to for starting estimates (which happen to be identical, at least in the .lst file, for the two datasets).
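
A sketch of what the PARMS version looks like; the number and order of values depend on the covariance structure being fit, and these values are made up purely for illustration:

proc glimmix data=cardiac;
   by study phase;
   class subject trt timept;
   model qt = trt timept trt*timept;
   random timept / subject=subject type=cs residual;
   parms (10) (50);        /* illustrative starting values: CS covariance, residual variance */
run;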

Question: What the heck is going on?  Has anyone else run into this before? I find it really disturbing that there is this much sensitive dependence on initial estimates, especially when it looks like the differences between the input datasets are smaller than the machine precision.  Anybody have an idea of where to go from here?

Steve Denham

2 REPLIES
Rick_SAS
SAS Super FREQ

2 comments and a suggestion.

Comment #1: SAS procedures never see the macro variables (they are substituted by the preprocessor), so that is not likely to be part of the problem.  Can you run the code without macros on DataSet1 and DataSet2 and still see the problem?
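
(For instance, in a sketch like the following, with a hypothetical macro variable and placeholder names, GLIMMIX only ever sees the resolved text on the CLASS statement:)

%let classvars = subject trt timept;            /* hypothetical macro variable */
proc glimmix data=cardiac;
   class &classvars;                            /* the procedure sees: class subject trt timept; */
   model qt = trt timept trt*timept;
   random timept / subject=subject type=ar(1) residual;
run;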

Comment #2: SAS procedures don't really see the BY group variables. They treat each BY group as if it were a separate data set.  Is there a particular BY group that is causing you problems? If so, you can use a WHERE clause to subset out the particular data that is causing the trouble.
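
For example, something like this would isolate a single BY group for a test run or for sending to Tech Support (the BY variable and value here are placeholders):

data one_group;
   set cardiac;
   where phase = 3;        /* keep only the BY group that exhibits the problem */
run;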

Suggestion: If you can construct a small data set (a single BY group) that exhibits the problem, send it to Tech Support. It seems like anything that the Forum says about this problem would be conjecture, based on your description.  Of course, if someone else has encountered problems like this, that would be helpful information to know.

I think the critical observations to point out to Technical Support are (1) the dependent variables for the two datasets agree to within 1e-14, and (2) if you round to eight decimal places (1e-8), the problem goes away. Also important is that using the PARMS statement eliminates the problem.  To me, this says that there IS sensitive dependence on the data.

SteveDenham
Jade | Level 19

Comment 1: Yup.  Same problem.  Thought I would throw that in as a potential source, but it is obviously not what is going on.

Comment 2: Good idea--it is what we have been doing to address the issue, so that we don't have to rerun the BY groups that don't throw the problem.

Suggestion: We're getting to that.  The first problem is scrubbing the data efficiently, since in a very strong sense the data are proprietary.  We're working on that.  We'll probably get around to submitting it next week.

I hate SDIC (sensitive dependence on initial conditions)--it takes me back to the days when I was fitting systems of difference equations, both non-linear and lagged, and there were "windows" of starting conditions that led to radically different results.  Classical chaotic behavior. So I get antsy when I see something similar happening here.  Since we are trying to automate and validate the analysis, the hair loss rate has gone up exponentially after this discovery.

My question was really directed at whether anyone else had seen similar behavior with the QUANEW optimization routine, with other kinds of data.  I was hoping it wasn't something very specific to this data.  If similar behavior has been spotted, we can code around it based on general considerations.  I think our answer is going to be rounding/truncation of the dependent variables prior to analysis.

Steve Denham

