I'm modeling some data with glimmix. So far, so good. I love that it will do a step-down adjustment for multiplicity (one less thing to code). However, I just discovered some adjusted p-values that seem very strange to me. I was hoping someone could either explain it to me, or reassure me that there may be an error in the routine. I'm getting wildly adjusted p-values in my Holm test. One had a multiplier of "162692779209.42".
The data: 2 groups of pigs. One group receives an ACL transection (ACLT). The other receives a transection PLUS a repair technique (ACLR). Some time later, both limbs (operated and contralateral) are imaged by MRI. Cartilage thickness is measured at 6 spots on one bone (femur: 3 spots each for medial and lateral) and at 2 spots on another (tibia: 1 spot each for medial and lateral). So the design isn't quite complete, since there is only 1 spot per compartment on the tibia. There were also 2 positions that could not be assessed, in different bones of different animals. The Pearson and Student residuals looked spot-on thanks to the covariance structure (R-side: different residual variances for the operated and contralateral limbs with a covariance between them, block diagonal by measurement location). There was slight evidence of dependence between the variance and the mean when looking at the raw residuals.
The hypotheses tested at each location: 1) thickening in the ACLT group (operated minus contralateral), 2) thickening in the ACLR group (operated minus contralateral), and 3) the difference in these differences.
Bone: "1. Femur" or "2. Tibia"
Dim1: "1. Medial" or "2. Lateral"
Dim2: "1. Anterior" or "2. Middle" or "3. Posterior"
Group: "1. ACLT" or "2. ACLR"
Limb: "1. Operated" or "2. Contralateral"
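For concreteness, here's how the locations and estimates tally up to 24. This is just an illustrative Python sketch of the design described above; the exact factor labels are taken from the coding scheme, and the tibia rows carry no Dim2 level since only one site per compartment was measured:

```python
from itertools import product

# Femur: 2 compartments (Dim1) x 3 sites (Dim2) = 6 locations
femur = [("Femur", dim1, dim2)
         for dim1, dim2 in product(["Medial", "Lateral"],
                                   ["Anterior", "Middle", "Posterior"])]

# Tibia: 1 site per compartment = 2 locations (no Dim2 level)
tibia = [("Tibia", dim1, None) for dim1 in ["Medial", "Lateral"]]

locations = femur + tibia  # 8 measurement locations in total

# Three hypotheses per location
hypotheses = ["ACLT: operated - contralateral",
              "ACLR: operated - contralateral",
              "difference of differences"]

estimates = [(loc, h) for loc in locations for h in hypotheses]
print(len(locations), len(estimates))  # 8 locations, 24 estimates
```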
There were 24 estimates. I wrote a macro ages ago to do the Holm test, so I double-checked these and also imported the raw p-values into PROC MULTTEST. My macro and MULTTEST both gave the results I'd expect (i.e., smallest p multiplied by 24, next by 23, and so on), whereas GLIMMIX gave me wildly conservative adjustments that appeared to be related to the raw p-values: the smaller the raw p, the greater the adjustment.
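For reference, the step-down arithmetic I'm expecting (and that my macro and MULTTEST agree on) can be sketched in a few lines of Python; `holm_adjust` is just an illustrative name, not a SAS routine:

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values.

    The i-th smallest raw p-value (0-based rank i) is multiplied by
    (m - i); a running maximum enforces monotonicity, and results are
    capped at 1. The multiplier therefore never exceeds m (24 here).
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        candidate = min(1.0, (m - rank) * pvals[idx])
        running_max = max(running_max, candidate)
        adjusted[idx] = running_max
    return adjusted

# Small worked example with 4 p-values:
# smallest (0.005) x 4, next (0.01) x 3, etc., then monotonicity.
print(holm_adjust([0.01, 0.04, 0.03, 0.005]))
```

With 24 estimates the largest possible multiplier is 24, which is why a multiplier in the hundreds of billions looks impossible under plain Holm.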
I suspect, if anything, it has to do with the following statement in the SAS documentation: "The ADJDFE=ROW setting is useful if you want multiplicity adjustments to take into account that denominator degrees of freedom are not constant across estimates. This can be the case, for example, when DDFM=SATTERTHWAITE or DDFM=KENWARDROGER is specified in the MODEL statement." However, as I stated, the adjustments appeared to be related more to the p-values than to the degrees of freedom, which didn't vary much.
I simply don't know HOW it does so, and so I'm curious. The sample size was quite small (5 per group) and yielded degrees of freedom of 7-8 for the estimates. I can't conceive of a case where a multiplicity adjustment of "162692779209.42" would be appropriate, though.
The other possibility is that, because I used the shorthand to request all effects in the model, and some of them are not estimable, I've confused SAS somehow.
I'm anxiously awaiting any comments or explanations.