I would like to extract exact confidence intervals for the fixed effects from PROC GLIMMIX. I have tried the CI returned by the ParameterEstimates option, but they have coverage that is above the nominal level (in simulation studies with N=(12,40), they have 100% coverage). How can I extract more precise fixed effect CI from PROC GLIMMIX?
Please post the code you used; that is the absolute minimum we need.
Since you seem to have an issue with the results, you may need to provide the data as well. "Large" confidence intervals result from data with large variability, and your data may have more variability than you expect. Classification variables create subsets of the data, so within some, if not all, groups the variability may be larger than you anticipate.
Data examples should be in the form of a DATA step. Sensitive values such as identifiers can be replaced with dummy values, but the result should be the same as for your actual data.
The instructions here: https://communities.sas.com/t5/SAS-Communities-Library/How-to-create-a-data-step-version-of-your-dat... show how to turn an existing SAS data set into DATA step code that can be pasted into a forum code box using the <> icon, or attached as text, so that we can see exactly what you have and test code against it.
First, I would warn that the ParameterEstimates table uses a non-full-rank parameterization, so be cautious about interpretation. I suspect that confidence intervals for the levels of each class variable would be more applicable, which means using either the LSMEANS or LSMESTIMATE statement.

Second, I suspect that your bootstrap estimates of the confidence interval need a larger number of resamples, especially if you have variance components that are large relative to the least squares means. I would look at 500 resamples or more, generating the datasets with PROC SURVEYSELECT. (I don't know how to interpret N=(12,40) in this context.)

Last, confidence bounds from likelihood-based estimates will not be exact in the sense that ordinary least squares bounds are, especially if you have a nonlinear link function.
SteveDenham
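To illustrate the resampling idea Steve describes (in SAS the resampling step would typically be done with PROC SURVEYSELECT), here is a minimal, language-agnostic sketch of a percentile bootstrap confidence interval in Python. The statistic (a mean), the sample, and the choice of 500 resamples are all illustrative assumptions, not the poster's actual model.

```python
import random
import statistics

random.seed(2)  # reproducible illustration

def percentile_ci(data, stat_fn, n_boot=500, alpha=0.05):
    """Percentile bootstrap CI: resample with replacement n_boot times,
    compute the statistic on each resample, and take the alpha/2 and
    1 - alpha/2 quantiles of the bootstrap distribution."""
    boot_stats = sorted(
        stat_fn(random.choices(data, k=len(data))) for _ in range(n_boot)
    )
    lo = boot_stats[int((alpha / 2) * n_boot)]
    hi = boot_stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical sample; in practice this would be the statistic of interest
# (e.g., a fixed-effect estimate refit on each bootstrap dataset).
sample = [random.gauss(10.0, 2.0) for _ in range(40)]
lo, hi = percentile_ci(sample, statistics.fmean)
print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")
```

With too few resamples the tail quantiles are estimated from only a handful of bootstrap statistics, which is why several hundred resamples (or more) are usually recommended.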
I suggest you verify that your simulation is generating data according to the assumptions used to assess the coverage probability of the CIs. In the article "Coverage probability of confidence intervals: A simulation approach," I show an example of simulating data and computing CIs. For that example (CIs for a mean), the coverage of the CIs assumes that the data are sampled from a normal distribution. When you simulate data from a normal distribution, you correctly obtain the nominal 95% coverage probability. However, if you sample from a nonnormal distribution, the coverage probability changes: depending on the tails of the sampling distribution, you might get higher or lower coverage.
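The coverage check described above can be sketched in a few lines of Python. This is an illustrative simulation for a t-based CI on a mean (the sample size, seed, and exponential alternative are assumptions for demonstration), not the poster's GLIMMIX setup: simulate many samples, build the 95% CI for each, and count how often the interval contains the true mean.

```python
import math
import random
import statistics

random.seed(1)  # reproducible illustration

N, NSIM = 12, 4000
TCRIT = 2.201   # t quantile at 0.975 with N - 1 = 11 degrees of freedom

def coverage(sample_fn, true_mean):
    """Fraction of simulated 95% t-intervals that contain true_mean."""
    hits = 0
    for _ in range(NSIM):
        x = [sample_fn() for _ in range(N)]
        m = statistics.fmean(x)
        half = TCRIT * statistics.stdev(x) / math.sqrt(N)
        hits += (m - half) <= true_mean <= (m + half)
    return hits / NSIM

# Data matching the CI's normality assumption vs. skewed (exponential) data
cov_normal = coverage(lambda: random.gauss(0.0, 1.0), 0.0)
cov_skewed = coverage(lambda: random.expovariate(1.0), 1.0)
print(f"normal: {cov_normal:.3f}  exponential: {cov_skewed:.3f}")
```

Under normal data the empirical coverage lands near the nominal 95%; under the skewed distribution it falls below nominal, which is the mismatch to look for in your own simulation.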