Hello,
I am running a multilevel Poisson model with students within classes within schools, and I would like to estimate the sum (and its SE) of the fixed and random effects for exposure at the school level.
PROC GLIMMIX data=dat7 noclprint;
class school class / ref=first;
model O = exposure / solution dist=poisson;
random exposure / subject=school solution cl;
random intercept / subject=class(school);
estimate 'school 1' intercept 0 exposure 1 | intercept 0 exposure 1 / subject 1;
estimate 'school 2' intercept 0 exposure 1 | intercept 0 exposure 1 / subject 0 1;
estimate 'school 3' intercept 0 exposure 1 | intercept 0 exposure 1 / subject 0 0 1;
estimate 'school 4' intercept 0 exposure 1 | intercept 0 exposure 1 / subject 0 0 0 1;
estimate 'school 5' intercept 0 exposure 1 | intercept 0 exposure 1 / subject 0 0 0 0 1;
run;
I get the following error message:
ERROR: Contrasts between subjects require that all random effects have the same subject; the CONTRAST statement is ignored.
PROC GLIMMIX does not allow the ESTIMATE statement when there is more than one subject type. Out of curiosity I tried PROC HPMIXED, and it does work there (but not with PROC MIXED).
Is there a way to do this?
Thank you!
Alexis
Accepted Solutions
One way to get around this would be to consolidate the random statements into a single equivalent statement. Consider using:
random exposure class / subject=school solution cl;
The Z matrix for this ought to be the same as that generated by your two RANDOM statements, recalling that SAS parameterizes nested effects column-wise exactly as it does crossed effects. However, convergence may be more problematic.
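For example, something along these lines ought to work (untested, and only a sketch; it keeps the data set and variable names from your post):
PROC GLIMMIX data=dat7 noclprint;
class school class / ref=first;
model O = exposure / solution dist=poisson;
/* a single RANDOM statement means a single subject effect,
   so BLUP-type ESTIMATE statements are no longer rejected */
random exposure class / subject=school solution cl;
/* fixed slope plus the school-specific random slope;
   the SUBJECT coefficients index the levels of school in sort order */
estimate 'school 1' exposure 1 | exposure 1 / subject 1;
estimate 'school 2' exposure 1 | exposure 1 / subject 0 1;
/* ...and similarly for schools 3, 4, and 5 */
run;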
Steve Denham
Hello,
I am having a very similar problem. The main difference is that I have G-side and R-side random effects. I have 9 transects placed in random locations, each containing 10 quadrats (quad), and measurements were taken in each quadrat once a year for 4 years. So I am using:
random int / subject=transect solution;
random year / subject=quad(transect) type=ar(1) residual;
Since I am using two subject types, when I use ESTIMATE to obtain estimates at the transect level I get the error: "Contrasts between subjects require that all random effects have the same subject".
Is there any way to consolidate these in a single statement?
I need a bit more info. When you are writing your ESTIMATE statements, are there random effects included? If not, then using the LSMESTIMATE statement might work out better. If there are random effects, as in the solution provided earlier, then perhaps you can consider quadrat within transect as the observational unit. This paper by Littell et al. shows how they did it for a similar design:
Littell, R. C., Henry, P. R., and Ammerman, C. B. (1998). Statistical analysis of repeated measures data using SAS procedures. Journal of Animal Science 76:1216-1231.
random int / subject=quad(transect) solution;
random year / subject=quad(transect) type=ar(1) residual;
Now you have identical subjects, and the ESTIMATE statements shouldn't throw the error you are seeing. My only caveat is that you might now run into convergence issues.
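For context, here is a skeleton showing where those statements would sit (untested; the data set name, the response y, and the continuous covariate year_num are placeholders, not from your post):
proc glimmix data=quadrat_data;
class transect quad year;
model y = year_num / solution;
random int / subject=quad(transect) solution;
random year / subject=quad(transect) type=ar(1) residual;
/* both RANDOM statements now share the subject quad(transect),
   so BLUP-type ESTIMATE statements are allowed; the SUBJECT
   coefficients index the quad(transect) levels in sort order */
estimate 'subject 1 - intercept' int 1 | int 1 / subject 1;
run;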
Steve,
Thanks for taking the time to think about it, I really appreciate it.
I am interested in the estimates of the random intercept and coefficient for each transect, so my estimates are:
estimate 'Transect 1 - Intercept' int 1 | int 1 / subject 1;
estimate 'Transect 1 - Coefficient' year_num 1 | year_num 1 / subject 1;
estimate 'Transect 2 - Intercept' int 1 | int 1 / subject 0 1;
estimate 'Transect 2 - Coefficient' year_num 1 | year_num 1 / subject 0 1;
... for transects from 1 to 9.
I get no errors when I use quad(transect) as the subject in the first RANDOM statement, as you suggested. But since the quadrats are subdivisions of the transects, the design suggests there will be some dependence between observations from the same transect (that is, between measurements collected in different quadrats of the same transect). So is it OK to just use random intercepts for quadrats?
I am a beginner with GLMMs, so please forgive me if my understanding is wrong.
I will have access to the paper you suggested later today and will read it as soon as I do.
Again, thanks for your help.
If you want to model the dependence of quadrats within transects, it is going to get dicey. I would assume some sort of spatial covariance, so that distance between quadrats becomes a major factor in the covariance structure. For now, the marginals over quadrat within transect should correctly capture the variability needed to make comparisons between fixed effect levels. If it becomes of interest to compare BLUPs from transect to transect, then it might be good to consider modeling the within-transect error structure.
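If you do eventually need it, a G-side spatial structure is one possibility, along these lines (purely illustrative and untested; xcoord and ycoord are hypothetical quadrat coordinates that would have to be on your data set):
/* spatial exponential covariance among quadrats within a transect;
   correlation decays with the distance between quadrat coordinates */
random quad / subject=transect type=sp(exp)(xcoord ycoord);
Distance between quadrats then drives the correlation directly.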
However, I will bet you a shiny new nickel that your research question doesn't need that level of granularity to be correctly addressed.
Steve Denham
I will have to do some homework here to make sure I am tackling our research questions appropriately. Thanks for all your input, I really appreciate it!