TravisTCU
Calcite | Level 5


Hello,

I'm hoping for some assistance with coding a hierarchical linear model with repeated measures.  I've written some code, but I'd like some verification that what I've done is appropriate for my data.

The data were collected as part of a decision-making experiment.  Subjects played a game in groups of four, and each subject was assigned to a unique role within the group.  These roles are labeled r, w, d, and f.  In each period of the game, subjects placed orders with one another (Order is the dependent variable in the model).  The lone factor of interest in the study is a treatment condition (labeled Condition), which had two levels.  All subjects in a given group received the same treatment (condition).

In short, I'm looking at the effect of Condition on Order while controlling for the role each subject played within their group.  Subjects were nested within groups, and orders were recorded in each period of the game (repeated measurement).

The code I developed is as follows:

 

proc mixed;
   class period condition r w d group participant;
   model Order = condition r w d / ddfm=satterthwaite;
   repeated period / subject=participant(group);
   lsmeans condition / pdiff;
run;

My concern is that perhaps I have not properly specified the model, particularly regarding the repeated measures and nested structure.  Any guidance would be greatly appreciated.  

3 REPLIES
SteveDenham
Jade | Level 19

I am guessing that r, w and d are dummy variables reflecting role, and if all are zero then the role is f.  Much easier to have a class variable and let SAS do the heavy lifting.  Call this variable 'role'.
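For instance, a short data step along these lines would build it (the dataset name GAME and the 0/1 coding of the dummies are my assumptions, not something from your post):

data game_roles;
   set game;
   length role $1;
   /* collapse the r/w/d dummies into a single class variable;          */
   /* assumes each dummy is 1 when the subject holds that role, else 0  */
   if r = 1 then role = 'r';
   else if w = 1 then role = 'w';
   else if d = 1 then role = 'd';
   else role = 'f';   /* all three dummies zero, so the role is f */
run;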

Then try:

proc mixed;
   class period condition role group participant;
   model order = period|condition|role / ddfm=kenwardroger;
   random intercept / subject=group;
   repeated period / subject=participant(group) type=ar(1);  /* I would use AR(1), as I guess the periods are equally spaced in time */
   lsmeans condition / diff;
run;

Steve Denham

TravisTCU
Calcite | Level 5

Hi Steve,

Thank you very much for the quick and helpful response.  Your assumption about the dummy variables was correct; I've switched to a single class variable labeled Role, as you suggested.  Further, I've edited the model you proposed to exclude the fixed effect of period as well as the interaction terms, since they are not focal points of this study.  All I'm truly interested in is the effect of Condition after controlling for Role, while accounting for the repeated measures and for the nesting of subjects within groups.  Thus, my revised model is structured as follows:

  

proc mixed covtest;
   class period condition role group participant;
   model order = condition role / ddfm=kenwardroger;
   random intercept / subject=group;
   repeated period / subject=participant(group) type=ar(1);
   lsmeans condition / diff;
run;

Ultimately, it appears that the main difference between this model and my original one is the inclusion of the random intercept statement.  (You also specify the covariance structure as AR(1) in the repeated statement.)  Interestingly, this has a profound impact on the results.  Under the original model, Condition is statistically significant (p<0.001); under the revised model, it is not (p=0.179).  What does the addition of the random intercept statement do to the way the data are modeled compared to the original model?  My assumption is that it allows a separate intercept to be estimated for each group, but I'd like to understand more in order to determine why Condition is no longer significant.  Any insight you could provide would be most appreciated.

SteveDenham
Jade | Level 19

I would strongly consider going back to the model statement that I suggested, as it reflects the design of your study.  Eliminating terms because they aren't focal points of the study leads to mis-estimation of standard errors, and consequently faulty p-values when comparing means.

So, the random statement here estimates a variance component due to group.  The statement is exactly equivalent to:

random group;

but the subject= syntax is more stable and faster.
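To make that concrete, here is a sketch of your revised program with the equivalent RANDOM statement swapped in (variable names as elsewhere in this thread; only one of the two forms would be used at a time):

proc mixed covtest;
   class period condition role group participant;
   model order = condition role / ddfm=kenwardroger;
   random group;   /* same variance component as: random intercept / subject=group; */
   repeated period / subject=participant(group) type=ar(1);
   lsmeans condition / diff;
run;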

And it isn't surprising that adding it affects the significance of the test.  Part of the reason is that, with the group variance component in the model, you now have the correct error term for testing condition; before, condition was being tested against the residual error, which is incorrect for your design.

The syntax I gave you fits your design--give it a try.  I imagine that you will find significant interactions, which is where the richness of a split-plot is found.  If significant interactions are found, then simple effect (sliced) tests of condition can be set up at the various levels of role and period.
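For example, sliced tests could be requested directly in the LSMEANS statements of the model I suggested (a sketch only; which effects you actually slice would depend on which interactions come out significant):

proc mixed;
   class period condition role group participant;
   model order = period|condition|role / ddfm=kenwardroger;
   random intercept / subject=group;
   repeated period / subject=participant(group) type=ar(1);
   /* simple-effect tests of condition within each level of role and of period */
   lsmeans condition*role / slice=role diff;
   lsmeans condition*period / slice=period diff;
run;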

Steve Denham

