
How to determine random effects?


01-29-2017 10:53 PM

Hi, I'm trying to analyze my data to determine the effects of knife and band castration, with and without injection of an anti-inflammatory, on growth performance (**BW**) in beef calves. Seventy-two calves were randomly assigned to treatments according to a 3 × 2 factorial design assessing castration technique (**CAST**): band castration, knife castration, or sham castration; and drug administration (**DRUG**): medicated at the time of castration or non-medicated (saline injection). Calves were managed in two groups (**GROUP**) of 36, castrated on two separate days two weeks apart, but all 72 animals were housed in the same pasture (6 calves/treatment/group). Data were collected on d -1 and d 0 (prior to castration; baseline measurement), and on d 6, 13, 20, 34, 48, and 62 post-castration. The average of the baseline BW measurements was used as a covariate (**BWin**).

What would my model look like?

```
PROC MIXED data=castration;
  CLASS animal CAST DRUG group day;
  MODEL BW = CAST DRUG day CAST*DRUG CAST*DRUG*day BWin / ddfm=satterthwaite;
  RANDOM group(CAST*DRUG) / SUBJECT=animal(CAST*DRUG);
  REPEATED animal(day) / TYPE=AR(1);
  LSMEANS CAST DRUG CAST*DRUG CAST*DRUG*day / pdiff=all;
RUN;
```

Accepted Solution

02-02-2017 02:40 PM


Posted in reply to soaresrd

02-01-2017 08:22 AM

Use of the form

```
random intercept / subject=varname;
```

is essentially the same as saying

```
random varname;
```

However, the processing by subjects allows for better convergence properties.
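Applied to the GROUP variable from this thread, the two equivalent specifications would be (a sketch, using the variable names from the question):

```
/* Two equivalent ways to model GROUP as a random effect (sketch).
   The SUBJECT= form is processed by subjects, which tends to
   converge more reliably. */
RANDOM intercept / SUBJECT=group;

/* ...is essentially the same as... */
RANDOM group;
```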

The model proposed by your friend has the advantage of being more likely to converge, and it actually accommodates the effect of your GROUP variable, but that comes at a cost. You need to think about the inferences you want to make: are they about the effects of your other variables for only these two GROUPs, or do you wish to make inferences about all possible ways of constructing GROUP, of which you have two random samples? I see your GROUP variable as a blocking variable, leading to a split-plot design, whereas your friend's approach is a factorial approach. My personal opinion would be to consider it as a random effect.

Steve Denham

All Replies


Posted in reply to soaresrd

01-30-2017 09:42 AM

I would include all two-way interactions whenever I have a three-way interaction in the model. Consequently, I would use something like this:

```
PROC MIXED data=castration;
  CLASS animal CAST DRUG group day;
  MODEL BW = CAST DRUG day CAST*DRUG CAST*day DRUG*day CAST*DRUG*day BWin / ddfm=satterthwaite;
  RANDOM intercept / SUBJECT=group;
  REPEATED day / SUBJECT=animal TYPE=AR(1);
  LSMEANS CAST DRUG CAST*DRUG CAST*DRUG*day / pdiff=all;
RUN;
```

Note the changes in both the RANDOM and REPEATED statements. The RANDOM statement reflects a single variance component due to group. The REPEATED statement says that observations on separate days within an animal are correlated. I retained the LSMEANS statement as is, because I assume it contains the comparisons of interest.

Steve Denham


Posted in reply to SteveDenham

01-30-2017 12:08 PM

Hi Steve,

Thanks for helping me.

I saw some papers using "intercept" as a RANDOM effect, but I really don't understand what it means. Also, why is "group" the SUBJECT?

Would you mind explaining this? Is there anything I could read about it?

A friend of mine also suggested using GROUP as a fixed effect and dropping the RANDOM statement, as follows:

```
PROC MIXED data=castration;
  CLASS animal group cast drug day;
  MODEL BW = cast|drug|day|group BWin;
  REPEATED day / SUBJECT=animal TYPE=AR(1);
  LSMEANS cast drug cast*drug / pdiff=all;
RUN;
```

What do you think?

Desiree



Posted in reply to SteveDenham

02-02-2017 02:45 PM

Hi Steve,

Thank you so much for your explanation. It helped me understand my model better.

If you'll permit me, I have another question: somebody told me a few days ago that we should not ignore the P-values from PDIFF, even if the P-values for the main effects were not significant. According to this person, the rule would include the P-values for interactions as well. Is this right? I had never heard this before, so I would like your opinion on it.

Thanks again,

Desiree


Posted in reply to soaresrd

02-06-2017 09:59 AM

Well, many statisticians have different opinions, but...

/OPINION ON

If you know the pairwise comparisons you are going to make before you start the experiment, and you know those are the only comparisons of interest, then the omnibus F tests really don't provide any additional information--PROVIDED that you apply a rational multiple comparison adjustment to the p values obtained for the pairwise comparisons.

However, if you are running an experiment where there are a large number of comparisons possible, and you really are not sure which comparisons are of interest, then the omnibus F tests, starting with the highest order interaction, provide a guide to which comparisons have at least one mean differing.
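As an illustration of applying a multiple comparison adjustment to the pairwise P-values (a sketch; the choice of Tukey-Kramer here is an assumption, not a recommendation from this thread):

```
/* Sketch: adjusted pairwise comparisons for the CAST*DRUG means.
   ADJUST=TUKEY applies a Tukey-Kramer correction to the PDIFF
   p-values; other adjustments (e.g., ADJUST=BON) are available. */
LSMEANS CAST*DRUG / pdiff=all ADJUST=TUKEY;
```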

Steve Denham


Posted in reply to SteveDenham

02-14-2017 12:05 PM

Hi Steve,

In my case I know the pairwise comparisons I'm going to make (three different castration methods in beef calves, with or without an anti-inflammatory) to assess the effects on growth performance. So, if I understood correctly, you are saying that I could use (in this case) the P-values from PDIFF as results, even if the ANOVA showed P = 0.9098? Is that right?


Posted in reply to soaresrd

02-14-2017 12:14 PM

That sounds pretty broad to me; essentially you are looking at all possible comparisons. Given that, you will probably want to implement some sort of multiple comparison adjustment. Where I was going was that if there were one or maybe two comparisons that you knew a priori were the only comparisons of interest (even though the experiment was designed so that more could be made), then setting up a CONTRAST statement (or, equivalently, the DIFF in the LSMEANS) and obtaining that p-value may not be dependent on the results of the omnibus test.
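For instance, a single predefined comparison could be set up like this (a sketch; the coefficients assume CAST has the three levels band, knife, and sham, in that sort order, which should be verified against the Class Level Information table):

```
/* Sketch: one a-priori comparison, band vs. knife castration.
   Coefficient order follows the sorted levels of CAST:
   band, knife, sham. */
CONTRAST 'band vs knife' CAST 1 -1 0;
```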

Am I making any sense here? If not, try looking at Westfall et al. *Multiple Comparisons and Multiple Tests Using SAS, 2nd ed.* and some of the other literature that addresses predefined comparisons.

Steve Denham


Posted in reply to SteveDenham

02-15-2017 02:38 PM

Thanks, Steve, for your help.

Desiree