proc mixed - repeated measures model - impact of var...


03-17-2016 07:27 AM

Hi all,

If a repeated measures model is run with PROC MIXED as

```sas
proc mixed data=data plots=none method=reml;
   class treat sex age visit subject;
   model chg=treat sex age visit base / solution;
   repeated visit / type=un sub=subject(treat) rcorr;
   lsmeans treat*visit / slice=visit cl e diff;
run;
```

and the same repeated measures model is run with a different order of effects in the MODEL statement,

```sas
proc mixed data=data plots=none method=reml;
   class treat sex age visit subject;
   model chg=base treat sex age visit / solution;
   repeated visit / type=un sub=subject(treat) rcorr;
   lsmeans treat*visit / slice=visit cl e diff;
run;
```

then the results are not exactly the same. Is there an explanation for this, and is there any guidance for choosing an ideal order of effects? The SAS Help seems to suggest putting the more important effects first:

> Rearranging effects in the MODEL statement so that the most significant ones are first can help, because PROC MIXED sweeps the estimates in the order of the MODEL effects, and the sweep is more stable if larger pivots are dealt with first. If this does not help, specifying starting values with the PARMS statement can place the optimization on a different and possibly more stable path.
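The sweep-order effect described in that Help passage can be reproduced outside SAS. Below is a minimal NumPy sketch on made-up data loosely mimicking the thread's model (all variable names here are hypothetical, not the poster's actual data): solve the normal equations for the same design with the columns in two different orders and compare the coefficients. With a near-collinear design, the two orderings agree only up to floating-point rounding, which is analogous to the order-dependent discrepancies the documentation describes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Made-up regressors loosely echoing the thread's model: a treatment
# indicator, a visit index, a continuous baseline, and a near-duplicate
# of the baseline that makes the normal equations ill-conditioned.
base = rng.normal(50, 5, n)
treat = rng.integers(0, 2, n).astype(float)
visit = rng.integers(1, 5, n).astype(float)
near_dup = base + rng.normal(0, 1e-3, n)   # nearly collinear with `base`

X1 = np.column_stack([np.ones(n), treat, visit, base, near_dup])
y = 1.0 + 0.5 * treat - 0.2 * visit + 0.03 * base + rng.normal(0, 1, n)

# Same design matrix with the columns permuted: baseline terms first.
perm = [0, 3, 4, 1, 2]
X2 = X1[:, perm]

# Solve the normal equations directly (a stand-in for the sweep).
b1 = np.linalg.solve(X1.T @ X1, X1.T @ y)
b2 = np.linalg.solve(X2.T @ X2, X2.T @ y)

# Map the permuted solution back to the original column order.
b2_back = np.empty_like(b2)
b2_back[perm] = b2

diff = np.max(np.abs(b1 - b2_back))
print("max coefficient difference across orderings:", diff)
```

Here `near_dup` manufactures the ill-conditioning; with a well-conditioned design the two orderings agree essentially to machine precision, and the difference only becomes visible as conditioning degrades.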

Thanks

Accepted Solutions

Solution

03-18-2016 06:44 AM


03-17-2016 06:22 PM

Use any measure of influence you like to pick the importance order, and then test the result with different orders.

This is not unique to PROC MIXED repeated measures, or even to SAS. I first ran into this with modeling procedures in SPSS. Our house rule when using that procedure (whose name I can't remember, since it has been 12 or more years now) was to move the most "influential" variable of the model to the last position in the model statement and see whether it retained its influence, or whether it was even still statistically significant.

Why? Order of operations: the algorithm has to start somewhere, so some of the process is done in the order in which the variables appear. Since trying all permutations of the variables to find an "ideal" solution could carry a significant performance cost, the internal process doesn't do that. Of course, defining "ideal" might be a challenge all by itself. If I get different r-squares, for example, using different orders of the variables, then which is the correct r-square to actually use?
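The "test the result with different orders" advice above can be automated. A small Python sketch (NumPy only, on made-up data, since the thread's dataset isn't available): fit the same ordinary least squares model under every permutation of the regressors and look at the spread of the resulting r-square values. For a well-conditioned fit the spread sits at rounding level, which suggests that when different orders give visibly different answers, the culprit is numerical conditioning (or, for an iterative procedure like PROC MIXED, a different optimization path) rather than a genuinely order-dependent model.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 100
X = rng.normal(size=(n, 3))            # three made-up regressors
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

def r_squared(Xo, y):
    # Ordinary least squares via lstsq, with an intercept column added.
    Xd = np.column_stack([np.ones(len(y)), Xo])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Refit under every ordering of the regressor columns.
r2s = [r_squared(X[:, list(p)], y) for p in itertools.permutations(range(3))]
spread = max(r2s) - min(r2s)
print("r-square spread across all orderings:", spread)
```

Exhaustive permutation is only feasible for a handful of effects; the answer's heuristic of moving the single most influential variable last is the cheap version of the same check.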

All Replies



03-18-2016 06:43 AM

Thank you very much. I did not know it could also be like this with SPSS or other software; I only noticed it recently when using SAS. Thanks, I will investigate this issue more deeply following your advice.

Happy to hear any other experience here.