pdortho
Calcite | Level 5

I think I left out an important bit of information, so I want to get your thoughts on whether it would change how the model is specified. The observational unit in our model is the individual patient, not the surgeon. We are looking at 90-day hospital readmission after surgery for patients nested within surgeons, and surgeons may or may not work at more than one hospital. So, would we even need year as a repeated measure, since we are not measuring individual patient outcomes repeatedly from year to year? Does year even need to be a random effect at all? I cannot imagine that patients operated on in 2018 would be more similar to each other than they are to patients operated on in 2019.

 

The last wrinkle in all of this is how to handle patients. Most patients appear in the dataset only once, for a single surgery, but some appear twice if they have a second, contralateral surgery. Should patient be a repeated effect, then, since some patients could have had multiple surgeries?

 

Thanks everyone for your responses.

 

Patrick

SteveDenham
Jade | Level 19

Interesting, and this may be helpful. In this case I would view each patient as a trial, each readmission as an event, the surgeon as the "experimental unit," and year as a repeated effect if and only if surgeons are measured in each year. The contralateral surgeries can be regarded as independent for an initial analysis. Depending on how many there are, they might eventually be treated as repeated measures, but for a first pass, consider each surgery a separate trial. How many surgeons are observed in all years, and how many years are included in the data? The answers to those questions could lead to a simpler model that uses the binomial distribution rather than the binary (Bernoulli) distribution.
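If the counts work out and a binomial model is used, the surgery-level records would first be rolled up to events/trials counts per surgeon (and per year, if year stays in the model). Here is a minimal sketch of that step, assuming a surgery-level dataset named surgeries with a 0/1 readmission flag readmit90, a surgeon identifier npi, and a year variable yr (those names are placeholders, not taken from your data):

proc sql;
   create table surgeon_counts as
   select npi, yr,
          sum(readmit90) as events,    /* numerator: 90-day readmissions */
          count(*)       as trials     /* denominator: surgeries performed */
   from surgeries
   group by npi, yr;
quit;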

 

SteveDenham

researchstats
Calcite | Level 5

Hello,

 

So for this dataset, we have about 700k patients (surgeries) and 3 years of data. There are about 700 surgeons and about the same number of hospitals. If you do not mind, since events/trials syntax is new to us, what would that SAS code look like?

Perhaps something like this:

model allreads90/patientID = region season covar1 covar2 covar3 / options

I'm assuming then that we can change the distribution type, and keep the random effects statements the same (as discussed above)? 

 

Thank you again!  We really appreciate it. 

pdortho
Calcite | Level 5

There are 3 years of data.

 

7,488 out of 12,126 surgeons have data for all 3 years. I believe 700 was a typo in the previous post.

 

123,728 out of 726,597 patients had more than 1 surgery.

 

Best regards,

 

Patrick

SteveDenham
Jade | Level 19

Now I just have some random thoughts. I doubt you can estimate a decent residual effect with only 3 years of data and slightly more than half the subjects observed in all 3 years. I would suggest using year (yr) strictly as a fixed effect, without any repeated/residual modification. In that case it serves as a "nuisance" variable whose effect you account for so that your tests of the other fixed effects are more powerful. You might consider something like this:

 

proc glimmix data=newdata_from_add8 method=rspl;
   class Region(ref="1") Season(ref="1") RaceKey(ref="1") gender(ref="1")
         agegrp(ref="1") Component_Major(ref="1") yr npi surg;
   /* Events/trials syntax: readmissions out of surgeries (binomial response) */
   model sumallreads90/Number_surgeries = region season racekey gender agegrp
         component_major wgtcci yr / solution oddsratio;
   /* Random intercepts: one for npi, one for surg nested within npi */
   random intercept / subject = npi;
   random intercept / subject = surg(npi);
   lsmeans region season / oddsratio ilink diff cl adjust=bon;
run;

Here, sumallreads90 is the number of readmissions per surgeon and Number_surgeries is the total number of surgeries per surgeon. If this model runs, you may then want to include the fixed interactions of region and season with yr, as sketched below.
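If it helps, a minimal sketch of that extension (same assumed dataset and variable names as above; only the MODEL statement changes):

proc glimmix data=newdata_from_add8 method=rspl;
   class Region(ref="1") Season(ref="1") RaceKey(ref="1") gender(ref="1")
         agegrp(ref="1") Component_Major(ref="1") yr npi surg;
   /* region*yr and season*yr test whether those effects shift across years */
   model sumallreads90/Number_surgeries = region season racekey gender agegrp
         component_major wgtcci yr region*yr season*yr / solution oddsratio;
   random intercept / subject = npi;
   random intercept / subject = surg(npi);
run;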

 

SteveDenham

 

researchstats
Calcite | Level 5

Hello Steve,

 

Thanks again for your input. We took 'year' out of the repeated/random structure as you suggested and included it only as a fixed effect, since we have just 3 years of data, and the model ran. The results look appropriate!

 

Regards,

 

Patrick 
