
Degrees of freedom involved in Wald tests for parameter estimates with NLMIXED


By default, the degrees of freedom associated with Wald tests of parameter estimates in PROC NLMIXED are:

#Subjects - #Random-Effects Parameters

 

My question is: why doesn't the number of fixed-effect parameters being estimated reduce these degrees of freedom?

 

For example, I can take a dataset with a given number of subjects and fit two different models, one with fewer fixed-effect parameters (simpler) and one with more (more complex), and yet the Wald-test DF for all parameters (fixed and random) is the same in both scenarios.
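Something like the following illustrates what I mean (the data set, variable names, and starting values are hypothetical). Both fits have a single random intercept per subject, so both report the same default DF, even though the second model has twice as many fixed-effect parameters:

/* Simpler model: 2 fixed-effect parameters, 1 random effect */
proc nlmixed data=mydata;
   parms b0=0 b1=1 s2u=1 s2e=1;               /* starting values */
   pred = b0 + b1*time + u;                   /* fixed effects plus random intercept u */
   model y ~ normal(pred, s2e);
   random u ~ normal(0, s2u) subject=id;
run;

/* More complex model: 4 fixed-effect parameters, same single random effect */
proc nlmixed data=mydata;
   parms b0=0 b1=1 b2=0 b3=0 s2u=1 s2e=1;
   pred = b0 + b1*time + b2*time*time + b3*x + u;
   model y ~ normal(pred, s2e);
   random u ~ normal(0, s2u) subject=id;
run;

/* In both cases the "Parameter Estimates" table shows DF = (#subjects) - 1 */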

 

Shouldn't I lose degrees of freedom as I add more fixed-effect parameters?

Re: Degrees of freedom involved in Wald tests for parameter estimates with NLMIXED

A copy of the answer I received from Dr. Ed Vonesh, with references to his book Generalized Linear and Nonlinear Models for Correlated Data (SAS Institute):

 

You ask a great question. The question of what denominator DF (DDF) one should use with nonlinear mixed-effects (NLME) models is a difficult one. As there is no unifying theory on what the underlying distribution of the corrected Wald test statistic is under an NLME model, we are faced with choosing a DDF option that allows a somewhat conservative approach to constructing tests and confidence intervals that would otherwise be far too liberal using standard asymptotic distributions (the z-test or chi-square test).

Use of a t-test or F-test with DDF = (n - v), where n = number of subjects and v = number of random effects, will, in most applications, provide a conservative p-value (or conservative confidence interval) when n is "small". Even then, the use of DDF = (n - v) can run into problems - see example 5.4.1 and the discussion of DDF = 4 (pp. 295-296). The problem with using something like DDF = (n - s - v), where s = number of regression parameters that need to be estimated, is that you could run into negative DDF estimates, as shown in the Orange Tree example (pp. 295-296).

 

Alternatively, as pointed out in one of my earlier publications (see page 8 of Vonesh and Carter, "Mixed-Effects Nonlinear Regression for Unbalanced Repeated Measures", Biometrics, 48: 1-17, 1992), Gallant suggested using the corrected Wald F-test, T-square/NDF (where NDF is the numerator degrees of freedom for a particular contrast of interest), in conjunction with tabulated values of the F-distribution F(NDF, N - s), where, for p repeated measurements per subject, N = np is the total number of observations (not subjects) and s is the total number of regression parameters. So this is another option you could use, namely DDF = N - s. However, I would suspect that in most applications, the use of DDF = (n - v) will lead to more conservative inference than the use of DDF = (N - s). That being said, you can always specify your own value for DDF that best meets the needs of a particular application.
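For what it's worth, the DDF override he mentions at the end is the DF= option on the PROC NLMIXED statement. A minimal sketch (the data set, variables, and starting values are hypothetical, and DF=32 simply stands in for an N - s type choice):

/* Override the default denominator DF (subjects minus random effects) */
/* with a user-specified value, e.g. DDF = N - s = 35 - 3 = 32          */
proc nlmixed data=mydata df=32;
   parms b1=1 b2=1 b3=1 s2u=1 s2e=1;              /* hypothetical starting values */
   pred = b1 / (1 + exp(-(time - b2)/b3)) + u;    /* nonlinear mean plus random intercept u */
   model y ~ normal(pred, s2e);
   random u ~ normal(0, s2u) subject=id;
run;

And to see why DDF = (n - s - v) can go negative, take some made-up counts: with 10 subjects, 2 random effects, and 12 regression parameters, n - s - v = 10 - 12 - 2 = -4, while the default n - v = 8.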

 
