Re: Is PROC GLM or PROC MIXED appropriate for a on...

08-13-2012 08:18 AM

The PROC MIXED code is close, but a few edits would give what you need:

PROC MIXED DATA = *my_data_in_a_long_format*;
CLASS IDN SUBSTANCE;
MODEL VALUEE = SUBSTANCE;
REPEATED **SUBSTANCE** / SUBJECT = IDN TYPE = **CSH** R RCORR;
LSMEANS **SUBSTANCE** / **DIFF** ADJUST = TUKEY CL;
RUN;

Edits are bolded. Because your data are repeated on each subject rather than in time, I think a heterogeneous compound symmetry (CSH) structure is probably in order, since the controls were much more similar. The correlations are assumed to be equal, but the variance for each substance is estimated separately (the diagonal elements). There really is not enough data to accurately fit an unstructured covariance structure.
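To make the CSH pattern concrete, here is a small sketch in plain Python (not SAS; the standard deviations and correlation are invented purely for illustration): every pair of substances shares one correlation, but each substance keeps its own variance on the diagonal.

```python
# Hypothetical per-substance standard deviations and one shared correlation,
# chosen only to illustrate the CSH (heterogeneous compound symmetry) pattern.
sigmas = [1.0, 2.0, 0.5]   # one standard deviation per substance
rho = 0.4                  # single common correlation for every pair

def csh_cov(sigmas, rho):
    """Build a CSH covariance matrix: equal correlations, unequal variances."""
    k = len(sigmas)
    return [[(sigmas[i] ** 2 if i == j else rho * sigmas[i] * sigmas[j])
             for j in range(k)] for i in range(k)]

R = csh_cov(sigmas, rho)
for row in R:
    print(row)
```

The diagonal entries (1.0, 4.0, 0.25) differ, which is what lets one substance (e.g. the controls) be less variable than the others; plain CS would force all three to be equal.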

The lsmeans statement is edited to reflect marginal means of substances, with differences between them tested after Tukey's adjustment.

But there is one other thing to consider: your data are percentages, so they are restricted to the range 0-100 (as proportions, 0 to 1). PROC MIXED ignores this restriction, which would probably not be a problem if your values were in the 20% to 80% range. However, looking at the data at the end of your link, you have a lot of values close to 1% and some less than 1%. I suggest you consider PROC GLIMMIX. For example:

PROC GLIMMIX DATA = *my_data_in_a_long_format*;
**VALUEp = VALUEE/100;**
CLASS IDN SUBSTANCE;
MODEL VALUEp = SUBSTANCE / **DIST=BETA**;
**RANDOM SUBSTANCE / RESIDUAL SUBJECT=IDN TYPE=CSH;**
LSMEANS **SUBSTANCE** / **DIFF ILINK** ADJUST = TUKEY CL;
RUN;

The first bold statement converts the percentages to proportions. The RANDOM statement with the RESIDUAL option is the GLIMMIX way of dealing with repeated measures. I chose the beta distribution primarily as a tribute to the time-honored arcsine-square-root transformation for percentages/proportions. You could also use DIST=BINOMIAL, but with your data it needs some tweaking, as the log-likelihood gets into a flat region. In the LSMEANS statement, I added the ILINK option to report means on the original scale as well as on the link scale.

This gave a nice result.
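For readers unfamiliar with the transformation mentioned above, the arcsine-square-root transform is easy to sketch in Python (inputs invented, purely for intuition; this is the classical pre-GLIMMIX workaround, not something run by the SAS code here):

```python
import math

def arcsine_sqrt(p):
    """Classic variance-stabilizing transform for a proportion p in [0, 1]."""
    return math.asin(math.sqrt(p))

# The transform stretches the scale near 0 and 1, where binomial
# variance shrinks -- compare spacing of the transformed values below.
for p in (0.01, 0.5, 0.99):
    print(p, arcsine_sqrt(p))
```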

Steve Denham


08-05-2013 12:49 PM

Discussion forum weirdness. I didn't reactivate this thread intentionally. I did try to paste a link that resulted in the "null" green-box when I hit "Add Reply" for another thread. I hope I didn't break one of my favorite internet toys.

Steve Denham


08-05-2013 01:09 PM

Dear Steve,

I've followed your suggestion to use PROC GLIMMIX, though I chose the binomial distribution. But now I face an obstacle: with DIST=BINOMIAL and TYPE=CSH, different data sets give different outputs with opposite convergence statuses. Changes in DIST (beta, binomial) and TYPE (CSH, CS, UN) sometimes give the expected results and sometimes don't. It seems strange to me to have to play with the parameters for data sets of the same nature. I guess I'm doing something wrong... What am I missing?

06 August 2013, morning UPDATE:

When 'Did not converge.' appears, the 'Standard Error' column of the 'Covariance Parameter Estimates' table and the 'Change' column of the 'Iteration History' table are filled with '.'. So I conclude that my model is wrong from time to time. And why is there no 'Information Criteria' table in the PROC GLIMMIX output? I use PROC MIXED to get it instead.

Sincerely,

Stan

P.S.

Yesterday, I unknowingly hit the 'Branch' button and split the thread. Unfortunately, I don't know how to correct it... Also, due to site maintenance, I wasn't able to finish editing the post.

So the initial post is actually the answer I marked as 'Correct' for my question asked here.


08-05-2013 03:10 PM

Stan, it is not at all unusual to get differing convergence statuses for the various covariance structures, especially with the binomial distribution. It depends on the number of parameters being estimated relative to the amount of data, and on the convergence criteria that are applied. GLIMMIX has a default of only 20 likelihood iterations, which is often not nearly enough to achieve convergence at the default criteria. I don't think you are doing anything wrong; you may just not have enough experience yet to troubleshoot what is happening with non-convergence.

Steve Denham


08-06-2013 02:58 AM

Steve,

many thanks for your reply. I'll try changing the number of iterations (and the default convergence criteria...?). Unfortunately, I'm not able to increase my sample size...

Stan


08-06-2013 09:36 AM

It seems we're never able to change the sample size, yet we still have to produce an analysis. Good luck with the NLOPTIONS statement. Also note that outer loop convergence is handled through the PROC GLIMMIX option PCONV, while inner loop convergence is handled by a whole bunch of options in NLOPTIONS. Outer loop is what you're after, unless you get warning messages about inner loop convergence in the log.

Steve Denham


12-12-2013 09:29 AM

Dear Steve,

as you advised, I set MAXOPT=500 and PCONV=1E-7 and saw "Convergence criterion (PCONV=1E-7) satisfied." for DIST=BINOMIAL TYPE=CSH. But as I said, without these manipulations convergence wasn't achieved for some data sets under those DIST= and TYPE= options. My question is: can't these manipulations be seen as questionable...? Well, Steve, I do trust you, but how could I explain this to a broad audience (my supervisor, etc.)? (I set MAXOPT empirically: if not 50, then 100, 200, and finally 500.)

And another short question. PROC GLIMMIX creates output with LS-means and their differences, along with the p-values, t-values, etc. relevant to them. But I'm a bit confused by a *negative* LS-mean when I measure a *positive* event... If it is possible, how do I interpret the output in terms of arithmetic means? As far as I understand, we should report results as *< Statistic alpha (n.d.f.; d.d.f.) = value, p-value >*, and I can't couple PROC SUMMARY's arithmetic mean with the output's statistics. Right?

Sorry for naive questions.

Thank you.


12-12-2013 09:54 AM

Part of fitting generalized linear mixed models is accepting that your data are unique, and that default options need to be adjusted in a way that allows you to obtain interpretable results. Given that, relaxing the convergence criteria does bring up some problems, but these can usually be addressed by using different starting values and checking whether you converge to the same place (or within a small epsilon of it).

The LSmeans with dist=binomial are logits, so they will be negative for proportions less than 0.5. To get estimates on the original scale, be sure to use the ILINK option in the LSMEANS statement. What you get from PROC SUMMARY should only be regarded as approximate, as those estimates do not take into account any of the terms (fixed or random) or correlations that you are fitting in GLIMMIX.
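To see why a negative LS-mean is harmless here, a two-line Python sketch (not SAS output; the LS-mean value is invented): the logit of any proportion below 0.5 is negative, and the inverse link maps it back into (0, 1).

```python
import math

def inv_logit(x):
    """Inverse of the logit link: maps a logit back to a proportion in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# A negative LS-mean on the logit scale is simply a proportion below 0.5.
lsmean_logit = -2.0            # hypothetical logit-scale LS-mean
print(inv_logit(lsmean_logit)) # the kind of value ILINK reports in 'Mean'
```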

Regarding the failure to converge for some datasets--try adding an NLOPTIONS statement.

NLOPTIONS tech=NRRIDG;

The ridged Newton-Raphson technique (NRRIDG) often works better for the binomial response.

Steve Denham


12-12-2013 12:33 PM

Steve, thank you very much for the explanations (!).

But I use the following line in GLIMMIX:

LSMEANS Condition / DIFF **ILINK** ADJUST = TUKEY CL PLOTS = DIFFOGRAM(NOABS CENTER);

And the output contains those negative LS-means, their differences, and so on...


12-12-2013 12:50 PM

Look in the output. There should be a column labelled 'Estimate' and another labelled 'Mean'. The Estimate is the LSmean--expressed as a logit. The Mean is the probability/proportion on the original scale. Those values should NOT be negative.

Steve Denham


12-14-2013 05:40 AM

Steve, it's like a joke: I found them in the "Least Squares Means" table! And they are proportions, just as you told me (if I multiply them by 100, they match the PROC SUMMARY output). (To make the picture complete: the SEMs in the GLIMMIX and SUMMARY outputs differ a little, maybe because of GLIMMIX.)

Could you please confirm: the "Lower Mean" and "Upper Mean" columns in GLIMMIX's "Least Squares Means" table are 95% confidence limits for the original-scale means, and "Lower" and "Upper" are 95% CIs for the LS-means. Right? If so, GLIMMIX's CIs are a bit wider.

It is a bit strange to me (a nonstatistician) that GLIMMIX doesn't produce a "Differences..." table for the original-scale means. Is that possible? Or is it useless/incorrect?


12-16-2013 08:58 AM

Lower Mean and Upper Mean are indeed confidence bounds on the original scale, while Lower and Upper are for the LSmeans.

Think about differences on the original scale this way: suppose you had a log transform. The DIFF option would report the difference on the log scale, and back-transformation would give exp(A' - B'), which is exp(A')/exp(B'), or the ratio on the original scale. This is very useful in many cases. The problem with calculating the difference on the original scale is not so much the location estimate (A minus B) as the scale (standard error) associated with this difference. Because the transformations are nonlinear, the error associated with the difference is very difficult to compute, so the "standard error" of the difference and confidence bounds on the difference aren't easily obtained. If you really, really wanted to do this, you could program NLMIXED to do the fitting and use the PREDICT statement to construct a meaningful difference. By using the delta method, an approximate standard error or confidence bound can be obtained.
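The delta-method step can be sketched numerically in Python (not SAS; the logit estimates and standard errors are invented, and the two estimates are treated as independent for simplicity): each logit-scale variance is scaled by the squared derivative of the inverse link, p(1 - p).

```python
import math

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical logit-scale LS-means and standard errors for two groups.
logit_a, se_a = -1.0, 0.20
logit_b, se_b = -2.0, 0.25

# Back-transformed proportions and their difference on the original scale.
p_a, p_b = inv_logit(logit_a), inv_logit(logit_b)
diff = p_a - p_b

# Delta method: d/dx inv_logit(x) = p * (1 - p), so each logit-scale SE is
# multiplied by that derivative before the variances are combined.
var_diff = (p_a * (1 - p_a) * se_a) ** 2 + (p_b * (1 - p_b) * se_b) ** 2
se_diff = math.sqrt(var_diff)

print(diff, se_diff)
```

Note how the transformed standard errors shrink near 0 and 1, which is exactly why the original-scale SE cannot be read off the logit-scale output directly.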

Steve Denham


04-16-2014 10:40 AM

Dear Steve,

previously you advised me the NRRIDG technique in NLOPTIONS:

...Regarding the failure to converge for some datasets--try adding an NLOPTIONS statement.

NLOPTIONS tech=NRRIDG;

The ridged Newton-Raphson technique (NRRIDG) often works better for the binomial response.

I checked it, along with all of the techniques listed on the SAS/STAT(R) 9.22 User's Guide help web page, and found that NMSIMP dealt with most of my data sets.

Namely: with the CS covariance structure and DIST=BINOMIAL, I usually get "Gener. Chi-Square / DF" values at or close to zero (0.01, etc.), no matter whether NRRIDG or NMSIMP is applied, if any. (As I understand it, "Gener. Chi-Square / DF" is used to check for over-/under-dispersion, or heterogeneity, and the correctness of the model.)

As it seems reasonable to have different dispersion at different levels in my case, I checked the CSH type (as you advised). With DIST=BINOMIAL and TYPE=CSH, "Gener. Chi-Square / DF" = 1.00 in most cases. As the other cases gave "Did not converge", I tried NMSIMP: with DIST=BINOMIAL, TYPE=CSH, and NMSIMP, "Gener. Chi-Square / DF" = 1.00. Always.

So NMSIMP has helped. Is it legal to use this technique instead of NRRIDG?

And my current version of the code is:

PROC GLIMMIX DATA = ff_long_sorted ORDER = DATA **MAXOPT = 500** PCONV = 1E-8;
VALUEp = VALUEE/100;
CLASS ExpID Condition;
MODEL VALUEp = Condition / DISTRIBUTION = **BINOMIAL** DDFM = KENWARDROGER;
RANDOM Condition / RESIDUAL SUBJECT = ExpID TYPE = CSH;
*RANDOM _RESIDUAL_ / SUBJECT = ExpID TYPE = CSH; /* if you don't have missing values */
NLOPTIONS TECHNIQUE = **NMSIMP** **MAXITER = 500**;
LSMEANS Condition / **ADJDFE = ROW** DIFF ILINK ADJUST = TUKEY CL PLOTS = DIFFOGRAM(NOABS CENTER);
ODS SELECT ConvergenceStatus FitStatistics Tests3 DiffPlot;
RUN;

Sincerely,

Stan


04-16-2014 11:55 AM

One point: the goal is not to achieve a 0 for chi-square/df. Lack of overdispersion is indicated by chi-square/df = 1. Values less than 1 indicate underdispersion. It is OK to have a value less than 1, but you should not be including terms to specifically achieve this.

For some model formulations (some choices for the random effects structure), the model fit will guarantee a value of 1.

The NMSIMP method does not use derivatives in the optimization. It is fine to use, but it is usually very slow to converge.
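The overdispersion check discussed above can be sketched numerically (Python, with invented proportions and fitted values; for a binomial-type fit, each squared residual is divided by the binomial variance, and the sum over degrees of freedom plays the role of "Gener. Chi-Square / DF"):

```python
# Pearson chi-square / DF as a rough overdispersion check for a binomial fit.
# Invented data: observed proportions y, fitted means mu, trials n per unit.
y  = [0.10, 0.30, 0.55, 0.20]   # hypothetical observed proportions
mu = [0.15, 0.25, 0.50, 0.25]   # hypothetical fitted proportions
n  = [20, 20, 20, 20]           # binomial denominators

# Each term: (y - mu)^2 / (mu * (1 - mu) / n), the squared Pearson residual.
chi2 = sum((yi - mi) ** 2 / (mi * (1 - mi) / ni)
           for yi, mi, ni in zip(y, mu, n))
df = len(y) - 1                 # illustrative degrees of freedom

# Values near 1 suggest the variance model is adequate; well above 1,
# overdispersion; well below 1, underdispersion.
print(chi2 / df)
```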


04-16-2014 12:16 PM

lvm,

thank you for the point. I meant: is '1' an indicator of a correct model formulation or not? At the beginning Steve advised me the code, but we didn't emphasize this chi-square/df statistic, which is why I ask. Is the logic in post №12 incorrect?

stan