PROC GENMOD: I get non-estimable least squares mea...


08-07-2015 06:27 PM

I ran the PROC GENMOD code shown below. The output shows that the least squares means for a binary variable, "Q", are non-estimable, but there is an estimated difference in least squares means between "Q = 1" and "Q = 0".

How is this possible? I thought that the difference in least squares means is calculated by subtracting the two least squares means. If my least squares means are non-estimable, then shouldn't my difference in least squares means be non-estimable, too?

Thanks for your thoughts.

proc genmod data = mydata;
    class Q (ref = '0') X (ref = '1') W (ref = '1') R;
    model successes/trials = Q X W X*W Q*X Q*W / dist = bin link = logit;
    repeated subject = R;
    lsmeans Q / exp diff cl e;
    lsmeans X / exp diff cl e;
    lsmeans W / exp diff cl e;
    lsmeans X*W / exp diff cl e;
    lsmeans Q*X / exp diff cl e;
    lsmeans Q*W / exp diff cl e;
run;


08-08-2015 07:05 AM

Normally I don't try to answer these kinds of questions, but it is Saturday morning and I don't know if the experts will see this question until Monday. I am not an expert in this area, but I'll give it a shot.

In theory, it is possible to estimate the difference of means without being able to estimate the means themselves. Suppose I collect data for two variables, X and Y. I don't give you the original data, but instead decide to subtract some reference value from both variables and then give you the adjusted values. I might subtract 7 or 13 or 321... you don't know. As a consequence of my manipulation, there is no way that you can estimate the means of the variables. However, you can easily estimate the mean of the difference X-Y, since my subtraction cancels out.
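The shifting trick above can be sketched in a few lines of Python (the numbers and the hidden reference value are invented purely for illustration):

```python
import numpy as np

# A toy version of the analogy: shift both variables by a reference
# value c that the analyst never sees.
rng = np.random.default_rng(1)
x = rng.normal(10.0, 2.0, size=1000)
y = rng.normal(4.0, 2.0, size=1000)

c = 321.0                      # the hidden reference value
x_adj, y_adj = x - c, y - c    # this is all the analyst receives

# The adjusted means are off by the unknown c, so the true means
# cannot be recovered from them...
print(x_adj.mean(), x.mean())

# ...but the mean of the difference is untouched, because c cancels.
print((x_adj - y_adj).mean(), (x - y).mean())
```

The same cancellation is what makes a difference of least squares means estimable even when the individual means are not.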

In your case, you have a binary variable Q. The level Q=0 is not estimable because it is getting lumped in with the intercept term. (I assume the level Q=1 has an estimate, right?) However, you can estimate the incremental effect of Q=1 as compared to Q=0, which means that you can estimate the difference.
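To make this concrete with the standard linear-model machinery: a common way both LS-means become non-estimable while their difference stays estimable is an empty cell in a nuisance interaction (such as X*W in the model above). The Python sketch below is a hypothetical design, not the poster's data: factors Q, A, and B, where the A=1, B=1 cells are never observed. It applies the usual estimability test, that l'beta is estimable if and only if l lies in the row space of the design matrix.

```python
import numpy as np

# Hypothetical design: intercept, Q, A, B, A*B (dummy coding). Every
# (Q, A, B) combination is observed EXCEPT cells with A=1 and B=1, so
# the A*B column is all zero and its coefficient cannot be estimated.
rows = []
for Q in (0, 1):
    for A in (0, 1):
        for B in (0, 1):
            if A == 1 and B == 1:
                continue  # the missing cell
            rows.append([1, Q, A, B, A * B])
X = np.array(rows, dtype=float)

def is_estimable(ell, X, tol=1e-8):
    """l'beta is estimable iff l lies in the row space of X,
    i.e. l @ pinv(X) @ X reproduces l."""
    return np.allclose(ell @ np.linalg.pinv(X) @ X, ell, atol=tol)

# LS-mean coefficient vectors: average over the levels of A, B, and A*B.
l_q1 = np.array([1, 1, 0.5, 0.5, 0.25])   # LS-mean at Q = 1
l_q0 = np.array([1, 0, 0.5, 0.5, 0.25])   # LS-mean at Q = 0

print(is_estimable(l_q1, X))          # False: touches the inestimable A*B term
print(is_estimable(l_q0, X))          # False: same reason
print(is_estimable(l_q1 - l_q0, X))   # True: the A*B part cancels
```

Both LS-mean vectors put weight 0.25 on the inestimable A*B coefficient, so each one fails the test; in the difference that weight subtracts away, leaving only the Q coefficient, which is estimable. Requesting the E option on LSMEANS (as the original code does) prints these coefficient vectors, so you can see exactly which terms fail to cancel.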


08-10-2015 12:46 PM

This can easily happen with fixed-effect models.


08-10-2015 06:11 PM

Thanks for your reply on a Saturday morning, Rick!

No, actually - I do not get an estimate for Q = 1 in the least squares means output. Both Q = 1 and Q = 0 are non-estimable in the least squares means table.

As I understand it, the difference in least squares means is calculated by subtracting the two least squares means, which makes my result very strange. If both least squares means are non-estimable, then how can the difference between them be estimable?