niam
Quartz | Level 8

Hello

I have a panel data set and have estimated two regression models with the same set of independent variables but different response variables. I estimated each model separately by the random-effects method. Please note that I have not used the SUR method, so I ran one regression for each model.

How can I test the difference between the coefficients of the same variable in the two models?

If I have

Y1 = a*X1 + b*X2

Y2 = c*X1 + d*X2

then how can I test the difference between a and c?

In a journal article I saw a simple t-test used for the difference between the means of two groups with equal sample sizes, but I do not think that is correct here.

Please let me know if you have any suggestions for testing this using SAS.

Thanks for reading this post


6 REPLIES
SteveDenham
Jade | Level 19

It isn't often I look at standardized coefficients, but this looks like a situation where they would be useful. Think about it: suppose Y1 is, on average, an order of magnitude larger than Y2. You would expect coefficient "a" to be much (significantly) larger than "c", but that tells us little about what is going on. Without doing SUR, the best I could offer would be to examine confidence bounds on the standardized coefficients, and to couch the response in estimation terms rather than in testing terms.

Steve Denham

jrbrauer
Fluorite | Level 6

It is easy to find basic tests for coefficient equality across regression equations (e.g., see the Paternoster et al. 1998 article published in the journal Criminology). However, random-effects modeling adds a layer of complexity, and I'm not sure such tests are applicable within the same sample using different outcome variables.

I would follow Steve Denham's logic of examining the relative size of standardized effects, with one modification: I would standardize the outcome variables rather than the predictor variables. I might be wrong, but unless the outcome variable is standardized, I don't think examining standardized coefficients solves the 'order of magnitude' problem Steve refers to (X1 is measured the same way in both equations and thus has the same distribution; it is the Y1 and Y2 distributions that differ). With this approach, a one-unit increase in X1 will be associated with an 'a' standard-deviation increase in Y1 and a 'c' standard-deviation increase in Y2, which makes the coefficients more comparable. Then, as Steve suggests, I would examine whether the confidence intervals around the a and c coefficients overlap; if they overlap, you would conclude that the effects are not significantly different from one another.

(Note that while standardizing Y1 and Y2 will produce more comparable coefficients, I don't think standardization is necessary if one only examines whether confidence intervals overlap, since the standard errors already reflect the distributional differences across the variables. Then again, it's been a long day, so someone sharper than me should confirm. Also note that this approach uses the same logic applied in tests of mediation hypotheses; see http://www.quantpsy.org/pubs/preacher_hayes_2008b.pdf.)
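If it helps, here is a rough SAS sketch of the standardize-and-compare idea. The data set name PANEL, the panel identifier FIRM, and the variable names are all placeholders for your own; adjust the random-effects specification to match what you actually fit:

/* Standardize the responses to mean 0, SD 1
   (PANEL, FIRM, Y1, Y2, X1, X2 are placeholder names) */
proc stdize data=panel out=panel_std method=std;
   var y1 y2;
run;

/* Random-intercept model for each standardized response;
   SOLUTION and CL print the fixed effects with 95% CIs */
proc mixed data=panel_std;
   class firm;
   model y1 = x1 x2 / solution cl;
   random intercept / subject=firm;
run;

proc mixed data=panel_std;
   class firm;
   model y2 = x1 x2 / solution cl;
   random intercept / subject=firm;
run;

You could then eyeball whether the X1 intervals from the two runs overlap.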

Jon Brauer

niam
Quartz | Level 8

Thanks for your great feedback.

In Xue et al., "Customer Efficiency, Channel Usage, and Firm Performance in Retail Banking," published in M&SOM (2007), they suggest comparing the coefficients with a simple t-test. For example, if the variances of a and c are Var(a) and Var(c), then by assuming that a and c are independent, Var(a - c) = Var(a) + Var(c), so one tests the hypothesis a - c > 0 with the statistic (a - c)/sqrt(Var(a) + Var(c)).

Their approach seems to be an easy way to solve this problem; however, I am not sure it is the correct way to do so. What is your opinion?
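For concreteness, their statistic could be computed in a DATA step like this (the numbers below are made up; in practice a, c, Var(a), and Var(c) would come from the two regression outputs):

data ztest;
   a     = 0.42;   /* X1 coefficient from the Y1 model (made-up value) */
   c     = 0.25;   /* X1 coefficient from the Y2 model (made-up value) */
   var_a = 0.004;  /* Var(a) reported by the first model (made-up) */
   var_c = 0.006;  /* Var(c) reported by the second model (made-up) */
   z     = (a - c) / sqrt(var_a + var_c);  /* assumes Cov(a,c) = 0 */
   p_one = 1 - probnorm(z);                /* one-sided test of a - c > 0 */
run;

proc print data=ztest; run;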

SteveDenham
Jade | Level 19

Jon's suggestion about standardizing the response variables is spot on for addressing the scaling problem, and is what I was really after, rather than standardized coefficients.

As far as the simple t-test goes, I disagree with the authors, as I really do not believe that the estimates a and c are independent. If Y1 and Y2 are correlated in any way, then it seems to me that the regression coefficients for an explanatory variable (X1) in the two models will be correlated as well. And that means the confidence intervals I suggested need to be adjusted for that correlation too. Given all of this, I really wonder whether this method is valid without accounting for the correlation between Y1 and Y2. Multivariate regression, perhaps?
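One possibility, and this is only an untested sketch: stack the two responses and fit them jointly in PROC MIXED, which lets the Y1 and Y2 residuals correlate within a firm-period and gives a direct test of a - c. Here FIRM and TIME stand in for whatever your panel identifiers are, and per Jon's point the responses should probably be standardized first:

/* Stack Y1 and Y2 into a single response with an indicator */
data stacked;
   set panel;
   resp = 'Y1'; y = y1; output;
   resp = 'Y2'; y = y2; output;
run;

proc mixed data=stacked;
   class firm time resp;
   /* separate intercept and slopes for each response */
   model y = resp resp*x1 resp*x2 / noint solution cl;
   /* response-specific random intercepts by firm */
   random intercept / subject=firm group=resp;
   /* unstructured 2x2 covariance between the Y1 and Y2
      residuals within the same firm-period */
   repeated resp / subject=firm*time type=un;
   /* direct test of the difference in X1 slopes (a - c) */
   estimate 'a - c' resp*x1 1 -1;
run;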

Steve Denham

jrbrauer
Fluorite | Level 6

The article you cited appears to be using the same or a similar formula to the one described in the Paternoster et al. 1998 article I mentioned. The article can be found here: http://www.udel.edu/soc/faculty/parker/SOCI836_S08_files/Paternosteretal_CRIM98.pdf . Note, however, that the formula described, (a - c)/sqrt(SEa^2 + SEc^2), is a z-test appropriate for comparing the equality of linear regression coefficients across independent samples, and it assumes both models are specified the same way (i.e., same IVs and same DV). (Also note that if you use non-linear transformations or link functions (e.g., as in logistic, Poisson, or tobit models), these tests are inappropriate as well, since the coefficients are on a transformed scale and are not directly comparable across models. I assume you are using a linear regression technique.)

To compare coefficients within the same regression equation in SAS, you can use the TEST statement in PROC REG (see: http://www.ats.ucla.edu/stat/sas/webbooks/reg/chapter4/sasreg4.htm). To compare across equations that use different IVs, the same DV, and the same sample, you should be able to apply the logic used in tests of mediation hypotheses (for discussions, articles, and programs, see Andrew F. Hayes' page: http://www.afhayes.com/).
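For example, for the within-equation case, something like this should work (PANEL and the variable names are placeholders, and note this ignores the panel structure):

/* TEST statement: within one equation, test whether the
   X1 and X2 coefficients are equal */
proc reg data=panel;
   model y1 = x1 x2;
   test x1 = x2;
run;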

As Steve notes, having two different (and likely correlated) DVs complicates matters. It is possible that the MTEST statement in PROC REG will do what you want (http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_reg_sect014....). However, as mentioned, random-effects modeling adds another layer of complexity, and I'm not aware of a similar test procedure within PROC MIXED or PROC GLM.
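A sketch of the MTEST idea, again with placeholder names and again ignoring the random-effects structure (PROC REG treats the observations as independent):

/* Fit both responses in one PROC REG and test whether the
   X1 coefficient is the same in the two equations */
proc reg data=panel;
   model y1 y2 = x1 x2;
   mtest y1 - y2, x1;   /* H0: X1 coefficient on Y1 equals that on Y2 */
run;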

You may find additional helpful information in this thread:

http://www-01.ibm.com/support/docview.wss?uid=swg21482832

or in this article:

http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&ved=0CCYQFjAB&url=http%3A%2F%...

I'm sorry that I am unable to offer a concrete solution. If you find an appropriate solution, please share!

Jon Brauer

niam
Quartz | Level 8

Thank you very much for your help. I will share any updates on this issue.


