Thanks so much for your reply!
However, I feel that the ultimate solution might be related to the discussion here on Cross Validated: https://stats.stackexchange.com/questions/453715/how-do-you-apply-constrains-on-parameters-in-bayesian-modeling
Following that discussion, where they suggest that constraints in Bayesian regression should be built into the conditional distributions, I introduced the changes below, which produced better (but still not accurate) results. The response probability (already a conditional probability in IRT models), P(Y=1|theta), needs to be further modified by the additional condition ItemOne1 = ItemOne2. That is, for this item, the response probabilities behave in the same way regardless of the group-specific theta. Below, theta_group1 ~ N(0,1), theta_group2 ~ N(mu,1), and mu ~ N(0,1), so the Group 2 theta has a hierarchical prior.
Specifically, I added the terms (theta - ItemOne2) or (theta - ItemOne1) inside the conditional probability in my second model.
I am not sure this is exactly right, though. My reasoning is that for Group 1, item 1 (and similarly for Group 2, item 1), I want:
prob = P(Y|theta_group1), with the added condition that ItemOne1 = ItemOne2
     = P(Y|theta_group1, ItemOne1 = ItemOne2), which implies
prob = P(Y|theta_group1, ItemOne1) * P(Y|theta_group1, ItemOne2), since Y depends on ItemOne1 and ItemOne2 being the same.
From Patz and Junker's MCMC notes online, here are the conditional posteriors for this IRT model (except that I additionally have two different groups; also, I am not interested in the "a" parameter in my application).
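For reference, the response function those conditionals are built from is the 2PL model:
P(Y_ij = 1 | theta_j, a_i, b_i) = exp(a_i(theta_j - b_i)) / (1 + exp(a_i(theta_j - b_i))) = logistic(a_i(theta_j - b_i))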
ItemOne1 and ItemOne2 are the b_i in the equation above, but now ItemOne1 is from Group 1 and ItemOne2 is from Group 2, and the two groups have different theta distributions. However, the conditional probability needs to be equal regardless of theta. I am not sure how to express this, but the closest I came was the modification I mentioned. Please see below.
   prior b1: b2: ~ normal(0, var=16);
   prior a1 a2 ~ lognormal(0, var=9);
   prior ItemOne: ~ normal(0, var=16);
   parms mu_g 0;
   prior mu_g ~ normal(0, var=1);
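   /* assumed setup, not shown in my excerpt: an array for the two group
      means, with mu[1] = 0 fixed for Group 1 and mu[2] = mu_g for Group 2 */
   array mu[2];
   mu[1] = 0;
   mu[2] = mu_g;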
   random theta ~ normal(mu[group], var=1) subject=_obs_;
   llike = 0;
   do j = 1 to 5;
      if group = 1 and j = 1 then do;
         /* item 1, Group 1: both ItemOne terms enter the probability */
         prob = logistic(a1*(theta - ItemOne1)*(theta - ItemOne2));
         llike = llike + x[j]*log(prob) + (1 - x[j])*log(1 - prob);
      end;
      else if group = 1 and j > 1 then do;
         prob = logistic(a1*(theta - b1[j-1]));
         llike = llike + x[j]*log(prob) + (1 - x[j])*log(1 - prob);
      end;
      else if group = 2 and j = 1 then do;
         /* item 1, Group 2: same modification */
         prob = logistic(a2*(theta - ItemOne2)*(theta - ItemOne1));
         llike = llike + x[j]*log(prob) + (1 - x[j])*log(1 - prob);
      end;
      else if group = 2 and j > 1 then do;
         prob = logistic(a2*(theta - b2[j-1]));
         llike = llike + x[j]*log(prob) + (1 - x[j])*log(1 - prob);
      end;
   end;
All the other sections of the code stay the same; the only changes are the conditional probabilities used when updating ItemOne1 and ItemOne2. Here are my new results.
The actual results, from the SAS Press book by Stone and Zhu (apparently out of print now), are below; they differ from mine, though not as badly as in my first attempt. I feel that if I can get the right conditional probability equation, I can get the right values, but I am not sure what that equation would be. Please note that "b11" and "b21" in the Stone/Zhu code correspond to my ItemOne1 and ItemOne2 respectively, and their b12, b13, b14, b15 correspond to my b11, b12, b13, b14, and so on for the other b's.
The code for the last model, from the SAS book by Stone and Zhu, is below.
title "Unconstrained model across groups";
proc mcmc data=lsat_g outpost=lsat_bayes_1p_unconstrained seed=23 nthreads=8 nbi=5000 nmc=20000;
array b1[5]; array b2[5]; array x[5];
parms a1 a2 1; parms b1: b2: 0;
prior b1: b2: ~ normal(0, var=16);
prior a1 a2 ~ lognormal(0, var=9);
random theta ~ normal(0, var=1) subject=_obs_;
llike=0;
do j=1 to 5;
if group=1 then do;
prob = logistic(a1*(theta-b1[j]));
llike = llike + x[j] * log(prob) + (1 - x[j]) * log(1 - prob);
end;
else do;
prob = logistic(a2*(theta-b2[j]));
llike = llike + x[j] * log(prob) + (1 - x[j]) * log(1 - prob);
end;
end;
model general(llike);
run;
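The only other formulation I can think of is to impose ItemOne1 = ItemOne2 exactly, by giving item 1 a single shared difficulty parameter in both groups. Here is a sketch of what I mean (the name ItemOne, the outpost name, and the mu array setup are mine, not from the book):

title "Item 1 constrained equal across groups (sketch)";
proc mcmc data=lsat_g outpost=lsat_bayes_1p_item1shared seed=23 nthreads=8 nbi=5000 nmc=20000;
   array b1[4]; array b2[4]; array x[5]; array mu[2];
   parms a1 a2 1; parms b1: b2: 0;
   parms ItemOne 0;   /* one difficulty for item 1, shared by both groups */
   parms mu_g 0;
   prior b1: b2: ~ normal(0, var=16);
   prior ItemOne ~ normal(0, var=16);
   prior a1 a2 ~ lognormal(0, var=9);
   prior mu_g ~ normal(0, var=1);
   mu[1] = 0;         /* Group 1 mean fixed at 0 for identification */
   mu[2] = mu_g;      /* Group 2 mean gets the hierarchical prior */
   random theta ~ normal(mu[group], var=1) subject=_obs_;
   llike = 0;
   do j = 1 to 5;
      if j = 1 then do;
         /* same b regardless of group; only theta's distribution differs */
         if group = 1 then prob = logistic(a1*(theta - ItemOne));
         else prob = logistic(a2*(theta - ItemOne));
      end;
      else if group = 1 then prob = logistic(a1*(theta - b1[j-1]));
      else prob = logistic(a2*(theta - b2[j-1]));
      llike = llike + x[j]*log(prob) + (1 - x[j])*log(1 - prob);
   end;
   model general(llike);
run;

This makes the equality a hard constraint in the parameterization itself rather than trying to encode it in the response probability, but I do not know whether it reproduces the book's values.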
Please do let me know if you find a solution! Thanks very much for your reply!