Not necessarily a contradiction. It is well known that individual confidence intervals can overlap (somewhat) even when two means are significantly different, although many people are misled by the individual confidence intervals. This is true even with uncorrelated means (i.e., uncorrelated estimated expected values). Individual confidence intervals can overlap even more when the means are correlated.

By the nature of your model (with a random effect), there is the potential that the means are highly correlated. You didn't show all the output (such as the random-effect variance), but I am guessing it is large (relatively speaking), which means there is high correlation between the means.

For instance, if you square the SEs for each individual mean and for the difference, you have variances: V(1) is the square of the SE for mean 1, and so on. The standard formula for the variance of a difference, V(D), is:

V(D) = V(1) + V(2) - 2*COV(1,2)

where COV(1,2) is the covariance of means 1 and 2. Using your results, you have:

.01823^2 = .03846^2 + .03685^2 - 2*COV(1,2)
.0003323 = .0014792 + .0013579 - 2*COV(1,2)

The covariance is not shown in your output, but a little algebra shows that it would have to be 0.001252. You can use this to estimate the correlation of the two means:

0.001252/(0.03846*0.03685) = 0.88

Thus, there is a high correlation in your case.

Bottom line: assuming you did other things correctly (I can't tell from what you have given us), a consequence of the model fitted to your data is that the least squares means are highly correlated. Because of this, the standard error of a difference (SED) will be considerably smaller than the SED you would have with uncorrelated means. If the means were uncorrelated, COV(1,2) = 0, and the SED would be:

sqrt(.0014792 + .0013579) = 0.053

Your SED of 0.01823 is a lot smaller. I would definitely use the results of the differences of the LSMEANS.
This is the test result that properly accounts for the correlation of the means (and the model fitted to the data). In fact, this illustrates (again, assuming you did other things correctly) one of the great advantages of using mixed models: you gain a great deal of precision in the comparison of LSMEANS.
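If it helps, here is a small Python sketch of the algebra above: it back-solves the variance-of-a-difference formula for the covariance, converts that to a correlation, and computes the SED you would get if the means were uncorrelated. The three SE values are the ones from your output; everything else is just arithmetic.

```python
import math

# SEs from the posted output: mean 1, mean 2, and their difference
se1, se2, se_diff = 0.03846, 0.03685, 0.01823

# Squaring an SE gives a variance: V(1), V(2), V(D)
v1, v2, v_d = se1**2, se2**2, se_diff**2

# V(D) = V(1) + V(2) - 2*COV(1,2)  =>  solve for COV(1,2)
cov12 = (v1 + v2 - v_d) / 2

# Correlation of the two means: COV(1,2) / (SE1 * SE2)
corr = cov12 / (se1 * se2)

# SED if the means were uncorrelated, i.e. COV(1,2) = 0
sed_uncorrelated = math.sqrt(v1 + v2)

print(round(cov12, 6))             # 0.001252
print(round(corr, 2))              # 0.88
print(round(sed_uncorrelated, 3))  # 0.053
```

The last line makes the comparison concrete: an SED of 0.053 under independence versus your actual SED of 0.01823.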