Thanks Steve. No, I'm absolutely sure the data did not change. For a start, nothing was done to perm.yogurt3 between the PROC GLIMMIX steps. Secondly, both steps reported identical counts.

Step 1:
  Number of Observations Read    53027
  Number of Observations Used    51441

Step 2:
  Number of Observations Read    53027
  Number of Observations Used    51441

We would generally consider the first set of covariance estimates a disappointment — every CHOL(i,j) element came back at the same value, with no standard errors:

  Cov Parm       Estimate   Standard Error
  CHOL(1,1)      0.9208     .
  CHOL(2,1)      0.9208     .
  CHOL(2,2)      0.9208     .
  CHOL(3,1)      0.9208     .
  CHOL(3,2)      0.9208     .
  CHOL(3,3)      0.9208     .
  CHOL(4,1)      0.9208     .
  CHOL(4,2)      0.9208     .
  CHOL(4,3)      0.9208     .
  CHOL(4,4)      0.9208     .
  CHOL(5,1)      0.9208     .
  CHOL(5,2)      0.9208     .
  CHOL(5,3)      0.9208     .
  CHOL(5,4)      0.9208     .
  CHOL(5,5)      0.9208     .
  CHOL(6,1)      0.9208     .
  CHOL(6,2)      0.9208     .
  CHOL(6,3)      0.9208     .
  CHOL(6,4)      0.9208     .
  CHOL(6,5)      0.9208     .
  CHOL(6,6)      0.9208     .
  CHOL(7,1)      0.9208     .
  CHOL(7,2)      0.9208     .
  CHOL(7,3)      0.9208     .
  CHOL(7,4)      0.9208     .
  CHOL(7,5)      0.9208     .
  CHOL(7,6)      0.9208     .
  CHOL(7,7)      0.9208     .
  CHOL(8,1)      0.9208     .
  CHOL(8,2)      0.9208     .
  CHOL(8,3)      0.9208     .
  CHOL(8,4)      0.9208     .
  CHOL(8,5)      0.9208     .
  CHOL(8,6)      0.9208     .
  CHOL(8,7)      0.9208     .
  CHOL(8,8)      0.9208     .
  HouseHoldID    0.1690     0.01440
  PermMissAlts   0.9367     0.6027
  Scale          4.2298     0.06749

The second step's estimates, by contrast, are a lot more interesting (and credible!), and their transformations into the G matrix and thence into the GCORR matrix yield quite useful and actionable inferences for brand managers:

  Cov Parm       Estimate   Standard Error
  CHOL(1,1)      -0.00559   .
  CHOL(2,1)      -0.00493   .
  CHOL(2,2)      -0.00481   .
  CHOL(3,1)      -0.00568   .
  CHOL(3,2)      -0.00525   .
  CHOL(3,3)      -0.00950   .
  CHOL(4,1)      -0.00170   .
  CHOL(4,2)      -0.00172   .
  CHOL(4,3)      -0.00136   .
  CHOL(4,4)      -0.00307   .
  CHOL(5,1)      -0.00523   .
  CHOL(5,2)      -0.00481   .
  CHOL(5,3)      -0.00552   .
  CHOL(5,4)      -0.00160   .
  CHOL(5,5)      -0.00525   .
  CHOL(6,1)      -0.00639   .
  CHOL(6,2)      -0.00528   .
  CHOL(6,3)      -0.00478   .
  CHOL(6,4)      -0.00097   .
  CHOL(6,5)      -0.00590   .
  CHOL(6,6)      -0.01573   .
  CHOL(7,1)      0.007528   .
  CHOL(7,2)      0.006518   .
  CHOL(7,3)      0.006129   .
  CHOL(7,4)      -0.00018   .
  CHOL(7,5)      0.007475   .
  CHOL(7,6)      0.009907   .
  CHOL(7,7)      -0.02040   .
  CHOL(8,1)      0.001548   .
  CHOL(8,2)      0.001323   .
  CHOL(8,3)      0.002397   .
  CHOL(8,4)      -0.00020   .
  CHOL(8,5)      0.001542   .
  CHOL(8,6)      0.002046   .
  CHOL(8,7)      -0.00379   .
  CHOL(8,8)      -0.00168   .
  HouseHoldID    9.676E-9   0.006443
  PermMissAlts   0.02020    0.005175
  Scale          1.6093     .

My thinking is that the first step stalled at a stationary point that was NOT the global optimum, despite both steps specifying the same optimization technique (double dogleg). I noted that the double dogleg algorithm uses the gradient to update an approximate Hessian matrix. I wonder whether one or more of the (apparent) RANDOM statement output options somehow influenced the gradient calculation (perhaps just its accuracy), thereby influencing both the approximate Hessian update and the end point reached. Thoughts?
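For anyone following along, the CHOL-to-G-to-GCORR transformation mentioned above can be sketched numerically. With TYPE=CHOL, the CHOL(i,j) estimates form a lower-triangular factor L and the G-side covariance matrix is recovered as G = L*L', from which GCORR is just the usual covariance-to-correlation rescaling. The 3x3 values below are made up for illustration only — they are not the estimates from the output above:

```python
import numpy as np

# Hypothetical lower-triangular Cholesky factor L (illustrative values,
# standing in for the CHOL(i,j) estimates in the GLIMMIX output)
L = np.array([
    [ 1.00, 0.00, 0.00],
    [ 0.50, 0.80, 0.00],
    [-0.20, 0.30, 0.90],
])

G = L @ L.T                   # the G matrix: G = L * L'
sd = np.sqrt(np.diag(G))      # random-effect standard deviations
GCORR = G / np.outer(sd, sd)  # the GCORR matrix

print(np.round(GCORR, 3))
```

One virtue of this parameterization is visible here: any real-valued L yields a positive semidefinite G, so the optimizer never has to enforce a positive-definiteness constraint directly.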
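The stalling scenario I have in mind is easy to reproduce in miniature. Using scipy's BFGS as a stand-in for GLIMMIX's double dogleg (both are gradient-driven quasi-Newton methods), a run that lands on a non-optimal stationary point reports convergence because the gradient there is zero, while a run started slightly elsewhere reaches the true minimum:

```python
import numpy as np
from scipy.optimize import minimize

# f(x, y) = (x^2 - 1)^2 + y^2 has three stationary points: the two
# global minima at x = +/-1 (f = 0) and a non-optimal stationary
# point at the origin (f = 1). The gradient vanishes at all three.
f = lambda v: (v[0] ** 2 - 1.0) ** 2 + v[1] ** 2
grad = lambda v: np.array([4.0 * v[0] * (v[0] ** 2 - 1.0), 2.0 * v[1]])

# Started exactly at the stationary point, a gradient-driven method
# sees a zero gradient and declares success without moving.
res_stuck = minimize(f, x0=np.zeros(2), jac=grad, method="BFGS")

# Started slightly off it, the same method finds a global minimum.
res_ok = minimize(f, x0=np.array([0.1, 0.0]), jac=grad, method="BFGS")

print(res_stuck.fun, res_ok.fun)  # ~1.0 (stuck) vs ~0.0 (optimum)
```

This is only an analogy for the GLIMMIX behaviour, but it shows why "converged" output can still be far from the global optimum, and why anything that perturbs the gradient (even its numerical accuracy) can change which stationary point a run ends at.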