Hi. Suppose I have a matrix A and a matrix of scalars, C, that I want to apply to A elementwise. I then want to find an inverse adjustment so that each row of A#C still sums to 1.

proc iml;
A = { 0.20 0.80 0.00 0.00,
0.00 0.20 0.80 0.00,
0.00 0.00 0.20 0.80,
0.50 0.00 0.30 0.20}; /*Initial Matrix*/
B = A[ ,+]; /*Row sums of A (all equal to 1)*/
C = { 1.0 1.2 1.0 1.0,
1.0 1.0 1.1 1.0,
1.0 1.0 1.25 1.40,
1.2 1 1 1}; /*Scalar Matrix*/
D = A#C;                   /*Apply the scalars elementwise*/
E = D[ ,+];                /*Row sums after scaling*/
F = (D#(C ^= 1))[,+];      /*Row sums of only the scaled entries*/
G = (1-F)/(E-F);           /*Adjustment factor for the unscaled entries*/
H = (G-1)#(C=1) + 1;       /*Apply G only where C = 1, leave the rest at 1*/
J = D#H;                   /*Adjusted matrix*/
J[loc(J=.)] = A[loc(J=.)]; /*Workaround: reset missing values to the originals*/
K = J[,+]; /*Verify each row sums to 1*/

All the code above works: it applies the scalars, then adjusts the values not touched by the scalars so that each row still sums to 1. The scaled values from A#C stay as they are, while the others decrease to compensate. The issue arises when every value in a row is affected by a scalar. In the third row, both 0.2 and 0.8 are multiplied by scalar values, so there are no untouched entries left for the adjustment to act on. I've created a temporary workaround of setting the resulting missing values back to the original values, which is what should happen, but I'd like a cleaner way if possible, as this method still produces divide-by-zero warnings. The division by zero comes from the E-F term in the calculation of G: when every entry in a row is scaled, E and F are equal. Is there a clean way around this? Thank you.
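For anyone tracing where the 0/0 comes from, here is a small NumPy sketch (my own translation of the IML steps above for illustration, not SAS code) comparing a row that works with the third row that breaks:

```python
import numpy as np

# Rows 1 and 3 of A and C from the question.
# Row 1: only one entry is scaled, so unscaled entries remain to adjust.
# Row 3: every nonzero entry is scaled, so nothing is left to adjust.
A = np.array([[0.20, 0.80, 0.00, 0.00],
              [0.00, 0.00, 0.20, 0.80]])
C = np.array([[1.0, 1.2, 1.00, 1.00],
              [1.0, 1.0, 1.25, 1.40]])

D = A * C                       # apply the scalars elementwise (IML's A#C)
E = D.sum(axis=1)               # row sums after scaling
F = (D * (C != 1)).sum(axis=1)  # row sums of only the scaled entries

# Row 1: E = 1.16, F = 0.96, so G = (1-F)/(E-F) = 0.04/0.20 = 0.2 is fine.
# Row 3: E = F = 1.37, so E - F = 0 and G = 0/0 -> the warning.
print(E - F)
```

In the degenerate row, the scaled entries are the entire row, so "sum of scaled entries" and "row sum" coincide exactly and the denominator vanishes.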