Greetings from Phoenix AZ!
I am trying to solve a problem, simplified as follows:
There are N groups of 3-variable tuples {A, B, C}, where each of A, B, and C is a set of positive numbers.
Now I keep B constant while introducing scalars k1 and k2 (initialized to 1) on A and C, respectively, where k1 and k2 are to be solved for. So the scaled groups are {k1*A, B, k2*C}.
For each tuple, if k1*a(i) is the max of {k1*a(i), b(i), k2*c(i)}, then A gets a point; if b(i) is the max, B gets a point; and so on. So among all N groups, the total points (occurrences of peaks) are m1, m2, and m3 respectively, where m1+m2+m3 = N. (Ties are not allowed.)
The objective is to make the relative shares of m1, m2, and m3 conform to those of an empirical distribution such as (0.21, 0.47, 0.32). That is:
MIN(ABS(m1/N-0.21) + ABS(m2/N-0.47) + ABS(m3/N-0.32)).
Because the shares add up to 1, this is the same as:
MIN(ABS(m1/N-0.21) + ABS(m3/N-0.32)).
Since the derivative is not continuous, I introduce another objective so that the scalars deviate as little as possible from 1:
MIN(ABS(k1-1) + ABS(k2-1))
Also, I'd like to avoid continuous parameters so that ties are not possible. So k1 and k2 only take values in steps of, say, 0.0001; for example, k1 can be 1.0432 or 1.0431 but nothing in between. Literally, k1 and k2 are rounded to the fourth decimal place.
Below are examples of A, B, and C:
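To make the setup concrete, here is a small Python sketch (outside SAS) of how the point counts and the two objective terms could be computed for candidate scalars; the function name is hypothetical and only illustrates the problem statement above:

```python
def evaluate(rows, k1, k2, target=(0.21, 0.47, 0.32)):
    """Count which scaled column peaks in each row, then compute
    the share-matching term and the deviation-from-1 term."""
    n = len(rows)
    counts = [0, 0, 0]
    for a, b, c in rows:
        scaled = (k1 * a, b, k2 * c)          # {k1*A, B, k2*C}
        counts[scaled.index(max(scaled))] += 1
    fit = sum(abs(m / n - t) for m, t in zip(counts, target))
    reg = abs(k1 - 1) + abs(k2 - 1)           # secondary objective
    return counts, fit, reg

# two rows taken from the sample data below
rows = [(6676.99, 6880.82, 6786.72), (6537.83, 6744.54, 6954.86)]
counts, fit, reg = evaluate(rows, 1.0, 1.0)   # counts == [0, 1, 1]
```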
6676.9893595 6880.8221959 6786.7184412
6619.5370816 6822.7079452 6880.3302732
6607.8205979 6818.1900264 6747.7925799
6657.9905104 6858.2156051 6811.5079991
6718.8738087 6928.6034209 6755.4089264
6665.8284822 6878.2833438 6825.4992714
6620.7753966 6825.1398973 6821.0155341
6595.8532413 6794.697434 6684.7464211
6673.5539913 6880.3418271 6655.1373233
6703.0755563 6917.9515607 6729.234181
6772.622115 6979.2082157 6803.848444
6597.3543516 6804.815741 6810.938388
6537.8327024 6744.537828 6954.8562933
6743.6768387 6952.3375258 6828.0365376
6793.4671526 7002.9258906 6815.6942137
6602.4239045 6805.1313804 6742.5119131
6781.9580288 6992.3290857 6766.3824148
6692.3379323 6902.0332064 6658.3110901
6672.4406023 6872.4619659 6783.6966264
6775.5075961 6986.5189271 6848.9584797
6704.3222171 6912.3672245 6804.1217701
6714.6696212 6930.2340032 6827.5758676
6669.4335388 6873.3908134 6796.1910091
6732.5518936 6933.8927661 6753.1406367
6813.1790422 7024.5751751 6833.7151298
6719.9124575 6937.0097816 6842.5195035
6729.7275191 6939.5298878 6860.9728186
6661.1079454 6872.8882031 6793.984005
6657.4088276 6864.7300508 6665.6516863
6586.5374763 6798.4497988 6704.5729581
6605.7390947 6810.1337583 6595.5949127
6819.9070313 7035.3015391 6781.1365528
6571.7509448 6778.3162295 6749.4041907
6643.4549706 6848.0327328 6756.6977722
6493.0914606 6696.2620679 6743.8750324
6493.9408536 6699.670217 6771.5918001
6670.5817361 6875.7044028 6896.7256026
6706.3004676 6914.347585 6834.7197996
6778.484597 6986.1370442 6751.5713144
6795.3436901 7005.044883 6837.9491435
6655.7296854 6861.3910318 6878.9809629
6756.1863279 6968.318642 6753.345038
The empirical shares are (0.21, 0.47, 0.32), respectively.
I am having a hard time using OPTMODEL to solve it, so I resorted to brute force, starting with large steps such as 0.01 on the scalars, then narrowing down to smaller steps, but it simply takes too long.
Any suggestion would be highly appreciated!
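The coarse-to-fine brute force described above can be sketched in Python roughly as follows (an illustration of the idea, not the actual code; the grid radius and refinement factor are arbitrary choices):

```python
def objective(rows, k1, k2, target=(0.21, 0.47, 0.32)):
    # share-matching error for candidate scalars k1, k2
    counts = [0, 0, 0]
    for a, b, c in rows:
        scaled = (k1 * a, b, k2 * c)
        counts[scaled.index(max(scaled))] += 1
    return sum(abs(m / len(rows) - t) for m, t in zip(counts, target))

def refine(rows, k0=(1.0, 1.0), step=0.01, levels=3):
    """Search an 11x11 grid around the current best, then shrink the
    step by 10x and repeat -- 'narrowing down to smaller steps'."""
    best = k0
    for _ in range(levels):
        grid = [(best[0] + i * step, best[1] + j * step)
                for i in range(-5, 6) for j in range(-5, 6)]
        best = min(grid, key=lambda k: objective(rows, *k))
        step /= 10
    return best
```

Each level scans 121 candidate pairs, so with 6 columns the grid grows exponentially, which is why this approach becomes too slow.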
Does the following do what you want?
data indata;
   input v1 v2 v3;
   datalines;
6676.9893595 6880.8221959 6786.7184412
6619.5370816 6822.7079452 6880.3302732
6607.8205979 6818.1900264 6747.7925799
6657.9905104 6858.2156051 6811.5079991
6718.8738087 6928.6034209 6755.4089264
6665.8284822 6878.2833438 6825.4992714
6620.7753966 6825.1398973 6821.0155341
6595.8532413 6794.697434 6684.7464211
6673.5539913 6880.3418271 6655.1373233
6703.0755563 6917.9515607 6729.234181
6772.622115 6979.2082157 6803.848444
6597.3543516 6804.815741 6810.938388
6537.8327024 6744.537828 6954.8562933
6743.6768387 6952.3375258 6828.0365376
6793.4671526 7002.9258906 6815.6942137
6602.4239045 6805.1313804 6742.5119131
6781.9580288 6992.3290857 6766.3824148
6692.3379323 6902.0332064 6658.3110901
6672.4406023 6872.4619659 6783.6966264
6775.5075961 6986.5189271 6848.9584797
6704.3222171 6912.3672245 6804.1217701
6714.6696212 6930.2340032 6827.5758676
6669.4335388 6873.3908134 6796.1910091
6732.5518936 6933.8927661 6753.1406367
6813.1790422 7024.5751751 6833.7151298
6719.9124575 6937.0097816 6842.5195035
6729.7275191 6939.5298878 6860.9728186
6661.1079454 6872.8882031 6793.984005
6657.4088276 6864.7300508 6665.6516863
6586.5374763 6798.4497988 6704.5729581
6605.7390947 6810.1337583 6595.5949127
6819.9070313 7035.3015391 6781.1365528
6571.7509448 6778.3162295 6749.4041907
6643.4549706 6848.0327328 6756.6977722
6493.0914606 6696.2620679 6743.8750324
6493.9408536 6699.670217 6771.5918001
6670.5817361 6875.7044028 6896.7256026
6706.3004676 6914.347585 6834.7197996
6778.484597 6986.1370442 6751.5713144
6795.3436901 7005.044883 6837.9491435
6655.7296854 6861.3910318 6878.9809629
6756.1863279 6968.318642 6753.345038
;
proc optmodel;
   set COLS = 1..3;
   num empirical {COLS} = [0.21, 0.47, 0.32];
   num epsilon = 1e-4;
   set OBS;
   num v {OBS, COLS};
   read data indata into OBS=[_N_] {j in COLS} <v[_N_,j]=col('v'||j)>;
   var K {1..2} >= 0 <= 10;
   var IsLargest {OBS, COLS} binary;
   con OneLargest {i in OBS}:
      sum {j in COLS} IsLargest[i,j] = 1;
   /* if IsLargest[i,1] = 1 then v[i,1] * K[1] >= v[i,2] + epsilon */
   con ALargerB {i in OBS}:
      v[i,2] + epsilon - v[i,1] * K[1]
      <= (v[i,2] + epsilon - v[i,1] * K[1].lb) * (1 - IsLargest[i,1]);
   /* if IsLargest[i,1] = 1 then v[i,1] * K[1] >= v[i,3] * K[2] + epsilon */
   con ALargerC {i in OBS}:
      v[i,3] * K[2] + epsilon - v[i,1] * K[1]
      <= (v[i,3] * K[2].ub + epsilon - v[i,1] * K[1].lb) * (1 - IsLargest[i,1]);
   /* if IsLargest[i,2] = 1 then v[i,2] >= v[i,1] * K[1] + epsilon */
   con BLargerA {i in OBS}:
      v[i,1] * K[1] + epsilon - v[i,2]
      <= (v[i,1] * K[1].ub + epsilon - v[i,2]) * (1 - IsLargest[i,2]);
   /* if IsLargest[i,2] = 1 then v[i,2] >= v[i,3] * K[2] + epsilon */
   con BLargerC {i in OBS}:
      v[i,3] * K[2] + epsilon - v[i,2]
      <= (v[i,3] * K[2].ub + epsilon - v[i,2]) * (1 - IsLargest[i,2]);
   /* if IsLargest[i,3] = 1 then v[i,3] * K[2] >= v[i,1] * K[1] + epsilon */
   con CLargerA {i in OBS}:
      v[i,1] * K[1] + epsilon - v[i,3] * K[2]
      <= (v[i,1] * K[1].ub + epsilon - v[i,3] * K[2].lb) * (1 - IsLargest[i,3]);
   /* if IsLargest[i,3] = 1 then v[i,3] * K[2] >= v[i,2] + epsilon */
   con CLargerB {i in OBS}:
      v[i,2] + epsilon - v[i,3] * K[2]
      <= (v[i,2] + epsilon - v[i,3] * K[2].lb) * (1 - IsLargest[i,3]);
   var ErrorPlus {COLS} >= 0;
   var ErrorMinus {COLS} >= 0;
   con Error {j in COLS}:
      sum {i in OBS} IsLargest[i,j] / card(OBS) - ErrorPlus[j] + ErrorMinus[j] = empirical[j];
   min AbsDeviation = sum {j in COLS} (ErrorPlus[j] + ErrorMinus[j]);
   /* call MILP solver */
   solve;
   print K;
   num vnew {i in OBS, j in COLS} = v[i,j] * (if j = 1 then K[1].sol else if j = 3 then K[2].sol else 1);
   print vnew IsLargest;
   num share {j in COLS} = sum {i in OBS} IsLargest[i,j].sol / card(OBS);
   print empirical share ErrorPlus ErrorMinus;
quit;
Thank you so much for such a prompt response! Let me give it a try...
Also, I see you listed all 6 constraints representing the permutations of A, B, and C. In the actual problem I have 6 variables, A through F. Rather than spelling out each one, totaling 30, can they be parameterized into a single constraint or a COFOR loop maybe?
Bo
Here's a more compact form that generalizes to more columns. Note that I fixed K[2] = 1 for B.
proc optmodel;
   set COLS = 1..3;
   num empirical {COLS} = [0.21, 0.47, 0.32];
   num epsilon = 1e-4;
   set OBS;
   num v {OBS, COLS};
   read data indata into OBS=[_N_] {j in COLS} <v[_N_,j]=col('v'||j)>;
   var K {COLS} >= 0 <= 10;
   fix K[2] = 1;
   impvar Vnew {i in OBS, j in COLS} = v[i,j] * K[j];
   var IsLargest {OBS, COLS} binary;
   con OneLargest {i in OBS}:
      sum {j in COLS} IsLargest[i,j] = 1;
   /* if IsLargest[i,j] = 1 then Vnew[i,j] >= Vnew[i,j2] + epsilon for j2 ne j */
   con Larger {i in OBS, j in COLS, j2 in COLS diff {j}}:
      Vnew[i,j2] + epsilon - Vnew[i,j]
      <= (v[i,j2] * K[j2].ub + epsilon - v[i,j] * K[j].lb) * (1 - IsLargest[i,j]);
   var ErrorPlus {COLS} >= 0;
   var ErrorMinus {COLS} >= 0;
   impvar Share {j in COLS} = sum {i in OBS} IsLargest[i,j] / card(OBS);
   con Error {j in COLS}:
      Share[j] - ErrorPlus[j] + ErrorMinus[j] = empirical[j];
   min AbsDeviation = sum {j in COLS} (ErrorPlus[j] + ErrorMinus[j]);
   /* call MILP solver */
   solve;
   print Vnew IsLargest;
   print K empirical Share ErrorPlus ErrorMinus;
quit;
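As a side note, the big-M logic in the Larger constraint can be sanity-checked numerically outside SAS. In this minimal Python sketch (hypothetical values from one observation), when the indicator is 1 the right-hand side collapses to 0 and forces strict dominance by epsilon; when it is 0, the coefficient v[i,j2]*K.ub + epsilon - v[i,j]*K.lb is an upper bound on the left-hand side (since v is positive), so the constraint is slack:

```python
eps = 1e-4
K_lb, K_ub = 0.0, 10.0                 # bounds on the scalars
v_j, v_j2 = 6676.99, 6880.82           # raw values in one observation

# big-M coefficient: largest possible value of the constraint's LHS
big_m = v_j2 * K_ub + eps - v_j * K_lb

def feasible(k_j, k_j2, is_largest):
    lhs = v_j2 * k_j2 + eps - v_j * k_j
    return lhs <= big_m * (1 - is_largest)

assert feasible(1.1, 1.0, 1)        # scaled column j strictly dominates
assert not feasible(1.0, 1.0, 1)    # column j is not the largest here
assert feasible(1.0, 1.0, 0)        # indicator off: constraint is slack
```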
Thanks again, Rob! It's working like a charm for small datasets so far! I am now tackling the performance issue, as the actual problem has 6 columns and 7,500 observations. Last night I fleshed out all 30 constraints with your original code, and the execution took more than 3 hours and ended with an "out of memory" error. I am truncating it down to 1,500 observations and shrinking the parameters to between (0.8, 1.2) with initial values of 1. Hopefully this will speed things up a bit. Any suggestions would be appreciated!
Glad to help. Are you able to share your larger data set?
Rob,
Could you give me a hint as to why the following code doesn't work? I am repurposing your code to solve the opposite problem -- conforming to the empirical shares of smallest occurrences with 6 scalars -- but OPTMODEL exits reporting that an optimum has been reached at the boundaries.
proc optmodel;
   set COLS = 1..&NCOLS;
   num empirical {COLS} = [&S1 &S2 &S3 &S4 &S5 &S6];
   num epsilon = 1e-4;
   set OBS;
   num v {OBS, COLS};
   read data indata into OBS=[_N_] {j in COLS} <v[_N_,j]=col('v'||j)>;
   var K {COLS} >= 0.8 <= 1.2 init 1;
   fix K[&FIXCOL] = 1;
   impvar Vnew {i in OBS, j in COLS} = v[i,j] * K[j];
   var IsSmallest {OBS, COLS} binary;
   con OneSmallest {i in OBS}:
      sum {j in COLS} IsSmallest[i,j] = 1;
   /* if IsSmallest[i,j] = 1 then Vnew[i,j] <= Vnew[i,j2] - epsilon for j2 ne j */
   con Smaller {i in OBS, j in COLS, j2 in COLS diff {j}}:
      Vnew[i,j2] - epsilon - Vnew[i,j]
      >= (v[i,j2] * K[j2].lb - epsilon - v[i,j] * K[j].ub) * (1 - IsSmallest[i,j]);
   var ErrorPlus {COLS} >= 0;
   var ErrorMinus {COLS} >= 0;
   impvar Share {j in COLS} = sum {i in OBS} IsSmallest[i,j] / card(OBS);
   con Error {j in COLS}:
      Share[j] - ErrorPlus[j] + ErrorMinus[j] = empirical[j];
   min AbsDeviation = sum {j in COLS} (ErrorPlus[j] + ErrorMinus[j]);
   /* call MILP solver */
   solve;
   print K empirical Share ErrorPlus ErrorMinus;
   create data OPTSCL from [V] = COLS K;
quit;
Rob:
Never mind -- the repurposed code does work. I got the input wrong.
Also I posted the actual data per your request.
Thanks!
Sure ... see the attachment PKSIMOUT0_H.
To get INDATA:
data indata;
   set IN.PKSIMOUT0_H;
   rename M06H0=V1 M06H1=V2 M07H0=V3 M07H1=V4 M08H0=V5 M08H1=V6;
run;
and the empirical shares are:
num empirical {COLS} = [0.1090909091 0.2 0.1090909091 0.2363636364 0.0727272727 0.2727272727];
I am keeping K[4] fixed: Fix K[4] = 1;
and the bounds and initial values are:
var K {1..6} >= 0.8 <= 1.2 init 1;
It is not surprising that the larger instance takes a long time. The "big-M" constraints are known to cause difficulties. Here's an approach that exploits the fact that we can quickly solve instances with few enough observations. It uses a COFOR loop to solve (in parallel) independent "bins" of observations and then averages the resulting K values to use as an integer-feasible solution to warm start the solve for the original problem.
data empiricalData;
input empirical @@;
datalines;
0.1090909091 0.2 0.1090909091 0.2363636364 0.0727272727 0.2727272727
;
%let KlowerBound = 0.8;
%let KupperBound = 1.2;
%let Kinit = 1;
%let fixcol = 4;
%let binSize = 100;
proc optmodel printlevel=0;
   set COLS;
   num empirical {COLS};
   read data empiricalData into COLS=[_N_] empirical;
   num epsilon = 1e-4;
   set OBS_ALL;
   num v {OBS_ALL, COLS};
   read data indata into OBS_ALL=[_N_] {j in COLS} <v[_N_,j]=col('v'||j)>;
   set OBS;
   var K {COLS} >= &KlowerBound <= &KupperBound init &Kinit;
   con Fix: K[&fixCol] = 1;
   impvar Vnew {i in OBS, j in COLS} = v[i,j] * K[j];
   var IsLargest {OBS, COLS} binary;
   con OneLargest {i in OBS}:
      sum {j in COLS} IsLargest[i,j] = 1;
   /* if IsLargest[i,j] = 1 then Vnew[i,j] >= Vnew[i,j2] + epsilon for j2 ne j */
   con Larger {i in OBS, j in COLS, j2 in COLS diff {j}}:
      Vnew[i,j2] + epsilon - Vnew[i,j]
      <= (v[i,j2] * K[j2].ub + epsilon - v[i,j] * K[j].lb) * (1 - IsLargest[i,j]);
   var ErrorPlus {COLS} >= 0;
   var ErrorMinus {COLS} >= 0;
   impvar Share {j in COLS} = sum {i in OBS} IsLargest[i,j] / card(OBS);
   con Error {j in COLS}:
      Share[j] - ErrorPlus[j] + ErrorMinus[j] = empirical[j];
   min AbsDeviation = sum {j in COLS} (ErrorPlus[j] + ErrorMinus[j]);
   num binSize = &binSize;
   num numSolved init 0;
   set BINS = 1..ceil(card(OBS_ALL) / binSize);
   num Ksol {BINS, COLS};
   cofor {bin in BINS} do;
      put bin=;
      OBS = {i in OBS_ALL: ceil(i/binSize) = bin};
      solve;
      for {j in COLS} Ksol[bin,j] = K[j];
      numSolved = numSolved + 1;
      put 'Solved ' numSolved 'of ' (card(BINS)) 'bins.';
   end;
   create data Ksoldata from [bin]=BINS {j in COLS} <col('K'||j)=Ksol[bin,j]>;
   /* call MILP solver for original problem */
   OBS = OBS_ALL;
   for {j in COLS} fix K[j] = (sum {bin in BINS} Ksol[bin,j]) / card(BINS);
   solve;
   unfix K;
   solve with milp / primalin maxtime=60;
   print K empirical Share ErrorPlus ErrorMinus;
quit;
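The decomposition idea can also be sketched outside SAS. In this Python illustration, solve_bin is a hypothetical stand-in (a coarse grid search, shown for the 3-column example) for the per-bin MILP solve; warm_start mirrors the split-solve-average pattern of the COFOR loop above:

```python
from itertools import product

def solve_bin(rows, grid, target=(0.21, 0.47, 0.32)):
    # brute-force stand-in for the per-bin MILP solve
    best_err, best_k = float('inf'), (1.0, 1.0)
    for k1, k2 in product(grid, grid):
        counts = [0, 0, 0]
        for a, b, c in rows:
            scaled = (k1 * a, b, k2 * c)
            counts[scaled.index(max(scaled))] += 1
        err = sum(abs(m / len(rows) - t) for m, t in zip(counts, target))
        if err < best_err:
            best_err, best_k = err, (k1, k2)
    return best_k

def warm_start(all_rows, bin_size, grid):
    """Solve each bin independently, then average the per-bin scalars
    to get a starting point for the full problem."""
    bins = [all_rows[i:i + bin_size] for i in range(0, len(all_rows), bin_size)]
    sols = [solve_bin(b, grid) for b in bins]
    return tuple(sum(k[j] for k in sols) / len(sols) for j in range(2))
```

The averaged scalars are not guaranteed to be optimal for the full problem, which is why the OPTMODEL code above follows up with a PRIMALIN-warm-started MILP solve on all observations.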
Thanks again!