Hi,
I need to optimize a quadratic function, but the results need to be discrete: each of them can only take a value from e.g. 0, 2, 4, 6, ..., 100.
Is it possible to force PROC OPTMODEL to give such results?
I saw that some optimization problems can be solved with the IML module; maybe that is a better track?
Thanks for your help
IML does have a QP solver, but it does not allow discrete variables. The QP solver in OPTMODEL also does not allow discrete variables, but your problem can be linearized by introducing binary variables. And then you can use the MILP solver in OPTMODEL. If you can give more details, I would be glad to show you how to do this.
Rob Pratt
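As a sketch outside SAS (a hypothetical toy, not the actual MILP code): the linearization lets each weight take exactly one value from the 2% grid {0, 0.02, ..., 1.00}, with binary variables selecting which one. For two weights, brute-forcing that same choice shows what the MILP would return. The objective and its target values here are made up for illustration.

```python
# Hypothetical toy illustration (not the actual SAS/MILP code): each weight
# must take one value from the 2% grid; binaries pick which one. For two
# weights we can brute-force that same discrete choice.
from itertools import product

grid = [v / 100 for v in range(0, 101, 2)]  # 0, 0.02, ..., 1.00

def objective(w1, w2):
    # made-up convex quadratic standing in for the real variance objective,
    # with continuous optimum at (0.375, 0.625)
    return (w1 - 0.375) ** 2 + (w2 - 0.625) ** 2

feasible = ((w1, w2) for w1, w2 in product(grid, grid)
            if abs(w1 + w2 - 1.0) < 1e-9)  # weights must sum to 1
best = min(feasible, key=lambda w: objective(*w))
print(best)  # the feasible grid point nearest the continuous optimum
```

Because the toy objective is convex, the discrete optimum lands on the grid point closest to the continuous optimum, which is why the later heuristic works well for fine grids.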
I don't know what details are important. I've got some quadratic function to be minimized (the standard deviation of a set of computed numbers). Each of those numbers depends on a series of 8 weights; the optimal combination of those weights is the result of the optimization. The problem is that the results must be discrete, and each of them must be one of the numbers [0, 2, 4, 6, ..., 100]. Basically they are interpreted as percents, so their sum must be, in this case, 100. I know that these results will not be the optimal solution, but I need to find the solution closest to the optimal one that meets my conditions.
Attached is a sketch of the idea. The optimal solution turns out to be trivial because I don't know what your other constraints are.
If this linearization makes your problem too big, an alternative "compact" linearization sometimes performs better. I can show you that, too, but please try this idea first.
Thank you so much. I will go through the code and see if I understand it.
Hi, I tried to figure out something from your code, but my problem seems to be a little bit different. Here is a sample of my OPTMODEL proc; I need to force it to return discrete weights. Thank you again for looking into it.
/* data with historic values of parameters
taken to equations */
DATA Work.TABLE_VALUES;
INPUT i v1 v2 v3 v4 v5 v6 v7 v8 v9 v10;
DATALINES;
1 4.66 4.52 4.65 4.57 4.64 4.90 5.77 5.68 6.15 7.17
2 4.50 4.53 4.62 4.52 4.64 4.90 5.77 5.68 6.16 7.17
3 4.59 4.53 4.67 4.53 4.63 4.89 5.75 5.66 6.15 7.15
;
RUN;
DATA Work.TABLE_MARGINS;
INPUT i margin;
DATALINES;
1 0.63
2 0.67
3 0.44
;
RUN;
PROC PRINT data=Work.TABLE_VALUES;
TITLE "Input data values";
RUN;
PROC PRINT data=Work.TABLE_MARGINS;
TITLE "Margins to input values";
RUN;
%LET TABLE_NAME_VALUES = work.Table_Values;
%LET TABLE_NAME_MARGINS = Work.Table_Margins;
%LET OBS_COUNT = 3;
proc optmodel;
set WSET = 1..10;
var weights {WSET} >= 0 <= 1.0;
con sum_weights_100: sum {i in WSET} weights[i] = 1.0;
*expand sum_weights_100;
/*** values to array ***/
num NOBS = 3; *&OBS_COUNT;
set VSET = 1..NOBS;
num values { VSET, WSET};
READ DATA &TABLE_NAME_VALUES (OBS=&OBS_COUNT) into [i] values[i, 1]=v1 values[i, 2]=v2
values[i, 3]=v3 values[i, 4]=v4
values[i, 5]=v5 values[i, 6]=v6
values[i, 7]=v7 values[i, 8]=v8
values[i, 9]=v9 values[i, 10]=v10;
*print values;
/*** margins to array ***/
set MSET = 1..NOBS;
num margins{MSET};
READ DATA &TABLE_NAME_MARGINS (OBS=&OBS_COUNT) into [i] margins=margin;
*print margins;
/*** Minimize Standard Deviation of function
for each row
r(i) = (sum{j in WSET} values[i,j] * weights[j]) - margins[i];
***/
min min_std = sqrt(
divide(
sum{i in VSET}(
/* function r(i) */
(sum{j in WSET} (values[i,j] * weights[j]) - margins[i])
-
/* avg of all r(i) */
divide(
sum{k in VSET}(
sum{j in WSET} (values[k,j] * weights[j]) - margins[k]
),
NOBS
)
)**2,
NOBS
)
);
/*** ***/
solve;
print weights;
/*** ***/
quit;
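For reference, the objective above can be cross-checked outside SAS. This hypothetical Python snippet uses the posted sample data; the uniform candidate weight vector is my assumption, chosen only because it satisfies the sum-to-1 constraint:

```python
# Hypothetical cross-check (not part of the OPTMODEL program): compute
# r(i) = sum_j values[i][j]*weights[j] - margins[i] for a candidate weight
# vector, then the population standard deviation that min_std encodes.
from statistics import pstdev

values = [
    [4.66, 4.52, 4.65, 4.57, 4.64, 4.90, 5.77, 5.68, 6.15, 7.17],
    [4.50, 4.53, 4.62, 4.52, 4.64, 4.90, 5.77, 5.68, 6.16, 7.17],
    [4.59, 4.53, 4.67, 4.53, 4.63, 4.89, 5.75, 5.66, 6.15, 7.15],
]
margins = [0.63, 0.67, 0.44]
weights = [0.1] * 10                  # assumed uniform candidate; sums to 1

r = [sum(v * w for v, w in zip(row, weights)) - m
     for row, m in zip(values, margins)]
avg = sum(r) / len(r)
std = (sum((ri - avg) ** 2 for ri in r) / len(r)) ** 0.5
print(round(std, 4))                  # agrees with statistics.pstdev(r)
```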
Can you please provide the input data, too?
I updated my previous post with the data
Attached are two files. In both, I have rewritten the first READ DATA statement more simply as follows:
READ DATA &TABLE_NAME_VALUES(OBS=&OBS_COUNT) into [i] {j in WSET} <values[i,j]=col('v'||j)>;
I have also rewritten the objective function using implicit variables, as follows:
impvar r {i in VSET} = (sum {j in WSET} values[i,j] * weights[j]) - margins[i];
impvar avg = (sum {k in VSET} r[k]) / NOBS;
min min_std = sum {i in VSET} (r[i] - avg)**2;
The following numeric parameter then calculates the standard deviation from the optimal objective value:
num std = sqrt(min_std.sol / NOBS);
The first file (tom022012.sas) includes the usual linearization sketched earlier, but the resulting MILP is big and the solver runs out of memory on my machine. So I added the compact linearization as well, and it solves in a couple of minutes. For your input data, the optimal solution still turns out to be trivial, putting all weight on j = 2.
The second file (tom022012_heur.sas) contains a heuristic approach that is much faster and scalable and might suit your needs better. The idea is to first solve the QP without discrete variables, and then solve a small MILP to find the closest (in absolute value) grid point to that optimal solution. The resulting solution is not guaranteed to be optimal to the original problem, but because your objective function is convex and you have such a fine discretization (2% increments), it will be very close to optimal.
I hope this helps.
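The rounding step of that heuristic can be sketched in Python. Largest-remainder rounding is a hypothetical stand-in for the small MILP (the input vector is made up): it returns the grid point with minimal total absolute deviation from the continuous weights while keeping the 100% sum.

```python
# Sketch (not the tom022012_heur.sas code itself): round a continuous weight
# vector (in percent) to the 2% grid so the rounded weights still sum to 100.
def round_to_grid(weights_pct, step=2):
    # floor each weight to the grid, then hand the leftover steps
    # to the weights with the largest remainders
    floors = [int(w // step) * step for w in weights_pct]
    leftover = round((100 - sum(floors)) / step)
    order = sorted(range(len(weights_pct)),
                   key=lambda j: weights_pct[j] - floors[j], reverse=True)
    for j in order[:leftover]:
        floors[j] += step
    return floors

continuous = [33.3, 33.3, 33.4]          # made-up continuous optimum
print(round_to_grid(continuous))         # sums to 100, all on the 2% grid
```

Each weight moves by at most one grid step, so for a convex objective and a 2% grid the rounded solution stays close to the continuous optimum, as noted above.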
Thank you for the help. This is exactly what I needed.
One more question concerning listing solutions close to optimal: is it possible to get, as a result, a list of the solutions that the solver ran through but discarded because a better (and, in the end, optimal) solution was found?
Thank you
I'm glad this helped.
There is currently no feature to obtain multiple solutions from a MILP solver call, but we are considering such a feature for a future release.
Thank you for the help. All your answers were helpful in solving my problem.
Anyway, my problem got a little bit more complex. I need to modify the function that is optimized: now I need to maximize AVG/STDEV (instead of minimizing STDEV). I know that the quadratic solver is no longer suitable, as it's not a quadratic function anymore, so I will use some other solver instead. My question, however, refers to obtaining discrete results (see your previous post about the MILP solver). I assume that with the quadratic function, the MILP solution will be the one closest to the optimal solution (i.e., the optimal one among the discrete solutions).
I wonder whether the same method guarantees the same for my new function (max AVG/STDEV)? Isn't there a possibility that some other discrete solution is closer to the optimal solution, and I won't get it because it lies near some other local maximum rather than the global maximum (the difference between those maxima may be very small)? How should I force the MILP to search the whole function, or at least a major part of it?
Even in the quadratic case, the code I sent (tom022012_heur.sas) to solve the continuous problem and then find the closest integer solution is only a heuristic. The resulting solution is integer feasible, but not necessarily optimal, for the original problem. But as long as the optimal MILP objective value is small, the nonlinear objective value of the resulting integer solution will not differ much from the optimal continuous nonlinear objective value, so you should get very tight bounds on the optimal objective (a lower bound from the continuous solve, and an upper bound from the integer feasible solution).