Pyrite | Level 9

I need to set a constraint like this: x in [1,2] or x = 0

I tried with

constraint c1: 1 <= x <=2 or x = 0

but it doesn't work. What is the right syntax?






proc optmodel;
   var X >= 0 <= 2;
   var Y binary;

   /* if Y = 0 then X <= 0, else X <= X.ub (redundant) */
   con C1: X <= X.ub * Y;

   /* if Y = 1 then X >= 1, else X >= 0 (redundant) */
   con C2: X >= Y;
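As a sanity check on the logic (a standalone Python sketch, not OPTMODEL code), the pair of constraints with the binary Y admits exactly the two pieces of the disjunction: X = 0 (with Y = 0) or 1 <= X <= 2 (with Y = 1), and excludes the gap 0 < X < 1:

```python
# Feasibility check for the disjunction "X = 0 OR 1 <= X <= 2",
# modeled with a binary Y via:  X <= ub*Y  and  X >= Y  (here ub = 2).
def feasible(x, y, ub=2.0):
    return 0.0 <= x <= ub and y in (0, 1) and x <= ub * y and x >= y

assert feasible(0.0, 0)      # X = 0 allowed with Y = 0
assert feasible(1.5, 1)      # 1 <= X <= 2 allowed with Y = 1
assert not feasible(0.5, 0)  # gap value rejected: X <= ub*0 fails
assert not feasible(0.5, 1)  # gap value rejected: X >= Y fails
```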



Pyrite | Level 9

Thanks! Unfortunately my problem is nonlinear, so the solver is nlp.

And using binary variables, it says:

ERROR: The NLP solver does not allow integer variables.

What does the rest of your model look like?

Pyrite | Level 9

Here is the whole model:


proc fcmp outlib=work.funcs.f;
    function f1(Q, H);
        return (

    function f2(Q, H);
        return (
    function g1(Q1,Q2); return ( 0.003134  * ((Q1+Q2)**2) + 0.003089 * (Q1**2) ); endsub;
    function g2(Q1,Q2); return ( 0.003134  * ((Q1+Q2)**2) + 0.003089 * (Q2**2) ); endsub;

%let Q1max = 30.4;
%let Q2max = 29.5;
%let P1min = 2.5;
%let P2min = 3.3;
%let P1max = 21.7;
%let P2max = 21.6;
%let PSETL = 1;
%let PSETR = 43;
%let HSETL = 70;
%let HSETR = 94;
%let STEP  = 0.5;

options cmplib=work.funcs;

proc optmodel nthreads=64;
   number P;
   number H;

   var Q1 >= 0 <= &Q1max;
   var Q2 >= 0 <= &Q2max;
   impvar HL1 = H - g1(Q1,Q2);
   impvar HL2 = H - g2(Q1,Q2);

   impvar P1 = Q1 * HL1 * f1(Q1, HL1) * 9.81 / 1000;
   impvar P2 = Q2 * HL2 * f2(Q2, HL2) * 9.81 / 1000;

   minimize Q = Q1 + Q2;

   constraint P = P1 + P2;

   constraint 0 <= P1 <= &P1max; /* (*) */
   constraint 0 <= P2 <= &P2max; /* (*) */

   set PSET = &PSETL .. &PSETR by &STEP;
   set HSET = &HSETL .. &HSETR by &STEP;

   num Qopt {PSET, HSET};
   num Q1opt {PSET, HSET};
   num Q2opt {PSET, HSET};
   num P1opt {PSET, HSET};
   num P2opt {PSET, HSET};
   str solstatus {PSET, HSET};
   num opterror {PSET, HSET};
   do P = PSET;
      do H = HSET;
         solve with nlp / maxiter=10000 multistart=(maxstarts=10) seed=54321;
         put P= H= HL1= HL2= P1= P2= Q1= Q2= _solution_status_=;
         Qopt[P,H] = Q.sol;
         Q1opt[P,H] = Q1.sol;
         Q2opt[P,H] = Q2.sol;
         P1opt[P,H] = P1;
         P2opt[P,H] = P2;
         solstatus[P,H] = _solution_status_;
      end;
   end;
   create data Qdata from [P H] P1opt P2opt Q1opt Q2opt Qopt solstatus opterror;
quit;


Look at the rows marked with (*)

constraint 0 <= P1 <= &P1max; /* (*) */
constraint 0 <= P2 <= &P2max; /* (*) */

As you can see, P1 and P2 have 0 as a lower bound, and that worked very well. Now I'm asked to restrict the P1 and P2 intervals from [0, PMAX] to [PMIN, PMAX], but zero must remain allowed, so the constraints should become

(P1 = 0) OR (&P1min <= P1 <= &P1max)
(P2 = 0) OR (&P2min <= P2 <= &P2max)

Hence this question. I hope I made myself clear now!


Thanks again,



I have suggestions for two alternative approaches.


One approach is to use the LSO solver:

   var Y1 binary;
   con P1 <= &P1max * Y1;
   con P1 >= Y1;
   var Y2 binary;
   con P2 <= &P2max * Y2;
   con P2 >= Y2;

/* and then replace the SOLVE statement as follows */
  solve with lso;

The second approach, based on the fact that you have only two disjunctions, is to solve four problems and keep the best solution:

  1. Fix P1 = 0 and P2 = 0.
  2. Fix P1 = 0 and impose range constraint &P2min <= P2 <= &P2max.
  3. Fix P2 = 0 and impose range constraint &P1min <= P1 <= &P1max.
  4. Impose two range constraints &P1min <= P1 <= &P1max and  &P2min <= P2 <= &P2max.
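For N units this enumeration grows to 2^N subproblems, one per fix/range pattern. A minimal Python sketch of the bookkeeping (the solver call itself is omitted; only the pattern generation is shown):

```python
from itertools import product

def subproblem_patterns(n_units):
    """Yield every fix/range pattern for n_units:
    True = that unit's P is fixed at 0, False = it gets its range constraint."""
    yield from product([True, False], repeat=n_units)

# For 2 units this reproduces the four cases listed above:
# (True, True), (True, False), (False, True), (False, False).
cases = list(subproblem_patterns(2))
assert len(cases) == 4
# Solve one NLP per pattern and keep the best objective across all of them.
```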


By the way, the following additional approaches are valid syntax but unlikely to work well:

   constraint (P1 = 0 OR &P1min <= P1 <= &P1max) >= 1;
   constraint (P2 = 0 OR &P2min <= P2 <= &P2max) >= 1;
   constraint (P1 = 0) + (&P1min <= P1 <= &P1max) >= 1;
   constraint (P2 = 0) + (&P2min <= P2 <= &P2max) >= 1;
Pyrite | Level 9


Thanks, we are getting closer :D


About the first approach with lso. I tried to run the model without changing the constraints (so my original problem) to see what are the results comparing nlp with lso.

Let me explain the result: the decision variables are Q1 and Q2, solved for every P and H in a grid (as you can see in the code). From them I calculate P1 and P2, and since P1 + P2 = P, I compute the ratio P1/P as a single summary number: when the ratio is 100%, P1 = P and P2 = 0; when it is 0%, P1 = 0 and P2 = P; otherwise 0 < P1, P2 < P.

In the following graph I display this ratio as a color over the (P, H) grid: yellow (ratio = 100%), blue (ratio = 0%), brownish (0 < ratio < 100%).


Here's the solution with nlp:



Here's the solution with lso:


The solution with nlp makes a lot of sense (from the physical point of view of the thing I'm optimizing): when P is low, P1=P and P2=0; when P is medium, P1=0 and P2=P; when P is high, a combination of P1 and P2 is used. And you can see from the picture a sense of "continuity" in the grid. I know it may sound obscure but trust me this result is good!


Now the solution with lso seems a lot more noisy: the sense of "continuity" no longer seems to apply, and the P low/medium/high pattern is unclear. So before modifying the constraints to fit the new problem in this thread, I'd like to get an lso solution that is as stable and "good" as the nlp one. Do you know if lso can be configured to be more accurate? I read the doc and saw a lot of parameters, but I'm not sure which ones to adjust to improve the results.



As for the second approach: yes, I thought about this, and I believe it works fine. However, for now the model has two "units" (you can see the indexes "1" and "2" all over the code), and I'm sure I will later be asked to generalize it to N "units". With N "units" I would have to solve 2^N problems covering all combinations of P_i (zero or nonzero) and then compare them, which sounds hard...


As for the third approach, I haven't tried it yet, but I suspect it won't be accurate at all: expressed that way, a constraint violation has no metric for how wrong it is; the solver would only see a "correct"/"incorrect" flag.


Thanks again,


SAS Employee

Could you please try the experiment with these options:


solve with lso / absfconv=1e-10 nabsfconv=100 maxgen=500 popsize=500 feastol=1e-7;


Increasing the popsize often improves the solution quality.  

Pyrite | Level 9

Unfortunately the solution got worse...

Here's the result, and most of the points have status MAXGEN instead of ABSFCON.



SAS Employee

That is unfortunate. It may be that LSO is not a good fit.


I did try the first problem to verify that LSO gave comparable results, but did not have time to run through the entire list. If you have an instance in the loop where the objective Q.sol is much worse, that would help with diagnosis.


Looking closer, I didn't realize the constraints were nonlinear. The feastol is likely too tight, as LSO is a derivative-free optimization (DFO) algorithm.


If you have time/interest, could you rerun but replace the solve command with:


solve with lso / absfconv=1e-6 nabsfconv=100 maxgen=100 popsize=500 feastol=1e-3;
solve with nlp;


The second nlp solve is to increase the feasibility of the solution found. When moving to binary variables, we can add a relaxint option that may work analogously for the mixed-integer case.



Pyrite | Level 9

Yes, the constraints are nonlinear, but they are "nice" (in the sense that they are almost always polynomials and, in general, continuous and differentiable, which is why I expect this sense of continuity of the solutions over the grid).


I picked up these two instances:



P=6, H=85.

With nlp: Q1=9.58, Q2=0.00, Q=9.58 (status=OPTIMAL).

With lso: Q1=5.30, Q2=7.60, Q=12.90 (status=MAXGEN).



P=10, H=91.5.

With nlp: Q1=13.63, Q2=0.00, Q=13.63 (status=OPTIMAL).

With lso: Q1=7.26, Q2=9.52, Q=16.78 (status=ABSFCON).


I'll try to rerun it with the double solve command.

In the mean time I think I'll start to implement approach no.2, let's see if it's reliable.


Thanks again for all you suggestions!


SAS Employee

Thanks a lot for the two examples! I was able to get similar solutions (using a sledgehammer-sized population :) if I used these two solve commands:


solve with lso / absfconv=1e-6 nabsfconv=10 maxgen=10 popsize=10000 feastol=1e-2;
solve with nlp /algorithm=as;


But this could be a game of whack-a-mole. Also it is much slower ... so likely a dead end unless approach no. 2 does not pan out.


Thanks a lot for the examples. For better or worse, LSO treats all problems as black boxes and avoids assumptions about derivatives existing. So it may continue to optimize when other solvers get stuck; the price is that when derivatives do exist, it will still tick-tack toward the solution. Thus, in general, if everything in sight is smooth, the nlp solvers are more appropriate. But adding binary variables will of course induce some (albeit structured) non-smoothness ...


SAS Employee

This is something we could do for you in the next release.  But for now would need to be done manually:


One potential heuristic to overcome this hurdle when the number of "units" N is greater than 2 is to create a binary LHS (Latin hypercube sampling) design (over which variables should be fixed at 0 and which should be free) that predetermines which variables get fixed for a set of B "tests". There is example code here:


I.e., you could create an LHS sample over the unit hypercube, stored in a B x N data set "T", and read it into optmodel. For the ith row of T you would solve an nlp problem where Qj is fixed at 0 if Tij <= 0.5 and free otherwise. So the number of NLP problems you solve is controllable: a function of the number of "experiments" you decide to generate. It is of course a heuristic ...
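To illustrate the sampling step (a standalone Python sketch, not the SAS example code referenced above, with a hand-rolled Latin hypercube generator to keep it self-contained), generate a B x N matrix T and threshold it at 0.5 to decide which units are fixed at zero in each test:

```python
import numpy as np

def latin_hypercube(b, n, seed=1):
    """B x N Latin hypercube sample over the unit hypercube:
    each dimension places exactly one point in each of the B strata [k/B, (k+1)/B)."""
    rng = np.random.default_rng(seed)
    samples = np.empty((b, n))
    for j in range(n):
        strata = rng.permutation(b)                  # one stratum index per row
        samples[:, j] = (strata + rng.random(b)) / b  # jitter within each stratum
    return samples

B, N = 8, 5                       # number of "tests" and number of "units"
T = latin_hypercube(B, N)
fixed_at_zero = T <= 0.5          # row i: which units to fix at 0 in test i
assert fixed_at_zero.shape == (B, N)
# Solve one NLP per row, fixing Qj = 0 where fixed_at_zero[i, j] is True,
# and keep the best objective across the B tests.
```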


