## How to do an optimization with a piecewise linear constraint?

Hello,

I'm trying to solve the following optimization problem:

min f(x) subject to h(x) > L, where x is n-dimensional and h is a piecewise linear function.

h(x) can also be written as h(x) = h_1(x_1) + ... + h_n(x_n), where each h_i is a piecewise linear function of a single variable. E.g.: h_i(x_i) = 0 if x_i < 5 and h_i(x_i) = 10 if x_i >= 5.

Note that each h_i is monotonically increasing.

For the problem I'm trying to solve, n > 100 and each piecewise function h_i consists of around 25 "cutpoints".

I have a dataset that describes h (columns: i, "start cutpoint", "end cutpoint", value).
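To make the table format concrete, here is a quick sketch (in Python rather than SAS, and with an assumed half-open interval convention) of how such rows evaluate h; the tuple layout mirrors the columns just described:

```python
import math

def h(x, rows):
    """Evaluate h(x) = sum_i h_i(x_i) from (i, start, end, value) rows.

    x: dict mapping variable index i -> x_i
    rows: iterable of (i, start_cutpoint, end_cutpoint, value) tuples;
    intervals are assumed half-open, start <= x_i < end.
    """
    total = 0.0
    for i, start, end, value in rows:
        if start <= x[i] < end:
            total += value
    return total

# The example h_i from above: 0 below the cutpoint 5, 10 at or above it.
rows = [(1, -math.inf, 5, 0), (1, 5, math.inf, 10)]
print(h({1: 3}, rows))   # 0.0
print(h({1: 7}, rows))   # 10.0
```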

Is there an easy way to solve this problem?

I tried to solve a really simple version of the problem:

n = 2, f(x) = x_1 + x_2, and h(x) = 0 unless x_1 >= 4, in which case h(x) = 3. Furthermore, I set 1 <= x_i <= 10 and L = 2, so the optimal solution is (4, 1).
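(As a sanity check on that claimed optimum, a brute-force grid search, sketched here in Python and not part of the SAS model, confirms that (4, 1) with f = 5 is the best feasible point:)

```python
# Brute-force check of the toy instance: minimize x1 + x2 subject to
# h(x) >= L with L = 2, where h(x) = 3 if x1 >= 4 and 0 otherwise,
# and 1 <= xi <= 10.  A step of 1/20 suffices since the optimum lies
# on the grid.

def h(x1, x2):
    return 3 if x1 >= 4 else 0

grid = [k / 20 for k in range(20, 201)]          # 1.00, 1.05, ..., 10.00
best = min((x1 + x2, x1, x2)
           for x1 in grid for x2 in grid
           if h(x1, x2) >= 2)
print(best)  # (5.0, 4.0, 1.0): optimum x = (4, 1) with f(x) = 5
```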

However the following SAS code fails to deliver the optimal solution:

```
proc optmodel;
   number n = 2;
   var x{1..n} >= 1 <= 10;
   minimize f = x[1] + x[2];
   con loss: (x[1] >= 4)*3 >= 2;
   solve with NLP / multistart=(maxstarts=20);
   print x;
quit;
```

Without the multistart option, the solution (1.15, 1.15) is not even feasible. With the multistart option the solution (4.8, 1.6) is feasible, but still not optimal.

It seems to me that the solver cannot handle this constraint. I tried writing the constraint in other ways, but so far I have not been able to get a satisfying solution.

Obviously the feasible region of the problem I'm trying to solve is complicated. However, since each h_i is monotonically increasing, the region is connected, so it should be possible to solve this problem. Or am I missing something?

So my question is: How to do an optimization with a piecewise linear constraint?

Greetings

Bernd

**Accepted Solution**

## Re: How to do an optimization with a piecewise linear constraint?

Derivative-based NLP solvers have difficulty with discontinuities like this.  One alternative approach is to use the black-box solver:

```
proc optmodel;
   number n = 2;
   var x{1..n} >= 1 <= 10;
   minimize f = x[1] + x[2];
   con loss: (x[1] >= 4)*3 >= 2;
   solve with blackbox;
   print x;
quit;
```
```
Solution Summary
  Solver               Black-Box
  Objective Function   f
  Solution Status      Function Convergence
  Objective Value      5.0000000097
  Infeasibility        0
  Random Seed Used     1
  Evaluations          1578
  Cached Evaluations   105
  Iterations           30
  Presolve Time        0.19
  Solution Time        0.24
```

```
  x
1 4
2 1
```

The black-box solver does find an optimal solution here, but that is not guaranteed. To guarantee an optimal solution, you can linearize the problem and use the MILP solver; since the two intervals describe h_1, the indicator constraints are written on x[1]:

```
proc optmodel;
   number n = 2;
   var x{1..n} >= 1 <= 10;
   minimize f = x[1] + x[2];

   set INTERVALS = 1..2;
   var y {INTERVALS} binary;
   num start {INTERVALS} = [1,4];
   num end   {INTERVALS} = [4,10];
   num value {INTERVALS} = [0,3];

   /* exactly one interval of h_1 is active */
   con OneInterval:
      sum {i in INTERVALS} y[i] = 1;

   /* the value of the active interval must satisfy h_1(x_1) >= L = 2 */
   con loss:
      sum {i in INTERVALS} value[i] * y[i] >= 2;

   num epsilon = 1e-4;
   /* y[i] = 1 implies start[i] <= x[1] <= end[i] - epsilon */
   con Indicator1 {i in INTERVALS}:
      start[i] - x[1] <= (start[i] - x[1].lb) * (1 - y[i]);
   con Indicator2 {i in INTERVALS}:
      x[1] - end[i] + epsilon <= (x[1].ub - end[i] + epsilon) * (1 - y[i]);

   solve;
   print x;
   print y;
quit;
```
```
Solution Summary
  Solver                  MILP
  Algorithm               Branch and Cut
  Objective Function      f
  Solution Status         Optimal
  Objective Value         5
  Relative Gap            0
  Absolute Gap            0
  Primal Infeasibility    0
  Bound Infeasibility     0
  Integer Infeasibility   0
  Best Bound              5
  Nodes                   0
  Solutions Found         2
  Iterations              0
  Presolve Time           0.25
  Solution Time           0.33
```

```
  x
1 4
2 1
```

```
  y
1 0
2 1
```

By the way, we are working on automating the indicator constraints.  Stay tuned.
