StefK
SAS Employee

Hey all,

 

I want to optimize a function with NLPQN, using both boundary and nonlinear constraints (the BLC and NLC arguments).

However, the algorithm often enters the infeasible region; that is, certain parameter iterates violate the nonlinear constraints.
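For concreteness, here is a stripped-down sketch of the shape of my call; the objective, the constraint, and every name below are toy placeholders, not my actual code:

proc iml;
/* toy objective: maximize a concave quadratic (stands in for my real model) */
start objfun(x);
   return( -( (x[1]-1)##2 + (x[2]-2)##2 ) );
finish;

/* nonlinear constraint module: NLPQN treats each returned element
   as a constraint of the form c(x) >= 0 */
start nlcfun(x);
   return( 1 - x[1]*x[2] );      /* toy constraint: x1*x2 <= 1 */
finish;

x0  = {0 0};                     /* initial guess */
blc = {-2 -2,                    /* row 1: lower bounds */
        2  2};                   /* row 2: upper bounds */
opt = j(1, 11, .);
opt[1]  = 1;                     /* maximize */
opt[2]  = 2;                     /* print some iteration output */
opt[10] = 1;                     /* one nonlinear inequality constraint */

call nlpqn(rc, xres, "objfun", x0, opt, blc) nlc="nlcfun";
print rc xres;
quit;

I would appreciate any thoughts on the matter.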

 

 

4 REPLIES
Rick_SAS
SAS Super FREQ

I've never experienced the problem that you are describing, but mathematically it is possible for the optimum to occur on the boundary of the region. I can imagine that a Newton-type algorithm might try to take an "uphill" step, only to discover that it has stepped into the forbidden region, which would cause it to decrease its step size and repeat.

 

If you can share an example, we might be able to make more relevant comments.

 

In the absence of details, I recommend that you visualize the problem to try to understand what is going on. You might try solving the problem without imposing the NLC to see where the "natural" optimum is. You can use the techniques in the article "How to find an initial guess for an optimization." Also, print out the iteration history (opt[2]=4) to see how the optimization is behaving.
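For example, you could run something like this (a sketch; "objfun", x0, and blc stand for whatever you are already passing in):

opt = j(1, 11, .);
opt[2] = 4;                                     /* detailed iteration history */
call nlpqn(rc, xres, "objfun", x0, opt, blc);   /* NLC= omitted on purpose */
print rc xres;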

StefK
SAS Employee

Thanks for the response, Rick. Unfortunately, I cannot share the code.

 

Here is a simple explanation of what is happening: at some point in my objective function I need to calculate log(λ+1) and log(1-λ), where λ is an eigenvalue of a 2x2 matrix whose entries are the optimization parameters. My constraints force λ to be between -1 and 1 (the typical causality criterion for a VAR(1) series), so I cannot evaluate the objective function when the constraints are violated.
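In IML terms, my constraint module looks roughly like the following; the three-parameter symmetric layout and the epsilon buffer are illustrative simplifications, not my real code:

/* keep both eigenvalues of A = [x1 x2, x2 x3] strictly inside (-1, 1);
   eps buffers the boundary so log(1-λ) and log(λ+1) stay defined.
   With this module, set opt[10]=4 because four inequalities are returned. */
start nlcfun(x);
   eps = 1e-6;
   A = (x[1] || x[2]) // (x[2] || x[3]);
   lambda = eigval(A);                          /* real for a symmetric matrix */
   return( (1 - lambda - eps) // (lambda + 1 - eps) );   /* all must be >= 0 */
finish;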

 

Prior to the calculation of log(λ+1) and log(1-λ), I have an IF statement that stops the optimization and returns the current matrix iterate if λ>1 or λ<-1. Now, despite the constraint function not allowing λ>1 or λ<-1, the algorithm often enters that IF statement and returns a matrix with eigenvalues outside the feasible region (e.g., λ=1.2).
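The guard has roughly this shape; I have simplified it here to print the offending iterate and return a missing value, but the real code halts the optimization at that point:

start objfun(x);
   A = (x[1] || x[2]) // (x[2] || x[3]);
   lambda = eigval(A);
   if any(lambda >= 1) | any(lambda <= -1) then do;
      print "infeasible iterate:" A lambda;
      return( . );                /* log terms undefined; return missing */
   end;
   return( sum(log(1 + lambda)) + sum(log(1 - lambda)) );   /* plus the rest */
finish;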

 

1. Is it possible that the algorithm makes a quadratic approximation of, say, a log(constraints) penalty term, and that this allows it to consider infeasible solutions?

2. Would it help if I came up with a mapping from the space of all real 2x2 matrices to that of matrices with eigenvalues between -1 and 1? I would then eliminate the nonlinear constraints altogether (see the sketch below).
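For instance, a spectral construction along these lines would put every candidate matrix inside the feasible set by design (purely illustrative):

/* build a symmetric 2x2 matrix with eigenvalues in (-1, 1) from three
   unconstrained parameters: a rotation angle and two raw values
   squashed by tanh */
start makeMatrix(p);
   lambda = tanh(p[2] // p[3]);         /* eigenvalues mapped into (-1, 1) */
   Q = (cos(p[1]) || -sin(p[1])) //
       (sin(p[1]) ||  cos(p[1]));       /* orthogonal 2x2 rotation */
   return( Q * diag(lambda) * Q` );
finish;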

 

In any case, I'll try your suggestions and respond when I have something new. Thanks so much again!

 

 

Rick_SAS
SAS Super FREQ

Prior to the calculation of log(λ+1) and log(1-λ), I have an IF statement that stops the optimization and returns the current matrix iterate if λ>1 or λ<-1.

 


I would look carefully at how you are handling this condition. Newton's method assumes a continuous objective function.  If you are short-circuiting the computation and "returning the matrix iterate," that could be the cause of the problem.

 


1. Is it possible that the algorithm makes a quadratic approximation of, say, a log(constraints) penalty term, and that this allows it to consider infeasible solutions?


That doesn't sound likely.

 


2. Would it help if I came up with a mapping from the space of all real 2x2 matrices to that of matrices with eigenvalues between -1 and 1? I would then eliminate the nonlinear constraints altogether.


 

I hope you already have a NORMALIZATION constraint in the problem. If A*x = lambda*x satisfies your problem, then you need to make sure that simply scaling the matrix elements does not also satisfy the problem. In other words, prevent (cA) from satisfying the problem. One technique is to consider only matrices of the form

[1 b]

[b c]

rather than the general symmetric 2x2 matrix

[a b]

[b c].
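In code, that restriction is just a matter of how you build the matrix from the parameter vector. A sketch, assuming the symmetric form above:

/* fix the (1,1) entry at 1 so that scalar multiples cA are excluded;
   only b and c remain as free parameters */
start makeA(p);
   b = p[1];  c = p[2];
   return( (1 || b) // (b || c) );
finish;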

 

If you can't tell me anything more about your problem, I doubt there is much more I can suggest. Maybe someone else will have additional ideas. Good luck!

Rick_SAS
SAS Super FREQ

Here's a concrete suggestion: switch to the classical Newton's method (NLPNRA). The quasi-Newton methods try to reuse the gradient/Jacobian information from a previous iteration. Although the classical Newton's method is computationally more expensive per iteration, the step size and direction are always based on current information, which might help prevent the problem you are reporting.
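The call has the same shape as the NLPQN call; here is a sketch that reuses the placeholder names from earlier in the thread:

opt = j(1, 11, .);
opt[1] = 1;                     /* maximize */
opt[2] = 4;                     /* watch the step sizes in the history */
call nlpnra(rc, xres, "objfun", x0, opt, blc);
print rc xres;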
