## how to change the termination criteria of nlpqn or nlpnra to improve the optimal results

``````sas
proc iml;

start use_file(dataset, rettot);
   /* open the named data set and read all numeric variables into a matrix */
   call execute('use ', dataset, ';');
   read all var _num_ into rettot;
   call execute('close ', dataset, ';');
finish;

call use_file('_targr',targr);
call use_file('_covmat1',cov);
nrow = nrow(cov);

/* create start weight as the inverse of vol */
x0 = (1 / sqrt(vecdiag(cov)))`; /* row vector of inverse volatilities */

/* define the objective function */
start func(x) global(cov, targr);
margr = x` # (cov * x`); /* marginal risk */
totr = margr[+]; /* total risk */
rcon = margr / totr;
f = ssq(rcon - targr); /* sum of squared deviations from target */
return(f);
finish func;

opt = j(1, 11, .);
opt[1] = 0;  /* 0 = minimize, 1 = maximize */
opt[10] = 0; /* number of nonlinear constraints */
blc = J(1, nrow, 0); /*Lower bound constraint*/

tc = j(1, 13, .);

/* first run nlpnra; if not optimal, then run nlpqn; if still not optimal then error */
call nlpnra(rc, x1, "func", x0, opt, blc, tc);
objv1 = func(x1);
x = x1;
error = 0;
if objv1 > 0.0001 then do;
call nlpqn(rc, x2, "func", x0, opt, blc, tc);
objv2 = func(x2);
if objv2 > objv1 then do; x = x1; error = 1; end;
else if objv2 > 0.0001 then do; x = x2; error = 1; end;
else do; x = x2; error = 0; end;
end;
print error; print objv1; print x;
``````
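The fallback logic at the end of the program (run NLPNRA; if the objective is still too large, run NLPQN and keep the better result) is a general pattern. A minimal sketch of the same idea in Python with scipy (illustrative only; the solver names, the toy objective, and the tolerance are assumptions, not the SAS code):

```python
import numpy as np
from scipy.optimize import minimize

def solve_with_fallback(func, x0, tol=1e-4):
    """Try one local solver; if the objective is still above tol,
    retry with a second solver and keep the better of the two."""
    r1 = minimize(func, x0, method="BFGS")          # first attempt
    if r1.fun <= tol:
        return r1.x, r1.fun, 0                      # good enough: no error
    r2 = minimize(func, x0, method="Nelder-Mead")   # fallback solver
    if r2.fun > r1.fun:
        return r1.x, r1.fun, 1                      # fallback did worse: keep first
    return r2.x, r2.fun, int(r2.fun > tol)          # keep fallback; flag if still poor

# Toy convex objective with minimum 0 at x = (1, 1, 1):
f = lambda x: float(np.sum((x - 1.0) ** 2))
x, fval, err = solve_with_fallback(f, np.zeros(3))
```

As in the IML program, the error flag records whether either solver reached an acceptably small objective, not merely whether the solver reported convergence.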

Hi all,

I'm using a nonlinear optimization routine (NLPQN or NLPNRA) to solve for a set of non-negative weights. I have run into a problem: the optimization completes "successfully" but returns suboptimal weights (judging by comparison with solutions obtained from the Excel Solver). Usually the suboptimal results have one or two weights set to 0 (i.e., at the lower bound). The log says "ABSGCONV convergence criterion satisfied." When I switch routines (NLPQN or NLPNRA), I may or may not get the optimal weights.

Is there a way to change the input parameters (such as the termination criteria) so that the optimal weights are found?

Thanks,

Kun


3 REPLIES

## Re: how to change the termination criteria of nlpqn or nlpnra to improve the optimal results

Are there additional constraints on the weights? For example, do they need to sum to 1?

Also, are you sure that you have coded the objective function correctly? The "marginal risk" expression (x` # Cov * x), which is then normalized, looks suspiciously like a miscoding of the expression x*Cov*x`.

Can you describe the problem you are trying to solve, preferably with a link to a reference (textbook, article, or web page)?

With regard to your problem AS WRITTEN, it looks like there might be several local minima for this problem. Your initial guess is 1 / sqrt(vecdiag(cov)). The Newton and quasi-Newton algorithms converge; they just aren't converging to an answer that you like. If you change your initial guess, you might converge to a different minimum. For example, if you set

x0 = x0 + 1;

and then call NLPNRA, you get a different solution that has much bigger weights of size 2, 3, 5, and 12.  This makes me think that the problem is not formulated correctly.

My advice is to use a 2- or 3-dimensional example while developing the program. That will enable you to visualize the parameter space. To further simplify the problem, consider using an identity matrix in place of the covariance matrix.
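That simplification is easy to try. Here is a sketch of the low-dimensional toy problem in Python/numpy (an illustration of the same objective, not the original SAS code; the solver choice and start values are assumptions): with an identity covariance matrix and an equal risk target, any vector of equal weights drives the objective to zero, so a correct solver is easy to check by hand.

```python
import numpy as np
from scipy.optimize import minimize

n = 3
cov = np.eye(n)                  # identity matrix in place of the covariance
targr = np.full(n, 1.0 / n)      # equal target risk contributions

def func(x):
    margr = x * (cov @ x)        # marginal risk of each element
    rcon = margr / margr.sum()   # normalized risk contributions
    return float(np.sum((rcon - targr) ** 2))

x0 = np.array([0.2, 0.5, 1.5])   # deliberately unequal starting guess
res = minimize(func, x0, method="L-BFGS-B", bounds=[(1e-6, None)] * n)

rcon = res.x * (cov @ res.x)     # contributions at the solution
rcon = rcon / rcon.sum()
```

With the identity matrix the contributions reduce to x_i^2 / sum(x^2), so the solver should return (up to scale) equal weights, and the normalized contributions should all be close to 1/3.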

## Re: how to change the termination criteria of nlpqn or nlpnra to improve the optimal results

Hi Rick,

Thanks for your helpful reply. Here is some additional information related to your questions. Regardless, my original question is more general than specific: if the SAS nonlinear routine fails to yield a global optimum (for some reason it just converges to a local minimum), what should we do? Can we modify the termination criteria (or anything else) to get the global minimum?

Kun

> Are there additional constraints on the weights? For example, do they need to sum to 1?

-- I don't have any additional constraints, which means the optimal solutions are scalable (any positive multiple of a solution is also a solution). We could add a sum-to-1 constraint, but I don't think it would solve the problem (it would still converge to a local minimum).

> Also, are you sure that you have coded the objective function correctly? The "marginal risk" expression (x` # Cov * x), which is then normalized, looks suspiciously like a miscoding of the expression x*Cov*x`.

-- Yes, I think I have coded it right. The expression x*Cov*x` calculates the variance of the weighted elements, while my formula x` # (Cov * x`) decomposes that variance into the marginal contribution from each element. The result is an n-by-1 vector, and the sum of its elements equals x*Cov*x`.
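That decomposition is easy to verify numerically. A quick numpy check (a sketch whose names mirror the IML code, not part of the original program): the elementwise product x # (Cov*x) gives the per-element contributions, and they sum exactly to the total variance x*Cov*x`.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
cov = A @ A.T                 # a random positive semidefinite "covariance"
x = rng.random(5)             # arbitrary nonnegative weights

margr = x * (cov @ x)         # marginal contribution of each element
total = x @ cov @ x           # total variance x*Cov*x`

print(np.isclose(margr.sum(), total))
```

The identity holds because sum_i x_i (Cov x)_i is just the dot product x . (Cov x).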

> Can you describe the problem you are trying to solve, preferably with a link to a reference (textbook, article, or web page)?

-- My objective is to solve for a nonnegative vector of weights such that the marginal contribution of each raw element to the final variance meets a specified target. This is in line with the "risk parity" concept that has recently been developed in the portfolio-management literature. You can check this link (http://www.portfoliowizards.com/risk-parity-demo-workbook/) for a simple illustration using the Excel Solver.

> With regard to your problem AS WRITTEN, it looks like there might be several local minima for this problem. Your initial guess is 1 / sqrt(vecdiag(cov)). The Newton and quasi-Newton algorithms converge; they just aren't converging to an answer that you like. If you change your initial guess, you might converge to a different minimum. For example, if you set
>
> x0 = x0 + 1;
>
> and then call NLPNRA, you get a different solution that has much bigger weights of size 2, 3, 5, and 12. This makes me think that the problem is not formulated correctly.

> My advice is to use a 2- or 3-dimensional example while developing the program. That will enable you to visualize the parameter space. To further simplify the problem, consider using an identity matrix in place of the covariance matrix.

-- You are very right in pointing out that the nonlinear algorithms converge to a local minimum. So my question is very general: supposing we formulate the problem correctly, if a SAS nonlinear optimization fails to reach the global optimum, can we change the termination criteria (or anything else) to improve the results?
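For the general question, the standard remedy is not tighter termination criteria but a multistart: run the local optimizer from many initial guesses and keep the best result. In IML this would be a DO loop around the NLPQN call; the sketch below shows the pattern in Python with scipy on a deliberately multimodal test function (the function, solver, bounds, and number of starts are all illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def multistart(func, bounds, n_starts=20, seed=0):
    """Run a local solver from several random starting points and keep
    the best result. This improves the odds of finding the global
    minimum, but it does not guarantee it."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)       # random start inside the bounds
        res = minimize(func, x0, method="L-BFGS-B", bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    return best

# 1-D test function with a local minimum near x = +1 and the
# global minimum near x = -1:
f = lambda x: float((x[0] ** 2 - 1.0) ** 2 + 0.3 * x[0])

local = minimize(f, [0.9], method="L-BFGS-B", bounds=[(-2.0, 2.0)])
best = multistart(f, bounds=[(-2.0, 2.0)])
```

A single run started near the wrong basin stays there; the multistart finds the lower minimum because at least one random start lands in its basin of attraction.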

## Re: how to change the termination criteria of nlpqn or nlpnra to improve the optimal results

Quick reply: Usually convergence depends strongly on the initial guess, rather than on the convergence options. The PRECISION of the local extremum is determined by the options. You can read some tips in "How to find an initial guess for an optimization."

If an initial guess iterates into a local minimum, you cannot use the convergence criterion to "get out of" the local minimum and find the global minimum.
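The dependence on the initial guess is easy to see even in one dimension. A Python sketch (illustrative, not SAS; the solver choice is an assumption): the same solver with the same options, started from two different points, converges to two different minima of f(x) = (x² - 1)².

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: float((x[0] ** 2 - 1.0) ** 2)   # minima at x = -1 and x = +1

left  = minimize(f, x0=[-2.0], method="Nelder-Mead")
right = minimize(f, x0=[+2.0], method="Nelder-Mead")
# Identical objective, solver, and options; the starting point alone
# determines which minimum is found.
```

No termination criterion distinguishes the two runs; each simply converges to the minimum in its own basin of attraction.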

From The DO Loop