Hi. I'm calling nlptr in IML after having run a grid search for initial values to give to nlptr. The grid search is good enough that nlptr often cannot improve on the initial values. In these cases, I don't want to have:
ERROR: TRUREG Optimization cannot be completed.
WARNING: Optimization routine cannot improve the function value.
printed to the log. It's not an error. This is part of a big program used by many people in our organization so I can't just say "Turn your notes off."
I do not think there is a way in SAS to suppress ERROR messages.
Some programmers use PROC PRINTTO to redirect the SAS log to a file, filter it (e.g., remove the ERROR and WARNING messages that you want to ignore), and then display the log. Some programmers have even developed macros that automate this task. Here's one that I found with a quick search:
https://www.lexjansen.com/nesug/nesug04/ap/ap09.pdf
I have not used these macros myself, so I cannot vouch for them.
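For completeness, here is a minimal sketch of the redirection idea; the file path and the filtered strings are placeholders that you would adapt to your program:

```sas
/* redirect the log to a scratch file (the path is a placeholder) */
proc printto log="work_filtered.log" new;
run;

/* ... run the IML optimization whose messages you want to filter ... */

/* restore the log to its default destination */
proc printto;
run;

/* echo the file back to the log, skipping the lines you want to ignore */
data _null_;
   infile "work_filtered.log";
   input;
   if index(_infile_, "ERROR: TRUREG Optimization cannot be completed.") = 0 &
      index(_infile_, "WARNING: Optimization routine cannot improve the function value.") = 0
      then putlog _infile_;
run;
```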
Since you don't regard nonconvergence as an error, you could also relax the convergence criteria. Instead of using the default (stringent) termination criteria, you might relax them. If you think that "within 0.1 units" is "close enough", you can tell that to the NLP routines.
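For example, here is a sketch of passing relaxed termination criteria through the TC= option of an NLP call. The objective module is a placeholder, and the use of tc[6] for ABSXCONV follows the termination-criteria table in the SAS/IML documentation; verify the index against your SAS release:

```sas
proc iml;
/* hypothetical objective; replace with your own module */
start Func(x);
   return( -ssq(x - {1 2}) );
finish;

x0  = {0.9 2.1};      /* grid-search result, already close to the optimum */
opt = {1 0};          /* opt[1]=1: maximize */
tc  = j(1, 9, .);     /* missing values keep the default criteria */
tc[6] = 0.1;          /* ABSXCONV: "within 0.1 units" is close enough */
call nlptr(rc, xres, "Func", x0, opt) tc=tc;
print rc xres;
quit;
```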
Well, I'd certainly regard nonconvergence as a problem. My issue is rather that the grid search has already found a good solution that is within the default (stringent) criteria for the termination criterion. Why should this be reported to the log as an error?
If your initial guess is the exact optimal solution, I do not think you will get an error message. If you can provide an example where you get an ERROR when your guess is the exact solution, I would like to see it.
In my experience, that error message often indicates lack of convergence because the objective function is flat near the optimal solution. This often happens in MLE computations when the data set is small or doesn't fit the proposed model. The error could mean that the algorithm cannot improve the current position because the objective function is flat.
To demonstrate what I mean by "flat", here is an example of a one-dimensional flat function: f(x) = exp(-|x - 1/2|^10);
Most algorithms will stop before they converge to the optimal value at x=1/2. Here is a graph of the function:
data Flat;
do x = 0 to 1 by 0.01;
   y = exp(-abs(x-0.5)**10);
   output;
end;
run;

proc sgplot data=Flat;
   series x=x y=y;
run;
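Handing NLPTR a near-optimal guess on this flat function is likely to trigger the message in question. This is a sketch; the starting value 0.45 is arbitrary:

```sas
proc iml;
start F_flat(x);
   return( exp(-abs(x - 0.5)##10) );
finish;

x0  = 0.45;           /* a good guess, like one from a grid search */
opt = {1 0};          /* opt[1]=1: maximize */
call nlptr(rc, xres, "F_flat", x0, opt);
print rc xres;        /* rc > 0 indicates successful termination */
quit;
```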
There is an easy way to determine if you are correct: add a small random jitter to your initial guesses, such as
x0 = x0 + randfun(1||ncol(x0), "Normal", 0, 0.1);
If you still get the error message, then you are experiencing a convergence problem. If the error message goes away, then please provide a reproducible example so I can study it.
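Put together, the jitter check might look like this; the objective module and the starting value are placeholders, and RANDSEED initializes the random number stream that RANDFUN uses:

```sas
proc iml;
/* placeholder objective; replace with your own module */
start Func(x);
   return( -ssq(x - {1 2}) );
finish;

call randseed(1234);
x0 = {0.9 2.1};                                     /* grid-search guess */
x0 = x0 + randfun(1||ncol(x0), "Normal", 0, 0.1);   /* small random jitter */
opt = {1 0};                                        /* maximize */
call nlptr(rc, xres, "Func", x0, opt);
print rc xres;
quit;
```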
Thanks for the concise, easy algorithm for testing if my response surface is flat. But couldn't your software already be applying that algorithm? And so returning meaningful RCs and accompanying messages? But unfortunately I'm pretty sure that's not my problem.
Perhaps I wasn't clear. When the function is extremely flat, it is difficult for any algorithm to find the exact value of the mathematical extremum. I did not provide an algorithm for finding the maximum, I provided a way for you to examine whether the error you are seeing is because your initial guess is already at the maximum (as you claim) or because the algorithm cannot make further progress (as I claim).
In practical terms, this might not matter, since parameters for which the objective function gives 0.99999 are often just as useful as parameters for which it gives 1.00000.
Since you have not posted your code or your data, I am merely guessing. There are dozens of issues that can arise in optimization that can lead to non-convergence. If you would like additional assistance, please post your code and data.
Thanks Rick and team, we were finally able to solve this problem by switching optimizers. We had been using nlpntr and switched to nlpnms.