


Posted 10-13-2017 09:25 PM (1444 views)

I am attempting to run the Schwartz and Smith (2000) short-term/long-term model for futures pricing. I have a highly nonlinear constrained function (with a Kalman filter) that I'm trying to minimize through a call to NLPQN. The problem is that the NLPQN subroutine performs no iterations and doesn't move from the starting values, even when I try a large combination of starting values through a DO loop. Is there anything I can change in the termination criteria that will get the values to move from their initial starting value? The model has 7 parameters and two state variables, so visualizing any good starting values is out the window.

5 REPLIES


Please provide the code you are using and the output that you see. Are there any WARNING or ERROR messages in the SAS log? If so, please provide them.

When you say that the parameters "do not move from their initial starting value," it sounds like the NLPQN routine thinks that it is already at an extremum. This can happen in two ways:

- If you misspecified the derivative information that NLPQN uses to evaluate the gradient. Have you defined a function to evaluate the gradients? If so, read "Two hints for specifying derivatives" to ensure that your function is working correctly.
- If you did not define your own analytical derivatives, then the most likely problem is that your objective function is not correctly evaluating the input arguments. If it returns the same value for all inputs, then NLPQN thinks it is a constant function and therefore will not improve the initial guess.
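A quick way to test for the second case is to call the objective function by hand at the starting point and at a slightly perturbed point (a minimal sketch; `loglik` and `psi` here stand in for whatever your module and parameter vector are actually named):

```
/* Hypothetical sanity check: if the objective returns the same value
   for perturbed inputs, NLPQN sees a zero gradient and stops. */
f0 = loglik(psi);
psi2 = psi;
psi2[1] = psi2[1] + 1e-4;    /* nudge the first parameter */
f1 = loglik(psi2);
print f0 f1;   /* f0 = f1 for every perturbation ==> constant function */
```

If the printed values are identical no matter which parameter you perturb, the objective function is not really depending on its argument.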


```
dm log 'clear' output ;
dm output 'clear' output;
libname new "C:\Users\rst133\Dropbox\Thesis\Data Preparation\hogs";
ods html close; /*ods html;*/
/*ods graphics off;
ods exclude all;
ods noresults;*/
ods listing;
filename myoutput 'C:\Users\rst133\Dropbox\Thesis\Schwartz and Smith Model\Pre1996hogs\printfile'; /* change this accordingly */
filename mylog 'C:\Users\rst133\Dropbox\Thesis\Schwartz and Smith Model\Pre1996hogs\logfile'; /* change this accordingly */
proc printto log=mylog print=myoutput new;
run;
proc iml;
*reset print;
use new.hogfuturespre1996;
read all into data;
Num_Contracts = 6;
matur = j(6,1,0);
do i = 1 to 6;
matur[i,] = ((i-1)*4+1)/12;
end;
dt = 7/360;
* SELECT INITIAL VALUES ;
/* k = 1.489003;
sigmax = 0.276;*139252;
lambdax = 0.155066;
mu = -0.0213;
sigmae = 0.137067;
rnmu = 0.0115;
rho = 0.310058;
psi7 = k // sigmax // lambdax // mu // sigmae // rnmu // rho;*/
s_guess = 0.0000001;
init_dim = 7;
y = data[,6:(Num_Contracts+4)]; * y is a {nobs x N} Matrix, N = number of future contracts, nobs = number of observations;
nobs = nrow(y);
N = ncol(y);
boundary = 1e300; /* bounderies for constraints */
nboundary = -1*1e300;
start liklhd(x0,C0) global(y,c,d,G,F,V,N,nobs,dt,psi,matur,s_guess);
x0 = {4.24 0}; /* Initial state vector m(t)=E[xt;et] 1x2 */
C0 = {0.1 0.1, 0.1 0.1}; /* Initial covariance matrix for the state variables W(t)=cov[xt,et];*/
m = ncol(x0); * m = Number of state variables (number of rows in a0);
/* THE TRANSITION EQUATION */
* Extracting initial parameter values from initial psi ;
k = psi[1,];
sigmax = psi[2,];
lambdax = psi[3,];
mu = psi[4,];
sigmae = psi[5,];
rnmu = psi[6,];
rho = psi[7,];
/* NOTATION: x(t)=c+G*x(t-1)+n(t) n~N(0,R) */
c = j(m,1,0); * c is a {m x 1} Vector;
c[m,] = mu*dt;
G = i(m); * G is a {m x m} Matrix;
G[1,1] = exp(-k*dt);
* Defining R = var[n(t)] and W = var[w(t)];
W11=(1-exp(-2*k*dt))*((sigmax)**2)/(2*k); /* equation 3(b) in S&S */
W12=(1-exp(-k*dt))*((rho*sigmax*sigmae)/k);
W22=((sigmae)**2)*dt;
W=i(m);
W[1,1]=W11; W[1,2]=W12; W[2,1]=W12; W[2,2]=W22;
R=i(m);
/* THE MEASUREMENT EQUATION */
/* NOTATION: y(t)=d(t)+F(t)'x(t)+v(t) v~N(0,V) */
d = j(N,1,0); * d is a {N x 1} Vector;
F = j(N,m,0); * F is a {N x m} Matrix;
do i=1 to N;
d1=rnmu*matur[i]-(1-exp(-k*matur[i]))*(lambdax/k); /* equation (9) in S&S */
d2=(1-exp(-2*k*matur[i]))*((sigmax)**2)/(2*k);
d3=((sigmae)**2)*matur[i];
d4=2*(1-exp(-k*matur[i]))*((rho*sigmax*sigmae)/k);
d[i,1]=d1+0.5*(d2+d3+d4);
F[i,1]=exp(-k*matur[i]);
F[i,2]=1;
end;
* Measurment errors Var-Cov Matrix: Cov[v(t)]=V;
V_N = j(1,N,s_guess);
V = diag(V_N);
var = block(R,V); /* for Kalman Filter routine */
nz = nrow(G); nn = nrow(y); nk = ncol(y);
call kalcvf(pred,vpred,filt,vfilt,y,0,c,G,d,F,var,x0,C0);
et = y - pred*F`;
sum1 = 0; sum2 = 0;
do i = 1 to nn;
vpred_i = vpred[(i-1)*nz+1:i*nz,];
et_i = et[i,];
et_i1=et_i`;
ft = F*vpred_i*F` + var[nz+1:nz+nk,nz+1:nz+nk];
sum1 = sum1 + log(det(ft));
sum2 = sum2 + et_i*inv(ft)*et_i1;
sum = sum1 + sum2;
end;
return(sum);
finish;
start loglik(y);
nn = nrow(y); nn = nrow(y); nk = ncol(y);
pi = constant('pi');
const = nk*nn*log(2*pi);
sum = liklhd(x0,C0);
log_l=(-.5*const-.5*(sum)/(nk*nn));
return(log_L);
finish;
rank = 0;
do ii1 = 3.48 to 3.48 by 1;
do ii2 = 0.276 to 0.336 by 0.06;
do ii3 = 0.118 to 0.158 by 0.04;
do ii4 = -0.0225 to 0.0125 by 0.035;
do ii5 = 0.001 to 0.151 by 0.05;
do ii6 = 0.00125 to 0.01125 by 0.01;
do ii7 = 0.29 to 0.31 by 0.02;
rank = rank + 1;
k = ii1;
sigmax = ii2;
lambdax = ii3;
mu = ii4;
sigmae = ii5;
rnmu = ii6;
rho = ii7;
psi7 = k // sigmax // lambdax // mu // sigmae // rnmu // rho;
psi = j((init_dim+N),1,0);
psi = psi7 // s_guess // s_guess // s_guess // s_guess // s_guess;
bounds = j(2,(init_dim+N),.);
nb = ncol(bounds) - 2;
lb = j((init_dim+N),1,0);
lb7 = repeat(nboundary,init_dim,1);
lb7[1] = 0; lb7[2] = 0; lb7[5] = 0; lb7[7] = -1;
lb5 = repeat(0.000000001,5,1);
lb = lb7 // lb5;
ub = j((init_dim+N),1,0);
ub6 = repeat(boundary,6,1);
ub7 = 1;
ub5 = repeat(boundary,5,1);
ub = ub6 // ub7 // ub5;
do jj = 1 to nb;
bounds[1,jj] = lb[jj,];
bounds[2,jj] = ub[jj,];
end;
opt = {0 5 . 1};
tc = {2000 5000};
call nlpqn(rc,psi_opt, "loglik", psi, opt, bounds, tc,,,,,);
psi_opt1 = t(psi_opt);
sumd = 0;
do i = 1 to 7;
sumd = sumd + abs(psi[i,] - psi_opt[i]);
end;
if sumd > 0 then do;
print rank sumd psi_opt1 psi;
end;
end;
end;
end;
end;
end;
end;
end;
quit;
proc printto;
run;
quit;
```

Attached are the .txt files I am printing output and log to as well as a copy of the code.


I'm sorry, I attached the wrong log and print files to that last message.


OK, so as I suspected the gradient is being evaluated as exactly zero, which is why no iterations are being performed.

Before you start optimizing, you need to manually make sure that the objective function is working correctly. See

"Ten tips before you run an optimization." I suspect that you need to begin with Tip #1.

I cannot follow your long, complicated program, but it looks like the problem is the way you are defining the LOGLIK function. The initial value of the local variable y is the vector of parameters PSI. Then NLPQN tries to approximate gradients by varying elements of the argument y: it sets y = PSI + dPSI. But it looks like LOGLIK will return the same value for every input, because you are not passing y down into the LIKLHD function.

I see that you used PSI as a global variable in LIKLHD (and many other unnecessary globals). Perhaps you think that PSI is getting updated by the optimizer, but that is not how the optimizer works. The optimizer will call LOGLIK with various values of y, so you need to make sure LOGLIK is working correctly.

I strongly encourage you to start with a simpler problem and learn how to use NLPQN on a simple problem. Then use the Tips in the article to guide your programming.
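For illustration only (this is a toy objective, not the poster's model), here is the minimal pattern NLPQN expects: the module receives the parameter vector as its argument and computes the return value from that argument, so perturbing the argument changes the result:

```
proc iml;
/* Toy objective: NLPQN varies x, and the return value depends on x */
start objfun(x);
   return( -( (x[1]-2)##2 + (x[2]+1)##2 ) );  /* maximum at x = {2 -1} */
finish;

x0  = {0 0};          /* starting values */
opt = {1 2};          /* opt[1]=1: maximize; opt[2]=2: print iterations */
call nlpqn(rc, xopt, "objfun", x0, opt);
print rc xopt;        /* xopt should move from {0 0} toward {2 -1} */
quit;
```

Once a simple call like this iterates as expected, the same structure (parameters in, likelihood out, everything else computed inside or passed as genuine constants) carries over to the Kalman-filter likelihood.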

