<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: IML: optimizing a multivariate normal density in SAS/IML Software and Matrix Computations</title>
    <link>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/376426#M3605</link>
    <description>&lt;P&gt;Hey Rick,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;thank you for the fast reply. I'm embarrassed about points two and three, sorry about that. Regarding the Mahalanobis distance: I had noticed that there is a built-in function for calculating this distance in IML. However, because I want to avoid obtaining a singular covariance matrix during the optimization (which would produce error messages in the log), I have adapted the Mahalanobis module to reject such solutions (see my additions in the module).&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Below is the code with your suggestions included. However, running this code produces the error message "NEWRAP Optimization cannot be completed." and the warning "Optimization routine cannot improve the function value." in the log. The solution mue = {-5.7 4.97} and cove = {0.05 -0.83, -0.83 12.55} is far from the sample estimates mu = {3 4} and cov = {8 1.1, 1.1 11.1}.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I have compared my objective function for the MVN with your MVN pdf function (&lt;A href="http://blogs.sas.com/content/iml/2012/07/05/compute-multivariate-normal-denstity.html" target="_blank"&gt;http://blogs.sas.com/content/iml/2012/07/05/compute-multivariate-normal-denstity.html&lt;/A&gt;) and I get the same result (when I take the log of your solution). So I assume the problem is not in the objective function. I also compared my gradient function (GradLogLik) with a finite-difference approximation, and the results agree. Hence, I assume the problem is not in the gradient function either. Do you have any further suggestions?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;proc iml;
/*** Write a module that computes the log-likelihood ***/
/** Mahalanobis module from Rick Wicklin (2012) **/
/** Cut and paste the definition of the Mahalanobis module. **/
/** See http://blogs.sas.com/content/iml/2012/02/22/how-to-compute-mahalanobis-distance-in-sas/ **/
start mahalanobis(x, center, cov);
   n = nrow(x);      
   m = nrow(center);
   d2 = j(n,m); 
   U = root(cov, "NoError");
   if all(u=.) then return(1E10); /* my addition: if cov is singular, force NLP to try another estimate of cov (see Gu &amp;amp; Yung, 2013: A SAS/IML program using the Kalman filter for estimating state space models) */
   Ur = round(trace(ginv(U)*U)); 
   if Ur &amp;lt; nrow(U) then return(1E10); /* my addition: if U is singular, force NLP to try another estimate of cov */
   do k = 1 to m;   
      v = x - center[k,];
      z = trisolv(2, U, v`);
      w = trisolv(1, U, z);
      d2[,k] = (v # w`)[,+];
   end;
   return (sqrt(d2));
finish;
start LogLikMVN(param) global(x,nmu);
   mu = t(param[1:nmu]);
   cov1 = t(param[(nmu+1):ncol(param)]);
   cov = sqrvech(cov1);
   rowx = nrow(x);
   eins = j(1,rowx,1);
   d = mahalanobis(x, mu, cov);
   if d=1E10 then return(1E10); /* if cov is singular, this will force NLP to try another estimate of cov */
   dd = d#d;
   dsd = eins * dd;
   logdet1 = logabsdet(cov);
   logdet = choose(logdet1[2],logdet1[1] * logdet1[2], 0);
   f = - (rowx*nmu/2 * log(2*constant("PI"))) - (rowx/2 * logdet) - (1/2 * dsd);
   return( f );
finish;



/*** Write a module that computes the gradient of the log-likelihood function ***/
start GradLogLik(param) global (x,nmu);
   mu = t(param[1:nmu]);
   cov1 = t(param[(nmu+1):ncol(param)]);
   cov = sqrvech(cov1);
   rowx = nrow(x);
   colx = ncol(x);
   eins = j(1,rowx,1);
   xdif = x - mu;
   xdifs = t(xdif[+,]);
   invcov=inv(cov);
   dfdmu = t(invcov*xdifs);
   B = j(colx,colx,0);
   do i=1 to rowx;
	xred = xdif[i,];
	A = t(xred)*xred;
	B = B + A;
   end;
   dfdcov1 = -0.5 * (invcov -  invcov*B*invcov);
   dfdcov = t(vech(dfdcov1)); 
   return ( dfdmu || dfdcov );
finish;


/*** Call an optimization routine ***/
nmu = {2};
x = {0  4,
     2  2,
     1  4,
     6  3,
     2  1,
     1  0,
     3  8,
     5  1,
     9  7,
     1 10};
con = { .   .   0   .   0,  
        .   .   .   .   .}; 
p = {2 3 7 1 10};
opt = {0,5};
call nlpnra(rc, result, "LogLikMVN", p, opt, con) grd="GradLogLik";
quit;&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Mon, 17 Jul 2017 08:03:22 GMT</pubDate>
    <dc:creator>Daniel_Paul</dc:creator>
    <dc:date>2017-07-17T08:03:22Z</dc:date>
    <item>
      <title>IML: optimizing a multivariate normal density</title>
      <link>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/374470#M3584</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I am currently trying to optimize the ML objective function of the multivariate normal density. For this, I basically follow the example by Rick Wicklin (&lt;A href="http://blogs.sas.com/content/iml/2011/10/12/maximum-likelihood-estimation-in-sasiml.html" target="_blank"&gt;http://blogs.sas.com/content/iml/2011/10/12/maximum-likelihood-estimation-in-sasiml.html&lt;/A&gt;). However, instead of using the univariate normal density as the objective function, I use the multivariate normal density (&lt;A href="http://blogs.sas.com/content/iml/2012/07/05/compute-multivariate-normal-denstity.html" target="_blank"&gt;http://blogs.sas.com/content/iml/2012/07/05/compute-multivariate-normal-denstity.html&lt;/A&gt;). To cope with singular covariance matrices I added a suggestion made by Gu &amp;amp; Yung. In addition, I supply the gradient of the MVN, and I place restrictions on the parameter estimates so that the elements of the covariance matrix are symmetric.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When I run the code, the optimizer succeeds with different starting vectors. However, I do not get the sample estimates for the input data set.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Below is an example for two variables with mean mu={1 1.3} and cov={1 -1, -1 1.33}. With the starting vector p={1 2 1 -1 -1 2} I get the estimates mue={1.15 4.56} and cove={1.19 -1.23, -1.23 0.31}. Can I trust these results?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Bye, Paul&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;proc iml;
/*** Write a module that computes the log-likelihood ***/
/** Mahalanobis module from Rick Wicklin (2012) **/
/** Cut and paste the definition of the Mahalanobis module. **/
/** See http://blogs.sas.com/content/iml/2012/02/22/how-to-compute-mahalanobis-distance-in-sas/ **/
start mahalanobis(x, center, cov);
 n = nrow(x);      
 m = nrow(center);
 d2 = j(n,m);    
 U = root(cov, "NoError");
 if all(u=.) then return(1E10); /* my addition: if cov is singular, force NLP to try another estimate of cov (see Gu &amp;amp; Yung, 2013: A SAS/IML program using the Kalman filter for estimating state space models) */
 Ur = round(trace(ginv(U)*U)); 
 if Ur &amp;lt; nrow(U) then return(1E10); /* my addition: if U is singular, force NLP to try another estimate of cov */
 do k = 1 to m;   
  v = x - center[k,];
  z = trisolv(2, U, v`);
  w = trisolv(1, U, z);
  d2[,k] = (v # w`)[,+];
 end;
 return (sqrt(d2));
finish;
start LogLikMVN(param) global(x,nmu);
 mu = t(param[1:nmu]);
 cov1 = shape(param[(nmu+1):ncol(param)],nmu,nmu);
 cov = (cov1 + t(cov1))/2;
 rowx = nrow(x);
 eins = j(1,rowx,1);
 d = mahalanobis(x, mu, cov);
 if d=1E10 then return(1E10); /* if cov is singular, this will force NLP to try another estimate of cov */
 dd = d#d;
 dsd = eins * dd;
 f = - (rowx*nmu/2 * log(2*constant("PI"))) - (rowx/2 * log(det(cov))) - (1/2 * dsd);
 return( f );
finish;

/*** Write a module that computes the gradient of the log-likelihood function ***/
start GradLogLik(param) global (x,nmu);
 mu = t(param[1:nmu]);
 cov1 = shape(param[(nmu+1):ncol(param)],nmu,nmu);
 cov = (cov1 + t(cov1))/2;
 rowx = nrow(x);
 colx = ncol(x);
 eins = j(1,rowx,1);
 xdif = x - mu;
 xdifs = t(xdif[+,]);
 invcov=inv(cov);
 dfdmu = t(invcov*xdifs);
 B = j(colx,colx,0);
 do i=1 to rowx;
  xred = xdif[i,];
  A = t(xred)*xred;
  B = B + A;
 end;
 dfdcov1 = -0.5 * (invcov -  invcov*B*invcov); 
 dfdcov = shape(dfdcov1,1); 
 return ( dfdmu || dfdcov );
finish;

/*** Call an optimization routine ***/
nmu = {2};
x = {0 4,
     2 2,
     1 4};
con = { .  .  0  .  .  0  .  .,  
        .  .  .  .  .  .  .  .,
        .  .  .  1 -1  .  0  .}; 
p = {1 2 1 -1 -1 2};
opt = {1,5};
call nlpnra(rc, result, "LogLikMVN", p, opt, con) grd="GradLogLik";
quit;&lt;/CODE&gt;&lt;/PRE&gt;
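&lt;P&gt;(For reference, the objective function above corresponds to this short NumPy sketch; the variable names are mine, not from the SAS program. Note that the MLE of the covariance divides by n rather than n-1, so the maximizer is not the usual sample covariance.)&lt;/P&gt;

```python
import numpy as np

def loglik_mvn(x, mu, cov):
    # f = -nk/2*log(2*pi) - n/2*log(det(cov)) - 1/2*sum(d2),
    # where d2 are the squared Mahalanobis distances of the rows of x
    n, k = x.shape
    diff = x - mu
    d2 = np.sum((diff @ np.linalg.inv(cov)) * diff, axis=1)
    logdet = np.linalg.slogdet(cov)[1]
    return -0.5 * (n * k * np.log(2.0 * np.pi) + n * logdet + d2.sum())

x = np.array([[0.0, 4], [2, 2], [1, 4]])
mu = x.mean(axis=0)
cov = np.cov(x, rowvar=False, bias=True)  # MLE covariance uses 1/n, not 1/(n-1)
ll = loglik_mvn(x, mu, cov)
```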
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 10 Jul 2017 12:47:51 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/374470#M3584</guid>
      <dc:creator>Daniel_Paul</dc:creator>
      <dc:date>2017-07-10T12:47:51Z</dc:date>
    </item>
    <item>
      <title>Re: IML: optimizing a multivariate normal density</title>
      <link>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/374492#M3586</link>
      <description>&lt;P&gt;1. You don't need to write your own MAHALANOBIS function. As it says in the blog post, the MAHALANOBIS function has been distributed with SAS/IML for many releases.&lt;/P&gt;
&lt;P&gt;2. The covariance matrix is symmetric, so there are only five independent parameters, not six. &lt;A href="http://blogs.sas.com/content/iml/2012/03/21/creating-symmetric-matrices-two-useful-functions-with-strange-names.html" target="_self"&gt;You can form a&amp;nbsp;symmetric matrix from a vector by using the SQRVECH function.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;3. If you are trying to MINIMIZE a function, use opt[1]=0. To MAXIMIZE a function use opt[1]=1.&lt;/P&gt;
&lt;P&gt;4. You can &lt;A href="http://blogs.sas.com/content/iml/2014/10/20/log-determinant-of-a-matrix.html" target="_self"&gt;use the built-in LOGABSDET function to compute the log of the determinant&amp;nbsp;of a matrix.&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 17 Jul 2017 15:06:21 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/374492#M3586</guid>
      <dc:creator>Rick_SAS</dc:creator>
      <dc:date>2017-07-17T15:06:21Z</dc:date>
    </item>
    <item>
      <title>Re: IML: optimizing a multivariate normal density</title>
      <link>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/376426#M3605</link>
      <description>&lt;P&gt;Hey Rick,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;thank you for the fast reply. I'm embarrassed about points two and three, sorry about that. Regarding the Mahalanobis distance: I had noticed that there is a built-in function for calculating this distance in IML. However, because I want to avoid obtaining a singular covariance matrix during the optimization (which would produce error messages in the log), I have adapted the Mahalanobis module to reject such solutions (see my additions in the module).&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
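&lt;P&gt;(For readers following along, the Cholesky-based computation inside the Mahalanobis module can be sketched in NumPy as follows; this is an illustrative translation, not the IML code itself.)&lt;/P&gt;

```python
import numpy as np

def mahalanobis(x, center, cov):
    # Mahalanobis distance via the Cholesky root U (cov = U.T @ U),
    # mirroring the two TRISOLV calls in the IML module:
    # solve U` z = v, then U w = z, so that v # w row-sums to v' inv(cov) v
    U = np.linalg.cholesky(cov).T          # upper-triangular root, as ROOT returns
    v = x - center
    z = np.linalg.solve(U.T, v.T)
    w = np.linalg.solve(U, z)
    return np.sqrt(np.sum(v.T * w, axis=0))

x = np.array([[0.0, 4], [2, 2], [1, 4], [6, 3]])
d = mahalanobis(x, np.array([3.0, 4.0]), np.array([[8.0, 1.1], [1.1, 11.1]]))
```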
&lt;P&gt;Below is the code with your suggestions included. However, running this code produces the error message "NEWRAP Optimization cannot be completed." and the warning "Optimization routine cannot improve the function value." in the log. The solution mue = {-5.7 4.97} and cove = {0.05 -0.83, -0.83 12.55} is far from the sample estimates mu = {3 4} and cov = {8 1.1, 1.1 11.1}.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I have compared my objective function for the MVN with your MVN pdf function (&lt;A href="http://blogs.sas.com/content/iml/2012/07/05/compute-multivariate-normal-denstity.html" target="_blank"&gt;http://blogs.sas.com/content/iml/2012/07/05/compute-multivariate-normal-denstity.html&lt;/A&gt;) and I get the same result (when I take the log of your solution). So I assume the problem is not in the objective function. I also compared my gradient function (GradLogLik) with a finite-difference approximation, and the results agree. Hence, I assume the problem is not in the gradient function either. Do you have any further suggestions?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;proc iml;
/*** Write a module that computes the log-likelihood ***/
/** Mahalanobis module from Rick Wicklin (2012) **/
/** Cut and paste the definition of the Mahalanobis module. **/
/** See http://blogs.sas.com/content/iml/2012/02/22/how-to-compute-mahalanobis-distance-in-sas/ **/
start mahalanobis(x, center, cov);
   n = nrow(x);      
   m = nrow(center);
   d2 = j(n,m); 
   U = root(cov, "NoError");
   if all(u=.) then return(1E10); /* my addition: if cov is singular, force NLP to try another estimate of cov (see Gu &amp;amp; Yung, 2013: A SAS/IML program using the Kalman filter for estimating state space models) */
   Ur = round(trace(ginv(U)*U)); 
   if Ur &amp;lt; nrow(U) then return(1E10); /* my addition: if U is singular, force NLP to try another estimate of cov */
   do k = 1 to m;   
      v = x - center[k,];
      z = trisolv(2, U, v`);
      w = trisolv(1, U, z);
      d2[,k] = (v # w`)[,+];
   end;
   return (sqrt(d2));
finish;
start LogLikMVN(param) global(x,nmu);
   mu = t(param[1:nmu]);
   cov1 = t(param[(nmu+1):ncol(param)]);
   cov = sqrvech(cov1);
   rowx = nrow(x);
   eins = j(1,rowx,1);
   d = mahalanobis(x, mu, cov);
   if d=1E10 then return(1E10); /* if cov is singular, this will force NLP to try another estimate of cov */
   dd = d#d;
   dsd = eins * dd;
   logdet1 = logabsdet(cov);
   logdet = choose(logdet1[2],logdet1[1] * logdet1[2], 0);
   f = - (rowx*nmu/2 * log(2*constant("PI"))) - (rowx/2 * logdet) - (1/2 * dsd);
   return( f );
finish;



/*** Write a module that computes the gradient of the log-likelihood function ***/
start GradLogLik(param) global (x,nmu);
   mu = t(param[1:nmu]);
   cov1 = t(param[(nmu+1):ncol(param)]);
   cov = sqrvech(cov1);
   rowx = nrow(x);
   colx = ncol(x);
   eins = j(1,rowx,1);
   xdif = x - mu;
   xdifs = t(xdif[+,]);
   invcov=inv(cov);
   dfdmu = t(invcov*xdifs);
   B = j(colx,colx,0);
   do i=1 to rowx;
	xred = xdif[i,];
	A = t(xred)*xred;
	B = B + A;
   end;
   dfdcov1 = -0.5 * (invcov -  invcov*B*invcov);
   dfdcov = t(vech(dfdcov1)); 
   return ( dfdmu || dfdcov );
finish;


/*** Call an optimization routine ***/
nmu = {2};
x = {0  4,
     2  2,
     1  4,
     6  3,
     2  1,
     1  0,
     3  8,
     5  1,
     9  7,
     1 10};
con = { .   .   0   .   0,  
        .   .   .   .   .}; 
p = {2 3 7 1 10};
opt = {0,5};
call nlpnra(rc, result, "LogLikMVN", p, opt, con) grd="GradLogLik";
quit;&lt;/CODE&gt;&lt;/PRE&gt;
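&lt;P&gt;(The gradient comparison described above can be sketched generically: a central-difference approximation, the idea behind NLPFDD, evaluated next to a known analytic gradient. The helper name num_grad and the toy objective are hypothetical, for illustration only.)&lt;/P&gt;

```python
import numpy as np

def num_grad(f, p, h=1e-6):
    # central finite differences: g[i] = (f(p + h*e_i) - f(p - h*e_i)) / (2h)
    g = np.zeros_like(p, dtype=float)
    for i in range(p.size):
        e = np.zeros(p.size)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2.0 * h)
    return g

# toy objective with known analytic gradient [2*p0, 3]
f = lambda p: p[0] ** 2 + 3.0 * p[1]
g = num_grad(f, np.array([2.0, 5.0]))
```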
&lt;P&gt;&amp;nbsp; &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 17 Jul 2017 08:03:22 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/376426#M3605</guid>
      <dc:creator>Daniel_Paul</dc:creator>
      <dc:date>2017-07-17T08:03:22Z</dc:date>
    </item>
    <item>
      <title>Re: IML: optimizing a multivariate normal density</title>
      <link>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/376530#M3606</link>
      <description>&lt;P&gt;There are general tips in the article &lt;A href="http://blogs.sas.com/content/iml/2016/05/11/tips-before-optimization.html" target="_self"&gt;"Ten tips before you run an optimization"&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regarding your particular problem, I hope to take a look at it later. &amp;nbsp;Two quick suggestions:&lt;BR /&gt;1. I believe you should return -1e10 to short-circuit the LogLikMVN function?&lt;/P&gt;
&lt;P&gt;2. If you use the sample moments for the initial parameter estimates, then the NLP iteration runs off to infinity. I would double-check the correctness of the objective function.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;sampleMean = mean(x);
sampleCov = cov(x);
p = rowvec(sampleMean) || rowvec(sampleCov[{1 2 4}]);
&lt;/PRE&gt;</description>
      <pubDate>Mon, 17 Jul 2017 13:13:48 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/376530#M3606</guid>
      <dc:creator>Rick_SAS</dc:creator>
      <dc:date>2017-07-17T13:13:48Z</dc:date>
    </item>
    <item>
      <title>Re: IML: optimizing a multivariate normal density</title>
      <link>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/376566#M3607</link>
      <description>&lt;P&gt;The finite-difference derivative does not seem to match the value from the GradLogLik function:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;sampleMean = mean(x);
sampleCov = cov(x);
print sampleMean, samplecov;
p = rowvec(sampleMean) || rowvec(sampleCov[{1 2 4}]);

LL = LogLikMVN(p);
grad = GradLogLik(p);
print LL, grad;

call nlpfdd(func, FDDGrad, hessian, "LogLikMVN", p);  
print func, FDDGrad;&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Mon, 17 Jul 2017 14:14:07 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/376566#M3607</guid>
      <dc:creator>Rick_SAS</dc:creator>
      <dc:date>2017-07-17T14:14:07Z</dc:date>
    </item>
    <item>
      <title>Re: IML: optimizing a multivariate normal density</title>
      <link>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/376606#M3608</link>
      <description>&lt;P&gt;1. I think we had some miscommunication earlier about the value of opt[1]. At the time I wasn't sure whether you were maximizing or minimizing the objective function. I now see that you are maximizing the LogLikMVN function, so use opt[1]=1, not opt[1]=0.&lt;/P&gt;
&lt;P&gt;2. The way you use LOGABSDET is wrong.&amp;nbsp;The second term in the loglikelihood is the log of the absolute value of the determinant. &amp;nbsp;You are using +/- of that value. Change to&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;logdet1 = logabsdet(cov)[1]; &amp;nbsp;/* log( |det| ) */&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
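&lt;P&gt;(NumPy's slogdet is the analog of LOGABSDET, except that it returns the sign first and the log-magnitude second. For a positive-definite covariance the sign is +1, so the log-magnitude alone is the term the log likelihood needs:)&lt;/P&gt;

```python
import numpy as np

cov = np.array([[8.0, 1.1], [1.1, 11.1]])
sign, logabs = np.linalg.slogdet(cov)  # NumPy analog of LOGABSDET (components reversed)
# for a positive-definite covariance the sign is +1.0,
# so logabs alone equals log(det(cov))
check = np.log(np.linalg.det(cov))
```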
&lt;P&gt;You can simplify some of the other computations. I believe the following gives the correct MLE. I'll leave you to figure out the&amp;nbsp;GradLogLik issue.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;start LogLikMVN(param) global(x);
   nmu = ncol(x);
   mu = t(param[1:nmu]);
   cov1 = t(param[(nmu+1):ncol(param)]);
   cov = sqrvech(cov1);
   rowx = nrow(x);
   d = mahalanobis(x, mu, cov);
   if d=1E10 then return(-1E10); /* if cov is singular, this will force NLP to try another estimate of cov */
   dsd = sum(d#d);
   logdet1 = logabsdet(cov)[1];  /* log( |det| ) */
   f = - (rowx*nmu/2 * log(2*constant("PI"))) - (rowx/2 * logdet1) - (1/2 * dsd);
   return( f );
finish;

con = { .   .   0   .   0,  
        .   .   .   .   .}; 
p = {2 3 7 1 10};
opt = {1,4};
call nlpnra(rc, result, "LogLikMVN", p, opt, con); /* grd="GradLogLik" */
&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Mon, 17 Jul 2017 15:17:26 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/376606#M3608</guid>
      <dc:creator>Rick_SAS</dc:creator>
      <dc:date>2017-07-17T15:17:26Z</dc:date>
    </item>
    <item>
      <title>Re: IML: optimizing a multivariate normal density</title>
      <link>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/376624#M3609</link>
      <description>&lt;P&gt;Hey Rick,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;that's right, I also figured out that we had different objective functions in mind (log likelihood vs. negative log likelihood), so I had already switched the parameter to opt[1]=1. With this I get apparently correct estimates of the parameters (mue = {3 4} and cove = {7.2 1, 1 10}), as long as I don't use the gradient function module.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;So in summary: compared to the original code, one has to reduce the number of parameters, use LOGABSDET, and change 1E10 to -1E10. In addition, I have to adapt the gradient function module. I will post the working code as soon as I have the correct gradient module in place.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thank you for helping me.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Bye, Paul&amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 17 Jul 2017 15:51:39 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/376624#M3609</guid>
      <dc:creator>Daniel_Paul</dc:creator>
      <dc:date>2017-07-17T15:51:39Z</dc:date>
    </item>
    <item>
      <title>Re: IML: optimizing a multivariate normal density</title>
      <link>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/378613#M3619</link>
      <description>&lt;P&gt;&lt;SPAN class="short_text"&gt;&lt;SPAN class=""&gt;For the sake of completeness&lt;/SPAN&gt;&lt;/SPAN&gt; here the working code with gradient function:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
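&lt;P&gt;(The key change in the gradient module below is the ftwo matrix: each off-diagonal element of the symmetric covariance appears twice in the matrix, so in the vech parameterization its partial derivative is twice the corresponding entry of the matrix gradient. That factor of two can be checked numerically; this NumPy sketch is illustrative only:)&lt;/P&gt;

```python
import numpy as np

def loglik(x, mu, cov):
    n, k = x.shape
    diff = x - mu
    d2 = np.sum((diff @ np.linalg.inv(cov)) * diff, axis=1)
    return -0.5 * (n * k * np.log(2.0 * np.pi) + n * np.linalg.slogdet(cov)[1] + d2.sum())

rng = np.random.default_rng(1)
x = rng.normal(size=(20, 2))
mu = x.mean(axis=0)
cov = np.array([[2.0, 0.5], [0.5, 3.0]])

# matrix-calculus gradient with respect to the full covariance matrix:
# dL/dSigma = -n/2 * inv(Sigma) + 1/2 * inv(Sigma) B inv(Sigma), B = sum of outer products
n = x.shape[0]
inv = np.linalg.inv(cov)
B = (x - mu).T @ (x - mu)
dfdcov = -0.5 * n * inv + 0.5 * inv @ B @ inv

# numerical derivative with respect to the single vech parameter cov[0,1] = cov[1,0]
h = 1e-6
covp = cov.copy(); covp[0, 1] += h; covp[1, 0] += h
covm = cov.copy(); covm[0, 1] -= h; covm[1, 0] -= h
num = (loglik(x, mu, covp) - loglik(x, mu, covm)) / (2.0 * h)
# num matches 2 * dfdcov[0, 1], not dfdcov[0, 1]
```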
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;proc iml;
/*** Write a module that computes the log-likelihood ***/
/** Mahalanobis module from Rick Wicklin (2012) **/
/** Cut and paste the definition of the Mahalanobis module. **/
/** See http://blogs.sas.com/content/iml/2012/02/22/how-to-compute-mahalanobis-distance-in-sas/ **/
start mahalanobis(x, center, cov);
   n = nrow(x);      
   m = nrow(center);
   d2 = j(n,m); 
   U = root(cov, "NoError");
   if all(u=.) then return(-1E10); /* my addition: if cov is singular, force NLP to try another estimate of cov (see Gu &amp;amp; Yung, 2013: A SAS/IML program using the Kalman filter for estimating state space models) */
   Ur = round(trace(ginv(U)*U)); 
   if Ur &amp;lt; nrow(U) then return(-1E10); /* my addition: if U is singular, force NLP to try another estimate of cov */
   do k = 1 to m;   
     v = x - center[k,];
     z = trisolv(2, U, v`);
     w = trisolv(1, U, z);
     d2[,k] = (v # w`)[,+];
   end;
   return (sqrt(d2));
finish;
start LogLikMVN(param) global(x);
   nmu = ncol(x);
   mu = t(param[1:nmu]);
   cov1 = t(param[(nmu+1):ncol(param)]);
   cov = sqrvech(cov1);
   rowx = nrow(x);
   d = mahalanobis(x, mu, cov);
   if d=-1E10 then return(-1E10); /* if cov is singular, this will force NLP to try another estimate of cov */
   dsd = sum(d#d);
   logdet = logabsdet(cov)[1];
   f =  - (rowx*nmu/2 * log(2*constant("PI"))) - (rowx/2 * logdet) - (1/2 * dsd);
   return( f );
finish;



/*** Write a module that computes the gradient of the log-likelihood function ***/
/** Diagonal module from Rick Wicklin (2013) **/
/** See http://blogs.sas.com/content/iml/2013/10/21/assign-the-diagonal-elements-of-a-matrix.html **/
start SetDiag(A, v);
   diagIdx = do(1,nrow(A)*ncol(A), ncol(A)+1);
   A[diagIdx] = v;             /* set diagonal elements */
finish;
start GradLogLik(param) global (x);
   nmu = ncol(x);
   mu = t(param[1:nmu]);
   cov1 = t(param[(nmu+1):ncol(param)]);
   cov = sqrvech(cov1);
   rowx = nrow(x);
   xdif = x - mu;
   xdifs = t(xdif[+,]);
   invcov=inv(cov);
   dfdmu = t(invcov*xdifs);
   ftwo = j(nmu,nmu,2);
   run SetDiag(ftwo, 1);
   B = j(nmu,nmu,0);
   do i=1 to rowx;
     xred = xdif[i,];
     A = t(xred)*xred;
     B = B + A;
   end;
   dfdcov1a = -rowx/2 * invcov;
   dfdcov1b = 1/2 * invcov*B*invcov;
   dfdcov1c = dfdcov1a + dfdcov1b;
   dfdcov1 = ftwo#dfdcov1c;
   dfdcov = t(vech(dfdcov1));
   return ( dfdmu || dfdcov );
finish;


/*** Call an optimization routine ***/
x = {0 4 2,
     2 2 1,
     1 4 5,
     6 3 3,
     2 1 2,
     1 0 1,
     3 8 0,
     5 1 8,
     9 7 1,
     1 10 12};
con = { .   .   .   0   .   .   0   .   0,  
        .   .   .   .   .   .   .   .   .}; 
p = {1 1 1 7 1 1 10 2 10};
opt = {1,5};
call nlpnra(rc, result, "LogLikMVN", p, opt, con) grd="GradLogLik";
quit;&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Mon, 24 Jul 2017 09:12:45 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/378613#M3619</guid>
      <dc:creator>Daniel_Paul</dc:creator>
      <dc:date>2017-07-24T09:12:45Z</dc:date>
    </item>
    <item>
      <title>Re: IML: optimizing a multivariate normal density</title>
      <link>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/378621#M3620</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/37291"&gt;@Daniel_Paul&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;I'm glad you got it working. If you replace the loop&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;   B = j(nmu,nmu,0);
   do i=1 to rowx;
     xred = xdif[i,];
     A = t(xred)*xred;
     B = B + A;
   end;&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;with the matrix multiplication&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;   B = xdif`*xdif;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;I suspect you will see a significant performance improvement.&lt;/P&gt;
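&lt;P&gt;(The equivalence is easy to verify numerically: the accumulated row outer products equal a single crossproduct. A NumPy sketch:)&lt;/P&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
xdif = rng.normal(size=(10, 3))

# loop version: accumulate outer products row by row, as in GradLogLik
B_loop = np.zeros((3, 3))
for row in xdif:
    B_loop = B_loop + np.outer(row, row)

# one matrix product, the analog of xdif`*xdif in IML
B_mat = xdif.T @ xdif
```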
&lt;P&gt;Best wishes!&lt;/P&gt;</description>
      <pubDate>Mon, 24 Jul 2017 09:39:03 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/IML-optimizing-a-multivariate-normal-density/m-p/378621#M3620</guid>
      <dc:creator>Rick_SAS</dc:creator>
      <dc:date>2017-07-24T09:39:03Z</dc:date>
    </item>
  </channel>
</rss>

