rbettinger
Pyrite | Level 9

I am trying to use SAS/IML to perform computations with numbers that in some cases are very small, e.g., less than constant('maceps'), and am stymied by my inability to make the expression ndx = loc( vector = max( vector )) return only one value instead of more than one. When, for example, vector = {8.372E-26 4.63E-103}, the variable ndx will contain {0 0} because the two values in vector are smaller than constant('maceps'), which is 2.2E-16.

I am enclosing the code of an IML module that demonstrates what I want to do, along with a listing of the module's inputs and outputs for your perusal. For small values of m and p, e.g., .01, computing ( x ## p - y ## p ) ## ( 1/p ) produces values close to 1, and the results include either 0 or some very small number for which computing 1 + number > 1 yields false (0) rather than true (1).
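As a quick illustration of the precision loss I mean (a throwaway check, not part of the module), using constant('maceps'):

proc iml ;
   eps   = constant( 'maceps' ) ;   /* machine epsilon, about 2.22E-16             */
   flag1 = ( 1 + eps   > 1 ) ;      /* 1 (true): the addend survives the rounding  */
   flag2 = ( 1 + eps/2 > 1 ) ;      /* 0 (false): the addend is lost in rounding   */
   print eps flag1 flag2 ;
quit ;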

I would appreciate any suggestions, because I have tried strategies such as scaling the x and y values by constant('maceps') or constant('small'), but to no avail. Perturbing x[ j ] and y[ i, j ] by adding a uniform random variate in [0,1] multiplied by constant('maceps') similarly fails when 1/p is large, e.g., 1/p >> 1. For example,

proc iml ;
   a = 8.372E-26 ;
   b = 4.63E-103 ;
   p = .01 ;

   c = ( a ## p + b ## p ) ## ( 1/p ) ;

   print c ;
quit ;

----------
Output:
---------
c
4.985E-19

TIA,

Ross

options nonotes ;

proc iml ;

reset storage=featclus.featclus ;
load module=_all_ ;

start compute_similarity( x, y, m, p ) ;
   /* compute unweighted ("normal") Lukasiewicz structure similarity
    * 
    * purpose: compute similarity between feature vector j, class mean(s) i
    *     
    * parameters: 
    *    x ::= 1 x n_feature vector of features, each column normalized into [0, 1]
    *    
    *    y ::= n_class x n_feature matrix of class means, produced from normalized x features
    */

   if nrow( x ) ^= 1 then run error( 'compute_similarity', 'Feature vector x must be row vector' ) ;
   
   n_classes  = nrow( y ) ; /* # of class means in class mean matrix */
   n_feat     = ncol( x ) ; /* # of features in x vector             */

   dist       = j( n_classes, n_feat, . ) ;
   similarity = j( 1, n_classes, . ) ;
   
   do i = 1 to n_classes ;
      do j = 1 to n_feat ;
         /* compute similarity btwn x, y for each feature element x[ j ], class mean value y[ i, j ] */

         dist[ i, j ] = ( 1 - abs( x[ j ] ## p - y[ i, j ] ## p )) ## ( 1 / p ) ;
      end ; /* j */
   end ; /* i */

   /* compute similarity btwn x, y for each class mean
    * dist[ , : ] is the column vector of row means (one mean per class);
    * its transpose is a 1 x n_classes row vector
    */
   similarity = dist[ , : ]` ## ( 1 / m ) ;
   
   return similarity ;
finish compute_similarity ;

x = { .1 .2, .3 .4, .5 .6, .7 .8 } ;
if any( x = 0 ) then x[ loc( x=0 ) ] = constant( 'maceps' ) ;

y = { .15 .35, .351 .45 } ;
print x y ;

rslt = compute_similarity( x[1,], y, 1,1 ) ;
print rslt ;
rslt = compute_similarity( x[2,], y, .5, .5 ) ;
print rslt ;
rslt = compute_similarity( x[3,], y, .001, .01 ) ;
print rslt ;
rslt = compute_similarity( x[4,], y, .05, .05 ) ;
print rslt ;

rslt = compute_similarity( x[1,], y, 0.000024943605419,   4.148552428951820) ;
print rslt ;

***store module=compute_similarity ;
quit ;

----------
Output:
----------
x          y
0.1  0.2   0.15   0.35
0.3  0.4   0.351  0.45
0.5  0.6
0.7  0.8
 
rslt
0.9 0.7495
 
rslt
0.6600432 0.8439022
 
rslt
0 6.03E-139
 
rslt
2.496E-10 3.9649E-6
 
rslt
8.372E-26 4.63E-103

 

9 REPLIES
Tom
Super User

You lost me, but I think you are saying that when you try to compare a series of very small values to a constant, you are getting TRUE as the result because the DIFFERENCE is also very small?

 

In theory you could just multiply the values by some larger value instead. Perhaps better to use a number that is a power of 2, since the values are stored using binary numbers.
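For example, something along these lines (just a sketch, untested):

proc iml ;
   vector = {8.372E-26 4.63E-103} ;
   scale  = 2##85 ;                      /* a power of 2: scaling changes only the exponent bits, not the mantissa */
   scaled = vector # scale ;
   print scaled ;
   ndx = loc( scaled = max( scaled )) ;  /* comparisons on the scaled values behave the same as on the originals   */
   print ndx ;
quit ;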

rbettinger
Pyrite | Level 9

Thank you for replying. In the interests of using my time well, I am going to avoid trying to solve this problem by rewriting the code that produces the problem. Let's say that I am using Captain Kirk's Kobayashi Maru solution, i.e., he reprogrammed the simulation that was designed to cause him to fail.

Ksharp
Super User
Maybe you could use ROUND() to avoid this problem ?
if any( x = 0 ) then x[ loc( x=0 ) ] = constant( 'maceps' ) ;
----->
if any( round(x,1E-6) = 0 ) then x[ loc( round(x,1E-6)=0 ) ] = constant( 'maceps' ) ;
rbettinger
Pyrite | Level 9

Thank you for replying. In the interests of using my time well, I am going to avoid trying to solve this problem by rewriting the code that produces the problem.

Rick_SAS
SAS Super FREQ

> ... am stymied by my inability to make the expression ndx = loc( vector = max( vector )) return only one value instead of more than one. When, for example, vector = {8.372E-26 4.63E-103}, the variable ndx will contain {0 0} because the two values in vector are smaller than constant('maceps'), which is 2.2E-16.

 

Can you explain your first sentence? It cannot be correct because LOC returns either an empty matrix or a set of positive integers. It will never return a zero. Here's what I see, which looks correct:

proc iml;
vector = {8.372E-26 4.63E-103}; 
ndx = loc( vector= max( vector ));
print ndx;

/*
ndx 
1 
*/

I will think about the second half of your question.

rbettinger
Pyrite | Level 9

Thank you for replying. In the interests of using my time well, I am going to avoid trying to solve this problem by rewriting the code that produces it.

Rick_SAS
SAS Super FREQ

@rbettinger wrote:

Thank you for replying. In the interests of using my time well, I am going to avoid trying to solve this problem by rewriting the code that produces it.


I have read your posting several times. I do not understand what you think the problem is. Please give ONE example that shows the problem and tell us what you think the correct answer should be.

rbettinger
Pyrite | Level 9

Rick,

Thank you for your persistence. I will describe my problem as best as I can so that you may detect any errors in my reasoning.

I am applying a fuzzy similarity measure from Lukasiewicz logic called a "normal Lukasiewicz structure". I have enclosed a partially completed draft of a paper describing this concept; the concept is stated in reference [6] of the draft that is attached to this posting.

I am trying to classify a set of data represented as columns of features that contain measures of a process, in this case, the well-known Wisconsin Breast Cancer data downloaded from the UCI KDD ML archives. Each column has been linearly standardized into [0,1]. I have composed an objective function based on the Lukasiewicz similarity structure to compute the parameters m and p in equation (6) in the draft. Essentially, the Lukasiewicz structure is the generalized mean of the sum of the Minkowski distances between each feature value and the class mean of that feature computed from each subset of the data that are classified into groups by their class values. Equation (6) describes the similarity between feature vector j and class mean vector i. Similarity values lie in the interval [0,1].
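For concreteness, this is my transcription of equation (6) as it is implemented in the compute_similarity module below (the attached draft has the authoritative statement): for a feature vector x with elements x_j and class mean vector y_i with elements y_ij,

s_{ij} = \left( 1 - \left| x_j^{p} - y_{ij}^{p} \right| \right)^{1/p} , \qquad
\mathrm{sim}_i = \left( \frac{1}{n} \sum_{j=1}^{n} s_{ij} \right)^{1/m} ,

so each per-feature term s_{ij} lies in [0, 1], and the similarity of the feature vector to class i is the mean of those terms raised to the power 1/m.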

I use differential evolution (DE) to minimize an objective function based on the Lukasiewicz normal structure. The DE algorithm computes pairs m and p for a set of scenarios. I have specified parameter values for the DE algorithm and once all of the scenarios have been run, I choose the set of parameters that produce the maximum Enhanced Matthews Correlation Coefficient. Using this optimal pair of m and p, I score a validation dataset and use the performance metrics to measure the efficacy of the algorithm.

My difficulties arise when I compute the Lukasiewicz normal structure using parameter values m and p that force the Minkowski differences computed in equation (6) to be very close to 0 or to 1.

I have attached a SAS program, compute_similarity.sas, that computes the similarity between a feature vector and one or more class mean vectors. It is invoked by the module ffss_classify_score_test.sas (also attached), which performs preliminary processing before invoking compute_similarity. I also include a PDF file containing the output of ffss_classify_score_test, which contains numerous examples of the case when

ndx_pred_class  = loc( sim_mat[ j, ] = max( sim_mat[ j, ] )) ;

 

returns two values, indicating a failure to produce a unique maximum value. This result is not due to a fault in the loc() function, because sim_mat[] contains {0 0} or {1 1} when two values are produced. Rather, the process that makes the classifier output nonunique is what interests me; to wit, how can I prevent values like

.9999995284293320000  .0947296688581390000  .9999999949998210000  .0640424310483060000  .9999999999892080000
.9999995534842310000  .1050134801112600000  .9999999936572220000  .1031645294170150000  .9999999992193940000

which are the contents of the compute_similarity dist[] matrix, from being converted to 1's or 0's in subsequent computation?
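To be concrete about the nonuniqueness, here is a throwaway check (not taken from the attached programs):

proc iml ;
   sim_row = { 0 0 } ;                               /* a degenerate row of sim_mat       */
   ndx_pred_class = loc( sim_row = max( sim_row )) ; /* every element ties with the max   */
   print ndx_pred_class ;                            /* {1 2}: two indices, no unique max */
quit ;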

These are the inputs to compute_similarity:

x
.2574131800000000000  1.000000000000000000  .1562455800000000000  1.000000000000000000  .0796267300000000000

y
.0616844800000000100  .0947296700000000000  .0587415400000000100  .0640424500000000000  .0259968300000000000
.1864923200000000000  .1050134800000000000  .1709497800000000000  .1031645300000000000  .1276107500000000000

m
.0000170200000000000

p
9.106516920000000000

The dist matrix represents the Lukasiewicz similarities between each element of the feature and the respective elements of the two class means to which the feature is being compared.

dist
.9999995284293320000  .0947296688581390000  .9999999949998210000  .0640424310483060000  .9999999999892080000
.9999995534842310000  .1050134801112600000  .9999999936572220000  .1031645294170150000  .9999999992193940000

The rslt vector contains the similarities of the feature vector to each class mean. In this case, there appears to be no similarity, which I surmise is due to the limitations of finite precision arithmetic.

rslt
0.000000000000000000  0.000000000000000000
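For what it is worth, here is a rough sketch of my own (the row means and m below are approximate values taken from the listings above) that reproduces the collapse to zero and carries the same comparison out in log scale, where the two classes remain distinguishable:

proc iml ;
   rowmean = { 0.6317543 0.6416355 } ;   /* approximate row means of the dist matrix above           */
   m       = 0.00001702 ;                /* the m produced by the DE step                             */

   sim     = rowmean ## ( 1/m ) ;        /* underflows to {0 0}: exp( (1/m)#log(rowmean) ) << 1E-308  */
   logsim  = ( 1/m ) # log( rowmean ) ;  /* the same quantities in log scale, no underflow            */
   print sim logsim ;

   ndx = loc( logsim = max( logsim )) ;  /* log scale still yields a unique predicted class           */
   print ndx ;
quit ;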

 

Rick_SAS
SAS Super FREQ

Can you explain what you think is wrong or problematic about the following code that you posted? The computations look okay to me, so I guess I do not understand your concerns:

proc iml ;
   a = 8.372E-26 ;
   b = 4.63E-103 ;
   p = .01 ;
   c = ( a ## p + b ## p ) ## ( 1/p ) ;
   print c ;
quit ;
