Calcite | Level 5

I am excited to see if someone can really understand and crack my question.


proc means data=ghy1 sum;
var wew;
run;



The above delivers the proper sum for up to 20 million records, but it fails for 50 million records.


I see a slight deviation from the sum calculated using a data step (where the decimals were rounded using the ROUND function).


Is there any option to tell PROC MEANS to round the decimals when summing?


I'd be really happy if someone cracks this...

Super User

PROC MEANS supports an option, MAXDEC, to control the maximum number of decimals displayed.


You may want something like

proc means data=ghy1 maxdec=2 sum;
var wew;
run;




BUT if your data step rounded before accumulating, then you introduced many small differences, likely one for each record.
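The effect of rounding the running total on every record can be illustrated outside SAS: on common platforms SAS stores numerics as 8-byte IEEE 754 doubles, the same representation as a Python float. A minimal sketch (the values are made up for illustration):

```python
# Rounding the running total on every record (as the data step does)
# drifts away from summing the raw values (as PROC MEANS does).
vals = [0.333333] * 3

plain = 0.0
for v in vals:
    plain += v                         # raw accumulation

rounded = 0.0
for v in vals:
    rounded = round(rounded, 4) + v    # round the total each step

# plain is about 0.999999, rounded is about 0.999933: a small
# difference on almost every record, growing with the record count.
```

With 50 million records, those per-record differences add up to a visible deviation.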

We would have to see your data step code to provide a better reason for differences.

Calcite | Level 5

Sum by PROC MEANS:

proc means data=ghy1 sum;
var wew;
run;


Sum by data step:

data ghy1;
infile xxx;
input yyyy;
retain wev 0; /* retain so the running total accumulates across records */
wev = round(wev, .0001) + wew;
run;


The MAXDEC option doesn't have any effect on the SUM statistic (or any statistical calculation); it only affects the printing of decimals.



Super User

You're processing the data before you do the sum. The only way I can think of to match this is with a custom format that uses a function to round the data, or maybe the ROUND option in PROC FORMAT.


It's not reasonable to expect the results from PROC MEANS and the data step to match across 50 million observations in this situation.

Super User


@CoolBoy wrote:

I am feeling excited to know, if someone can really understand and crack my question.


 We can't understand your question if you don't provide enough details.

For example, what does 'failure' mean? Did it error out or were the results not what you expected?


Are you rounding before summing the variables in the data step?

I don't think there's going to be a way to mimic that in Proc Means.




Opal | Level 21

This is probably related to precision of storing large numbers.


SAS can accurately store integers up to 2**53, which is about 9e15.  Decimal fractions can be problematic, even when the numbers are smaller.
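Because SAS numerics are 8-byte IEEE 754 doubles on common platforms, the same as a Python float, that integer limit can be demonstrated directly (an illustration, not SAS-specific code):

```python
# 2**53 is the limit below which every integer is exactly
# representable in an 8-byte double (about 9.0e15).
limit = float(2 ** 53)           # 9007199254740992.0

assert limit + 1.0 == limit      # adding 1 is silently lost
assert limit + 2.0 != limit      # the gap between doubles here is 2
```

Once a running sum passes this magnitude, small addends start disappearing entirely.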


Here, you are saying that 20M records produces the "right" result, but 50M records the "wrong" result.  Since you are calculating a sum, this leads me to believe that the sum of 50M values increases the sum to beyond what SAS can accurately store.  Internally, it is entirely possible that SAS uses different methods in PROC MEANS vs. a DATA step.  The addition process would be the same, but the order in which the numbers are added might be different.
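That order-dependence is easy to demonstrate with doubles (made-up values, not the poster's data):

```python
# The same three values, added in different orders, give
# different double-precision sums.
vals = [1e16, 1.0, 1.0]

forward = 0.0
for v in vals:
    forward += v               # 1e16 absorbs each 1.0 separately

backward = 0.0
for v in reversed(vals):
    backward += v              # 1.0 + 1.0 = 2.0 survives the big add

assert forward != backward
```

So even if PROC MEANS and the data step perform the same additions, a different accumulation order alone can produce different totals.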


By the way, how do you know which one is correct (DATA step vs. PROC MEANS)?  Or could they both be incorrect?  You can check this by getting the sum of batches:  first 20M records, next 20M records, final 10M records, and comparing their total to the 50M sum.
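The batch check can be sketched as follows (a Python stand-in; in SAS you could sum each batch with the FIRSTOBS= and OBS= dataset options). Here `math.fsum` serves as an exactly rounded reference to compare against:

```python
import math

# Made-up small data standing in for the 50M records.
vals = [0.1] * 1000

total = sum(vals)                 # one pass over everything

# Sum in batches, then sum the batch totals.
batch_totals = [sum(vals[i:i + 100]) for i in range(0, len(vals), 100)]
batched = sum(batch_totals)

exact = math.fsum(vals)           # correctly rounded reference sum

# total and batched typically disagree with exact in the last few
# digits; any such disagreement reveals accumulated rounding error.
```

If the batch totals don't reconcile with the single-pass sum, neither the DATA step nor PROC MEANS result should be trusted to the last decimal.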




