I am excited to see if someone can really understand and crack my question.
proc means data=ghy1;
  var wew;
  output sum(wew)=wev;
run;
The above delivers the correct sum for up to 20 million records, but it fails for 50 million records.
I am seeing a slight deviation from the sum calculated in a data step (where the decimals were rounded using the ROUND function).
Is there any option to tell PROC MEANS to round the decimals when summing?
Really happy if someone cracks this...
PROC MEANS supports an option, MAXDEC, to control the maximum number of decimals displayed.
You may want something like
proc means data=ghy1 maxdec=2;
  var wew;
  output sum(wew)=wev;
run;
BUT if your data step rounded before accumulating, then you introduced a lot of difference, likely some on each record.
We would have to see your data step code to give a better explanation of the differences.
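To see how much per-record rounding can matter, here is a quick sketch in Python rather than SAS (Python floats are the same IEEE 754 doubles); the values are made up for the illustration:

```python
# Hypothetical values standing in for the WEW column; each carries
# sub-cent detail that per-record rounding throws away.
vals = [0.004] * 100

# Round each record before accumulating (what a rounding data step does):
# every 0.004 rounds down to 0.00, so nothing accumulates.
per_record = sum(round(v, 2) for v in vals)

# Accumulate first, then round the total once at the end.
final = round(sum(vals), 2)

print(per_record, final)  # 0.0 vs 0.4
```

The two answers differ by the entire sum, even though each individual rounding step looked harmless. Across 50 million records the drift is effectively guaranteed.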
Sum by PROC MEANS
proc means data=ghy1;
  var wew;
  output sum(wew)=wev;
run;
Sum by data step
data ghy1;
  infile xxx;
  input yyyy;
  wev = round(wev, .0001) + wew;
run;
The MAXDEC option doesn't have any effect on the SUM (or any other statistic); it only affects how many decimals are printed.
You're rounding the data before you do the sum. The only way I can think of to match this is with a custom format that uses a function to round the data, or maybe the ROUND option in PROC FORMAT.
It's not a reasonable expectation to have the results from PROC MEANS and the data step match across 50 million observations in this situation.
@CoolBoy wrote:
I am feeling excited to know, if someone can really understand and crack my question.
We can't understand your question if you don't provide enough details.
For example, what does 'failure' mean? Did it error out or were the results not what you expected?
Are you rounding before summing the variables in the data step?
I don't think there's going to be a way to mimic that in Proc Means.
This is probably related to precision of storing large numbers.
SAS can accurately store integers up to around 1e15 or 1e16. Decimal fractions can be problematic, even when the numbers are smaller.
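That integer limit is the 2**53 boundary of double-precision floats, and it is easy to demonstrate. This sketch uses Python, whose floats are the same IEEE 754 doubles SAS stores on most platforms:

```python
# Below 2**53 every integer is exactly representable as a double;
# at 2**53 the gap between adjacent doubles widens to 2.
limit = float(2 ** 53)  # 9_007_199_254_740_992, roughly 9e15

print(limit + 1.0 == limit)          # True: 2**53 + 1 cannot be represented
print((limit - 1.0) + 1.0 == limit)  # True: below the limit, still exact
```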
Here, you are saying that 20M records produces the "right" result, but 50M records the "wrong" result. Since you are calculating a sum, this leads me to believe that the sum of 50M values increases the sum to beyond what SAS can accurately store. Internally, it is entirely possible that SAS uses different methods in PROC MEANS vs. a DATA step. The addition process would be the same, but the order in which the numbers are added might be different.
By the way, how do you know which one is correct (DATA step vs. PROC MEANS)? Or could they both be incorrect? You can check this by getting the sum of batches: first 20M records, next 20M records, final 10M records, and comparing their total to the 50M sum.
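The order-of-addition point and the batch check above can both be demonstrated with doubles; again this is Python rather than SAS, and the values are invented for the illustration:

```python
import math

# Order matters: a small value added to a huge running total is lost.
big, small = 1e16, 1.0
print((big + small) - big)   # 0.0 -- the 1.0 vanished
print(small + (big - big))   # 1.0 -- same numbers, different order

# Batch check: sum in chunks and compare against a one-pass total.
vals = [0.1] * 1_000_000
one_pass = sum(vals)
batches = sum(sum(vals[i:i + 100_000]) for i in range(0, len(vals), 100_000))

# math.fsum tracks the lost low-order bits and returns the correctly
# rounded sum, which here is exactly 100000.0.
compensated = math.fsum(vals)
print(one_pass, batches, compensated)
```

If the batch totals don't add up to the one-pass total, neither figure can simply be trusted as "the" correct sum; a compensated summation (what `math.fsum` does) is the reference to compare both against.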