Well, the time depends on many things: whether the file is on a local or a remote drive, the hardware and software resources available, your code, and the state of the file (compressed or uncompressed, and so on). And you are reading 40 million records, and each record could contain many variables.
It is difficult to give any suggestion specific to your case.
@Sirisha1520 wrote:
It's a mainframe job using a PS file with a few character and date columns, around 15 variables, and a record length of 80.
What slice of the machine did your job get? Maybe it was busy doing something else that had higher priority than you.
Please use units that the rest of the world understands. Wikipedia told me that a "crore" is 10 million (10 mega).
So you are reading 40 million lines from a mainframe sequential (PS) dataset, which means that it took one CPU second to process ~150,000 rows. Not bad at first glance.
Looking deeper, 80 bytes per row means a raw data throughput of 11.5 MB/s, which is not really fast.
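Spelling out the arithmetic: 150,000 rows/s × 80 bytes/row = 12,000,000 bytes/s, which is roughly 11.5 MB/s.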
Please run the import step again, with
options fullstimer;
set, and then post the complete log, so we can see any transformations you make and the real time/CPU time ratio.
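For example (a minimal sketch only; the fileref MYFILE and the variable names and informats are assumptions, since we have not seen your code):

options fullstimer;

data work.want;
    /* MYFILE is a hypothetical fileref / DD name; replace it and
       the INPUT statement with your actual file and layout. */
    infile myfile;
    input acct_id  $char10.
          name     $char20.
          open_dt  yymmdd10.;
    format open_dt date9.;
run;

With FULLSTIMER set, the log shows real time, user CPU time, system CPU time and memory for each step, which makes it easy to see whether the step is I/O-bound (real time much larger than CPU time) or CPU-bound.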