08-12-2013 02:40 AM
We are planning to migrate all the SAS programs on the mainframe to a Windows or UNIX server as part of a MIPS-savings effort, and we are currently in the estimation phase. Are there any specific things to be considered for the estimate? We believe it will be a straight lift-and-shift with changes to the input and output file access alone. Please let us know whether any significant items should be considered in the development effort.
08-12-2013 05:01 AM
It all depends on what the ultimate goal is in the longer term.
- Completely abandon the IBM mainframe? The hardware can be reused as a Unix server (SUSE).
- Some cooperation/rightsizing of parts of the processing, building up a mixture on Windows/Unix/mainframe?
- The reason is mostly that the data is already created and processed on the mainframe and some additional analytics is needed.
- SAS as a language is very often used in normal technical operational support, as glue within it. These SAS scripts are commonly very simple Base SAS programs, sometimes using dedicated features of z/OS. They cannot be migrated but have to be solved in another way.
- Scheduling, and resolving the related dependencies between jobs, is commonly available on z/OS.
It is not commonly available on Unix and Windows. Spreading the workload over different hardware and machines will need some signalling mechanism and a way to deal with logs and operations.
The MIPS saving indicates a license/cost issue. There are other possible approaches to achieve this, e.g. sub-capacity pricing.
The 3270 (terminal) approach to a mainframe may still be in use. Compared to what has become commodity (browsers, desktops) it is completely outdated, so replacing it is sensible. Another way of using the mainframe is running SAS jobs in batch. I do not expect new code is still being built that way; editing has become that much easier on Windows.
- Merrill Consultants' MXG is often used for analysing SMF. It is part of SAS/ITRM.
There are notes on running it on Windows/Unix while the SMF records still originate from z/OS.
- ASCII/EBCDIC and Unicode have different character orderings and some code-page issues. The not-sign of EBCDIC does not exist in ASCII, and there is more. (Normally little known; mostly not big problems.)
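As a small illustration of the ordering difference, here is a hypothetical Python sketch (not SAS code) using the cp037 US-EBCDIC codec:

```python
# Illustration of EBCDIC vs. ASCII differences, using Python's cp037
# (US EBCDIC) codec. All values here are standard code-page facts.

# The EBCDIC not-sign '¬' occupies byte 0x5F, which is '_' in ASCII,
# so naive byte-level transfers corrupt it.
assert "¬".encode("cp037") == b"\x5f"

# Collating order also differs: EBCDIC sorts lowercase < uppercase < digits,
# while ASCII sorts digits < uppercase < lowercase. PROC SORT output can
# therefore change order after migration.
chars = ["1", "A", "a"]
ascii_order = sorted(chars)                                    # ['1', 'A', 'a']
ebcdic_order = sorted(chars, key=lambda c: c.encode("cp037"))  # ['a', 'A', '1']
print(ascii_order, ebcdic_order)
```

Any program relying on a BY-group or sort order produced on z/OS should be checked after the move.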
- The VSAM dataset structure is rather unique to z/OS and difficult to migrate. QSAM types and SAS datasets are easier.
VBS and FB files are also different, having defined record lengths rather than the text-based line endings (CR/LF or LF) of Windows/Unix.
This may require binary downloads (do not let the transfer convert them) and having SAS translate everything while processing.
You will possibly need an EBCDIC viewer/editor on Windows (ASCII).
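A minimal sketch of what "binary download, then translate while processing" means, shown in Python rather than SAS; the 80-byte LRECL and the cp037 code page are illustrative assumptions, and in real SAS the INFILE statement options would do this work:

```python
import io

# Sketch: reading a binary (unconverted) mainframe download of fixed-length
# FB records. There are no CR/LF line endings; each record is exactly LRECL
# bytes of EBCDIC. LRECL=80 and cp037 are assumptions for this example.
LRECL = 80

def read_fb_records(stream, lrecl=LRECL, codepage="cp037"):
    """Yield each fixed-length record, decoded from EBCDIC to text."""
    while True:
        rec = stream.read(lrecl)
        if len(rec) < lrecl:   # stop at end of file (or a short last record)
            break
        yield rec.decode(codepage).rstrip()

# Simulate a two-record FB file that was downloaded in binary mode:
raw = b"".join(text.ljust(LRECL).encode("cp037")
               for text in ("HELLO MAINFRAME", "SECOND RECORD"))
records = list(read_fb_records(io.BytesIO(raw)))
print(records)  # ['HELLO MAINFRAME', 'SECOND RECORD']
```

The point is that record boundaries come from the declared length, not from delimiter characters, which is why a text-mode FTP transfer destroys such files.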
- DB2 is well integrated into a mainframe. On Windows and Unix the security approach is completely different.
On Unix, building a well-designed security concept is mostly badly understood; there is no good central security concept.
- If you have SAS/CONNECT it would be easy to migrate to a different machine environment.
You can let different machines cooperate easily while also moving code and data, and by that have a smooth transition path.
- A SAS BI/DI environment with scheduling (LSF) would be a nice mainframe replacement.
But it will require new advanced skills and staff.
- The Enterprise Guide usage approach could be something as a new user interface. SAS Institute promotes this one very often.
08-12-2013 08:54 AM
We are planning to completely move away from the mainframe: COBOL to Micro Focus, SAS to a UNIX environment.
We would also like to know whether UNIX will be able to handle the same volume of records that is currently processed on the mainframe (around 1.3 billion). Can this be addressed by adding additional space to the UNIX server? We saw a few industry studies saying it should be able to, but we need some confirmation from the SAS experts too.
08-12-2013 09:39 AM
Those are easier questions. There are several factors for performance:
- CPU -
This resource is not really improving in clock speed anymore (2-3 GHz has been typical for the last 5 years or more).
It is moving to multiprocessors. There is no real difference between the environments here.
- Memory -
This resource is still improving, and fast. For commodity hardware a lot of figures can be found on it.
- IO -
This is where classic z/OS, with its 3390 virtualization (ca. 9 GB/volume), has issues.
To overcome some of them, SAS advises using HFS, which is based on the Unix services within z/OS. SAS/CONNECT is another part of SAS that needs the Unix parts, and IBM is doing the same.
In real Unix environments, IO/DASD is organized completely differently: mount points, filesystems, etc., tied to the owner/group/world security model. The performance of a Unix system needs good design and tuning in this area.
For comparing some hardware see: http://www.tpc.org/ A machine equipped with 1 TB of memory and 64 CPUs is not unusual.
Designing the IO part will be the hardest; see: http://support.sas.com/rnd/papers/sgf07/sgf2007-iosubsystem.pdf
Segregation of all areas critical to performance (like SASWORK) is easily possible.
Documents can sometimes be found at the hardware suppliers. IBM is very good at this for AIX: IBM Techdocs White Paper: SAS AIX 5L, AIX 6 and AIX 7 Tuning Guide. Red Hat also has something: http://www.redhat.com/f/pdf/RH_SAS_GridComputing_web.pdf
My experience is that processing some TBs of data, using files of many GBs, is no problem. The smaller datasets (ca. 1 GB) originally downloaded from the mainframe, where they had also run, showed the same turnaround times (several years ago).
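As a back-of-envelope check on the 1.3 billion records asked about above (the 200-byte average record length is an assumed figure for illustration; substitute your real observed length):

```python
# Rough sizing estimate for the migrated data volume.
# 200 bytes/record is an assumption, not a figure from the thread.
records = 1_300_000_000
avg_record_bytes = 200  # assumed average record length

raw_gb = records * avg_record_bytes / 1024**3
print(f"uncompressed: ~{raw_gb:,.0f} GB")  # roughly 242 GB

# SAS dataset compression (COMPRESS=YES or COMPRESS=BINARY) often shrinks
# this further, so a few hundred GB of well-designed Unix storage is a
# plausible starting point; the IO throughput design matters more than
# the raw capacity.
```

So the volume itself is well within what the TB-scale experience above covers; the question is whether the IO subsystem can stream it fast enough.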
I have seen a lot of SAS usage by others, with their environments described, in the SAS forums.
Still, it is nice to see more from others.