Hi mjack,

That some confusion arises is understandable. Sometimes I hop around in my mind and go too fast. On other occasions we use the same word but with a different intention/meaning. Using different words for the same intention is part of communication and does not always clear things up.

The words used with SAS DI are: transformations, jobs, packages.
- A transformation (SAS(R) Data Integration Studio 4.9: User's Guide) is how the data logic/process is defined.
- Jobs (SAS(R) Data Integration Studio 4.9: User's Guide) are just the generated SAS code.
- Packages / deploy (SAS(R) Data Integration Studio 4.9: User's Guide) is getting that SAS code to operations.

In a SAS environment I would prefer to use these words with these meanings. Using other tools like SSIS, I see that what SAS calls a transformation MSFT has called a job, and what SAS calls a job MSFT has called a package. No, the package concept I did not find at MSFT. No wonder this is all confusing: knowing one tool and going into another, all the words/meanings are different.

Going into an operational scheduling environment, the word job (and application) is also used. In that world a job is the script that is run to do some processing; it can consist of several steps calling some code.

The DI jobs are SAS code. That SAS code (in this sense a kind of executable) should be able to run unchanged on a new release or a new machine. In a release-management approach, going from development to test, acceptance, and production follows the same principle. The requirement for this is no hard-coded physical names in the code. For me it is something like keeping on the right side: you do not develop in production (regulations). For people living in the trial-and-error approach of building code this looks strange. The ideal (DI SAS jobs, *.sas files): having them tested, approved, and validated, you have exactly the same code/version being used at every stage.
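A minimal sketch of what "no hard-coded physical names" can look like in practice. All names and paths here are hypothetical; the point is only that each environment sets its physical location once (for example in its autoexec or via metadata), so the deployed job code is byte-identical in every stage:

```sas
/* Sketch only: libref, macro variable, and paths are hypothetical.  */

/* In the environment-specific autoexec (different per stage):       */
%let etl_root = /data/prod;   /* dev: /data/dev, test: /data/test, ... */
libname stagelib "&etl_root./staging";

/* In the deployed DI job (identical in every environment):          */
data stagelib.customers_clean;
   set stagelib.customers_raw;
   where not missing(customer_id);
run;
```

Because the job only ever references the libref, promoting it from development to production is a pure copy, with no edits and no regeneration.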
When the developer signs his code/executable, a checksum/hash will deliver exactly the same value in all environments. No recreation or other code/job changes are allowed.

With SPD clusters I would ask: is this an SPD Server based approach or a libname SPD (SPD Engine) approach? For the long term I think it should either be replaced by one of the many Hadoop implementations, or, if the hardware becomes that much better while your information needs do not increase, you can do this with more common parts. But seeing what is new with 5.1, there are also very good reasons to keep it that way. Going into the SPD libname engine: SAS(R) 9.4 SPD Engine: Storing Data in the Hadoop Distributed File System, Second Edition. Hmmm.

SPD Server approach: SAS Scalable Performance Data Server (5.1). There will be a need for conversion; see SAS(R) Scalable Performance Data Server 5.1: Administrator's Guide, Second Edition, and this looks like a SAS metadata binding (version dependent).

SPD libname approach: SAS(R) 9.4 Scalable Performance Data Engine: Reference, Second Edition. There are new options, but none of them is an incompatibility for migration, see: SAS(R) 9.4 Scalable Performance Data Engine: Reference, Second Edition.
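To make the SPD Server versus SPD libname distinction concrete, here is a sketch of the two libname statements side by side. Host name, port, domain, and paths are hypothetical:

```sas
/* Sketch only: host, port, domain, and paths are hypothetical.      */

/* SPD Server approach: a separate server product, reached through   */
/* the SASSPDS engine (client/server, with its own administration).  */
libname spdsrv sasspds 'mydomain'
        host='spdshost' serv='5400'
        user='sasdemo' prompt=yes;

/* SPD libname (SPD Engine) approach: no separate server; the SPDE   */
/* engine reads and writes the partitioned files directly.           */
libname spdeng spde '/spde/meta'
        datapath=('/spde/data1' '/spde/data2')
        indexpath=('/spde/index');
```

The first depends on a running SPD Server instance (hence the conversion and version questions above); the second is only an engine choice in the SAS session.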