mjack
Calcite | Level 5

Hi Experts,

While planning a version upgrade from SAS 9.1 to SAS 9.4, I assume the following for programs/metadata/applications/jobs:
1) All the ETL jobs will have to be compiled again and tested in the TEST environment and then moved to production.
2) All Information Maps can be migrated directly (or do they need to be created again?).
3) All SAS Web Report Studio reports can also be migrated directly (or do they need to be created again?).
4) The table data can also be migrated directly?

Can some expert suggest an approach to take with programs/metadata/applications/jobs/data during a version upgrade?

Thks

4 REPLIES
jakarman
Barite | Level 11

Migration questions and actions:

0/ SAS code should run unchanged between 9.1 and 9.4.

    Verify the correctness of this assumption; sometimes enhancements, updates, or fixes cause minor but catastrophic problems.

    Try to define a representative set of SAS code that you can use as a regression test.
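    A minimal sketch of such a regression test, assuming you keep output from the 9.1 run as a baseline (the library paths and data set name here are hypothetical):

        /* Baseline output saved from the 9.1 run, and the 9.4 rerun output. */
        libname base91 '/regress/sas91_out';
        libname new94  '/regress/sas94_out';

        /* Flag any value that differs by more than rounding noise. */
        proc compare base=base91.monthly_totals
                     compare=new94.monthly_totals
                     criterion=1e-12;
        run;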

1/ ETL jobs are SAS code. They should be able to run unchanged.

   Your SAS metadata is the source environment, and the generated SAS code is like an executable.

   Knowing this, you need a defined release management approach using metadata that supports the Develop, Test, Acceptance, Production stages (DTAP).

   Be aware that versioning tools are intended only for developers; version management is different from release management.

2/ Information Maps are part of metadata.

    The same point applies: you need a defined release management approach using metadata.

3/ Web Report Studio uses metadata, and I expect the stored processes it uses are part of metadata as well. Report content is stored in the content server.

    The same point applies: you need a defined release management approach using metadata.

    For the content server you should have a backup/restore process in place as an operational service.

4/ Migrating data can be necessary. SAS data sets in their native format are release dependent. As a foreign format you can still access most SAS data sets, but some functionality is missing, aside from the performance penalty.
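   A hedged sketch with hypothetical paths: a 9.4 session can read the old library through cross-release access, and PROC COPY then rewrites the members in the new session's native format.

       libname old '/sas91/data';   /* data sets written by 9.1          */
       libname new '/sas94/data';   /* rewritten here in 9.4 native form */
       proc copy in=old out=new memtype=data;
       run;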

SAS catalogs always need a conversion. This can be done using SAS/CONNECT or SAS/SHARE (hidden inside PROC MIGRATE); a sketch follows after the next point.

There are also some binary member types for which there is no migration support; these will need a "recreation" action.
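A hedged sketch of that PROC MIGRATE route, roughly as the migration documentation describes it; the session id, connection setup, and paths are assumptions. The remote session on the old system serves the source library so catalogs can be converted across releases and architectures:

    options comamid=tcp;
    signon oldhost;                     /* 9.1 session on the old machine   */
    rsubmit oldhost;
       libname src '/sas91/data';      /* source library on the old system */
    endrsubmit;
    libname src slibref=src server=oldhost;
    libname tgt '/sas94/data';
    proc migrate in=src out=tgt;       /* converts catalogs as well        */
    run;
    signoff oldhost;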

5/ If you have release management in place:

-  The a/ Development and Test (D,T), b/ Acceptance (A), and c/ Production (P) environments are segregated machines that are migrated one after another.

   The timing should be planned within several weeks. Rolling out new SAS applications (ETL, WRS usage, and users) during that window can be a problem, as it would mean going from a newer version to an older one. The D,T machine, migrated first, will deliver experience for the next one; A is the one to be checked by your users; P should go perfectly.

- The release management process should support deployment of SAS applications (ETL, WRS usage, and users) going through the DTAP stages for ETL/WRS. This will need some special attention.

6/ For migration there is a tool, the SAS Migration Utility (SMU); see the SAS(R) 9.4 Intelligence Platform: Migration Guide. It will support most of the technical work.

---->-- ja karman --<-----
mjack
Calcite | Level 5

Thanks Jaap, for your reply.

There is some confusion: you mentioned that 'the code should move unchanged', and you also mentioned the need for 'release management'.

There are ETL jobs, and then there are executables derived from them. From 'the code should move unchanged', what I understand is that the jobs will not require rerunning to generate executables, so both the jobs and their executables move as they are (unchanged)?

From 'need for release management', my understanding is that all the jobs will have to be rerun (just rerun, with no additions/modifications/removals anywhere), their executables tested (in the new version), and then migrated to production (new version)?

Also, there will be huge data volumes in SPD clusters. Do you think the data structures for all clusters/tables require recreation and reloading, or are they just moved to the new environment?


Thks

ballardw
Super User

An additional element is to make sure you have code to remake any permanent format catalogs, depending on your environment. Your old ones may not be usable if you are going from a 32-bit to a 64-bit version of SAS.
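If the original PROC FORMAT source code is gone, one common approach (library names and paths here are assumptions) is to unload the old catalog to a data set and rebuild the catalog on the new system:

    /* On the old 32-bit system: dump the catalog to a plain data set. */
    libname oldfmt '/sas91/formats';
    proc format library=oldfmt cntlout=work.fmtdefs;
    run;

    /* Move fmtdefs like any data set, then on the new 64-bit system: */
    libname newfmt '/sas94/formats';
    proc format library=newfmt cntlin=work.fmtdefs;
    run;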

jakarman
Barite | Level 11

Hi mjack, that some confusion arose is understandable. Sometimes I hip-hop around in my own mind and go too fast. On other occasions we use a word with a different intention/meaning; using other words for the same intention is part of communication and does not always clear things up.

The words used with SAS DI are transformations, jobs, and packages; see the SAS(R) Data Integration Studio 4.9: User's Guide.

- A transformation (SAS(R) Data Integration Studio 4.9: User's Guide) defines how the data logic/processing is done.

- Jobs (SAS(R) Data Integration Studio 4.9: User's Guide) are just the generated SAS code.

- Packages / deployment (SAS(R) Data Integration Studio 4.9: User's Guide) get the SAS code to operations.

In a SAS environment I would prefer to use these words with these meanings.

Using other tools like SSIS, I see that what SAS calls a transformation, MSFT calls a job, and what SAS calls a job, MSFT calls a package. (No, the package concept I did not find at MSFT.) No wonder this is all confusing: knowing one tool and going into another, all the words/meanings are different.

In an operational scheduling environment the words job (and application) are also used. In that world a job is the script that is run to do some processing; it can consist of several steps calling some code.

The DI jobs are SAS code, and SAS code (in this sense a kind of executable) should be able to run unchanged on the new release and the new machine. In a release management approach going from Develop to Test, Acceptance, and Production, it is the same idea. The requirement for this is having no hard-coded physical names in the code; for me it is something like keeping on the right side of the road. You do not develop in production (regulations). For people living in the trial-and-error approach of building code this looks strange.
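A hypothetical illustration of that rule: let each environment's autoexec set a root location, so exactly the same job code runs unchanged through D, T, A, and P (the macro variable name and paths are assumptions):

    %let envroot=/sas/prod;               /* set per machine, e.g. in the autoexec */
    libname staging "&envroot/staging";   /* portable across DTAP                  */
    /* avoid: libname staging '/sas/dev/staging';  ties the job to one machine */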

The ideal (DI SAS jobs, *.sas files): having them tested, approved, and validated, you have exactly the same code/version in use at every stage. If the developer were to sign his code/executable, a checksum hash would deliver exactly the same value in all environments. No recreation or other code/job changes are allowed.

With SPD clusters I would ask: is it an SPD Server based approach or a libname SPD Engine approach?
For the long term I think it should either be replaced by one of the many Hadoop implementations, or, if the hardware becomes that much better while your information needs do not increase, you can do this with more common parts. But seeing what is new with 5.1, there are also very good reasons to keep going that way.

Going the SPD libname direction, see: SAS(R) 9.4 SPD Engine: Storing Data in the Hadoop Distributed File System, Second Edition. Hmmmm.


SPD Server approach: SAS Scalable Performance Data Server (5.1).
   There will be a need for conversion; see the SAS(R) Scalable Performance Data Server 5.1: Administrator's Guide, Second Edition.

   The same guide also describes what looks like a SAS metadata binding (version dependent).


SPD libname approach: SAS(R) 9.4 Scalable Performance Data Engine: Reference, Second Edition.
  There are new options, but none that are incompatible for migration; see the same reference.
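  A minimal SPD Engine libname sketch (paths are assumptions); existing SPDE tables and the familiar options carry over, and the new 9.4 options are additive:

      libname spdlib spde '/spde/meta'
         datapath=('/spde/data1' '/spde/data2')
         indexpath=('/spde/index');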

---->-- ja karman --<-----
