AndrewHowell
Moderator

We have a site with multiple, separate Dev environments (different departments, different solutions, different release cycles; some Dev environments are permanent, others are fired up only when required). These separate Dev environments DO share some metadata-defined resources (database tables, in particular).

 

In each environment, changes are made using DI Studio (check-out/check-in via project repositories, etc.).

 

Currently, SAS packages are exported from each Dev environment, and carefully (with lots of manual checking) imported/merged into a Test environment.
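Something like the following batch-tool sketch (the SAS Management Console export/import wizards would be the equivalent); the host names, credentials, folder paths and object names here are made up, and exact options may differ by release:

# Export a changed DI job, and its dependent objects, from one Dev environment
ExportPackage -host dev1-meta.example.com -port 8561 \
    -user sasadm@saspw -password "********" \
    -package /tmp/dev1_changes.spk \
    -objects "/Shared Data/Jobs/Load_Customer(Job)" \
    -includeDep \
    -log /tmp/dev1_export.log

# After the manual checks, import the package into the Test metadata server
ImportPackage -host test-meta.example.com -port 8561 \
    -user sasadm@saspw -password "********" \
    -package /tmp/dev1_changes.spk \
    -target "/Shared Data/Jobs" \
    -log /tmp/test_import.log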

 

Now, SAS Metadata can manage metadata objects (properties, associations, etc) within any one environment, but not across multiple environments.

 

Scenarios:

  • Modify a table metadata definition and/or a DI job (which may also change table definitions) in one Dev platform
  • Modify a parameterised stored process (which is not metadata-associated with any specific table and/or job) in one Dev platform
  • Probably others, but they are the main ones
  • Wondering about security models (ACTs, user groups, etc)

Issues:

  • How to perform impact analysis across the other Dev platforms.
  • How to merge changes from the different Dev platforms into a single Test environment.

 

Mandating a linear SDLC (one shared Dev, promote to Test, promote to Prod) is not an option.

 

A (non-SAS) suggestion has been made: export every metadata object (there are many!) as a separate SPK package, check them all into GitHub, and use Git to compare each package's XML against what is already committed.
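As I understand it, the mechanics would be roughly as follows (a sketch only: the object path, repository layout and file names are made up, and it assumes an SPK can be unpacked as a ZIP archive so Git can diff the XML inside it):

# Export a single object, on its own, as its own SPK (no dependencies)
ExportPackage -host dev1-meta.example.com -port 8561 \
    -user sasadm@saspw -password "********" \
    -package ./spk/Load_Customer.spk \
    -objects "/Shared Data/Jobs/Load_Customer(Job)"

# Unpack the SPK so the metadata XML is visible to Git
unzip -o ./spk/Load_Customer.spk -d ./git-work/Load_Customer

# Compare against what is already checked in, then commit the change
cd ./git-work
git diff Load_Customer
git add Load_Customer
git commit -m "Dev1: Load_Customer job metadata"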

 

I'm uncertain whether this suggestion includes each object's dependencies (I'm presuming not, as the aim is to break the multi-platform solutions into their individual objects and check each one in/out independently).

 

I am NOT a Git expert (in fact, I'm a novice), but I'm having difficulty seeing how this is viable. One concern I have (feel free to add more, or correct me): the more granularly (atomically?) we export each object (table, job, etc.), the more we lose the relationships between the objects.

 

That said, I cannot come up with a thorough-enough "SAS" solution for managing change across multiple Dev environments, either.

 

Constructive suggestions would be appreciated.

 

4 REPLIES
ronan
Lapis Lazuli | Level 10

@AndrewHowell wrote:

     

    ... 

    Scenarios:

    • Modify a table metadata definition and/or a DI job (which may also change table definitions) in one Dev platform
    • Modify a parameterised stored process (which is not metadata-associated with any specific table and/or job) in one Dev platform
    • Probably others, but they are the main ones
    • Wondering about security models (ACTs, user groups, etc)

    Issues:

    • How to determine impact analysis on other Dev platforms.
    • How to merge changes from different Dev platforms into a Test environment. 

 

Interesting case, thanks for sharing!

 

 

I would suggest, perhaps, duplicating the multiple Dev SAS metadata repositories into a single instance containing several custom repositories, one CopyDev_n (CopyDev_1, CopyDev_2, etc.) synchronised with each Dev repository. There might be integrity issues with specific object types, such as Libraries (if the multiple Dev_n environments share identically named Libraries) or Generated Transformations (a very specific kind of object as regards integrity rules).

The automatic synchronisation processes would then rely on rules for applying the changes, rules that must be made explicit so it is clear which change ultimately prevails.

Alternatively, this process could be done manually at regular intervals, so that a "Data steward / Data team" decides how to reconcile the changes.

 

Once the changes have been accepted, they would be applied back to each impacted Dev repository and then safely exported to a single Test environment, I suppose.

 

The process would look like this:

 

1) Dev_1 => { CopyDev_1, ..., CopyDev_n } => merge and reconciliation
   Dev_2 => { CopyDev_1, ..., CopyDev_n } => merge and reconciliation
   ...
   Dev_n => { CopyDev_1, ..., CopyDev_n } => merge and reconciliation

2) Evaluate the impact from { CopyDev_1, ..., CopyDev_n }

3) If necessary, apply the changes into the several custom repos { CopyDev_1, ..., CopyDev_n }

4) Export an SPK with the changes from each impacted CopyDev_i => Dev_i
   Export an SPK with the changes from each impacted CopyDev_k => Dev_k

5) Export the changes from Dev to Test

 

This isn't very straightforward, but at first thought it's the only practical solution I could come up with.

 

 

 


AndrewHowell
Moderator

Thanks @ronan.

 

I'm feeling the challenge: I was invited to present at last year's SGF specifically on managing the SDLC across SAS environments and within a single environment, although I had presumed a linear Dev-to-Test-to-Prod scenario. I hadn't considered how to manage parallel development across multiple Dev environments, and this is the first time I've had to deal with it at a client site.

 

As you indicated, my first thought is that the only way for this to work is to ensure all the Dev metadata repositories are kept in sync.

 

There will be changes underway in each Dev environment, but presumably these will be checked out into project repositories, so (in theory) any metadata not checked out should be identical in each environment.

 

So now I'm looking at:

  • how to compare metadata packages & report on differences (see the rough sketch below)
  • how to flag if an object in a package is checked out in one of the Dev environments.
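For the first point, my rough (novice) thinking is something like the sketch below; it assumes an SPK can be unpacked as a ZIP archive, and the file names are made up:

# Unpack the "same" object's package as exported from two Dev environments
unzip -o dev1_Load_Customer.spk -d compare/dev1
unzip -o dev2_Load_Customer.spk -d compare/dev2

# Diff the unpacked metadata XML without needing a Git repository
git diff --no-index compare/dev1 compare/dev2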

Hope this kicks off some good discussions.

 

ronan
Lapis Lazuli | Level 10



@AndrewHowell wrote:

 

... 

So now I'm looking at:

  • how to compare metadata packages & report on differences
  • how to flag if an object in a package is checked out in one of the Dev environments.

Hope this kicks off some good discussions.

 

You can use the ImportPackage batch tool with the -noexecute option: it prints out the SPK's contents but, unfortunately, not at the most granular level. For instance, table columns might not be displayed, only the table names. Each entry in the listing looks like:

 

ObjectType
/folder_path/ObjectName
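For example (just a sketch: the hosts, paths and log layout are assumptions to check against your own deployment):

# List the SPK's contents against a metadata server without importing anything
ImportPackage -host dev2-meta.example.com -port 8561 \
    -user sasadm@saspw -password "********" \
    -package /tmp/dev1_changes.spk \
    -noexecute \
    -log /tmp/dev1_changes_contents.log

# Keep the /folder_path/ObjectName lines (adjust the pattern if your log
# layout differs), produce the same listing for the other package, then diff
grep "^/" /tmp/dev1_changes_contents.log > dev1_objects.txt
grep "^/" /tmp/dev2_changes_contents.log > dev2_objects.txt
diff dev1_objects.txt dev2_objects.txt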

 

With such a report, you could then run a query against each Dev project repository to look for any check-out status: a little time-consuming, perhaps, with a large SPK.

AndrewHowell
Moderator

-noexecute: good to know; it produces a report without making any changes.

 

I'll try & let you know. Thanks.


