Has anyone automated a Metadata sync process between a PROD linux environment and a "cold" bcp linux environment on linux?

Occasional Contributor
Posts: 5


We have created a number of ExportPackage jobs that run on our PROD server.  The packages are replicated to our BCP server, where we have created a corresponding set of ImportPackage jobs.  Both servers run Linux.  We would like to use cron to schedule these processes on a weekly basis, but we are not having any luck running ExportPackage/ImportPackage in true background mode.

Script Example -

#!/bin/ksh

sasdir=/sas/share/SASPlatformObjectFramework/9.3
packdir=/users/apps/stg/superbid/metadata_migration
saslogdir=/users/apps/stg/superbid/metadata_migration


$sasdir/ExportPackage -host xxxxxxx -port xxxx -user xxxxxxx -password xxxxxx -package $packdir/SASGrid1_ACT_export_package.spk -objects "/System/Security/Access Control Templates(Folder)" -subprop -log $saslogdir/SAS_BATCH_EXPORT_ACT.log -since "Month to date"
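A common reason batch tools refuse to run under cron is that they try to read from a terminal after cron's short-lived session ends. A sketch of a cron-friendly wrapper around the script above; the stdin/stdout redirections and the weekly schedule are my assumptions, not a SAS-documented requirement:

```shell
#!/bin/ksh
# run_export.ksh - wrapper so ExportPackage runs fully detached under cron.
# Paths and connection values are taken from the original script.
sasdir=/sas/share/SASPlatformObjectFramework/9.3
packdir=/users/apps/stg/superbid/metadata_migration
saslogdir=$packdir

# </dev/null keeps the tool from blocking on terminal input;
# nohup keeps it alive after cron's session goes away.
if [ -x "$sasdir/ExportPackage" ]; then
    nohup "$sasdir/ExportPackage" -host xxxxxxx -port xxxx \
        -user xxxxxxx -password xxxxxx \
        -package "$packdir/SASGrid1_ACT_export_package.spk" \
        -objects "/System/Security/Access Control Templates(Folder)" \
        -subprop -log "$saslogdir/SAS_BATCH_EXPORT_ACT.log" \
        -since "Month to date" </dev/null >"$saslogdir/export_stdout.log" 2>&1
fi

# crontab entry (assumed schedule: Sundays at 02:00):
# 0 2 * * 0 /users/apps/stg/superbid/metadata_migration/run_export.ksh
```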

Super User
Posts: 3,104

Re: Has anyone automated a Metadata sync process between a PROD linux environment and a "cold" bcp linux environment on linux?

If you are syncing SAS metadata between SAS metadata servers that run the same SAS version on the same operating system, then you can simply file-copy the metadata folders from one to the other without exporting/importing.
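If you go that route, the file copy itself could be scripted along these lines. The paths and host name are examples only, and the metadata server should be paused or stopped during the copy so the repository files on disk are in a consistent state:

```shell
#!/bin/sh
# Hypothetical file-level sync of the metadata repository directory
# from PROD to BCP. Pause/stop the metadata server before copying.
SRC=${1:-/sas/config/Lev1/SASMeta/MetadataServer/}
DEST=${2:-bcpserver:/sas/config/Lev1/SASMeta/MetadataServer/}

# -a preserves ownership/permissions/times;
# --delete removes files that are gone on PROD.
if [ -d "$SRC" ]; then
    rsync -a --delete "$SRC" "$DEST"
fi
```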

Super User
Posts: 6,936

Re: Has anyone automated a Metadata sync process between a PROD linux environment and a "cold" bcp linux environment on linux?

Our solution for the backup server (AIX) is this:

The SAS install directory (/usr/local/SAS) and /home (which contains, among other things, the home directory of the SAS install user) are mounted from the SAN.

Using the AIX LVM, RAID1 is set up so that both datacenters have a complete mirror of all SAN data.

On the backup server, scripts are present that allow the server to "switch personalities" and become the main server.

In case of emergency, the backup server changes name and IP, mounts the SAN volumes and edits (via awk) all the start files (inittab, rc). Then either a script starts all the services, or the server is simply rebooted.

Things that cannot be mounted from the SAN (like /etc) have their necessary files (mainly user and group files) copied over daily.
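The "switch personalities" step might look roughly like this. The host names, interface, and file names are hypothetical, and the platform-specific identity commands are shown only as comments:

```shell
#!/bin/sh
# takeover.sh - hypothetical sketch of the backup server assuming the
# production identity. Names below are examples, not the poster's values.
BACKUP_HOST=sasbcp
PROD_HOST=sasprod

# 1. assume the production name and IP (platform-specific; AIX shown):
# hostname "$PROD_HOST"
# ifconfig en0 10.0.0.10 netmask 255.255.255.0

# 2. mount the mirrored SAN volumes:
# mount /usr/local/SAS
# mount /home

# 3. rewrite a start file so services come up under the new identity;
#    the file is a parameter so the same logic applies to inittab, rc, etc.
rcfile=${1:-/tmp/rc.sas.example}
if [ -f "$rcfile" ]; then
    awk -v old="$BACKUP_HOST" -v new="$PROD_HOST" \
        '{ gsub(old, new); print }' "$rcfile" > "$rcfile.new" \
        && mv "$rcfile.new" "$rcfile"
fi
```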

That way, there is only one metadata repository that is always up to date, and no copy is necessary.

I don't know that much about Linux, but I think a similar setup should be possible.

---------------------------------------------------------------------------------------------
Maxims of Maximally Efficient SAS Programmers
Valued Guide
Posts: 3,208

Re: Has anyone automated a Metadata sync process between a PROD linux environment and a "cold" bcp linux environment on linux?

BCP and cold: this is a DR (Disaster Recovery) question. A DR implementation is a common requirement and can be solved in several ways.

DR Using VMs:
I would suppose a virtualized machine could do that. http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100093...
When your data is on a SAN this is less trivial, as you also need that data at the DR site. A SAN should offer mirroring to another one.

DR Using mirrors/clones:
In this case you have an exact copy of the machine (cold) that can be activated when needed.
Because the machines are cold, you cannot use anything that requires running services; the contradiction is that importing metadata requires a running metadata server.
A full mirror of all data (business, installation, and configuration, e.g. using SAN features) is the way to go.
There are several attention points:
- Connections to other machines must be included. External connections should be faked, making sure you do not actually connect to them.
- Open datasets can possibly get corrupted; there must be a plan to tackle that. The SAS metadata server is an in-memory process that does not necessarily have all updates written to disk. You also need to deal with the often confusing naming of the SAS metadata backup: it is not a backup in the common sense, but an offloaded, consistent version.
- The web content server may also need attention here. http://support.sas.com/documentation/cdl/en/bisag/67481/HTML/default/viewer.htm#n1n8fnuni6kbjgn1805i...
For testing purposes you will need an isolated network segment: bring up the machine, verify it works as it should, then close/archive it as cold again.

DR Clustering (different locations):
Building a cold, unused datacenter used to be the usual approach, but with everything moving into the cloud it may be getting outdated. In a clustered approach, DR is included.
http://support.sas.com/documentation/cdl/en/bisag/67481/HTML/default/viewer.htm#n1w2q4quib18udn1h8oj...  (clustered metadata servers) Clustering is done for performance reasons as well as availability.


Backup & Recovery
This is not the same as DR, although the technical solutions can overlap, and there can be confusion about it.
The issue:
- A well-taken complete backup can be part of a DR plan:
  restore the backup to a new location and get it running.
- A DR implementation usually does not fulfill the backup/restore requirements:
  getting a single object, or several, back to a previous version in the operational environment.

For backup & recovery you need the metadata export to be capable of restoring dedicated parts to previous versions.
When components go through a development life cycle, there should already be something in place for that; archiving and documenting those exports should be sufficient.
The metadata can be used for a lot more things.
Allowing/promoting Enterprise Guide projects and PDF/Word documents in an ad hoc way treats the SAS metadata the same way as a Windows share.
If backup & restore requirements are defined as business governance, there must be something stated about that.
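Archiving and dating the exported packages, as suggested above, could be as simple as the following. The paths follow the original poster's script; the archive directory and the naming scheme are my assumptions:

```shell
#!/bin/ksh
# Hypothetical archiving of weekly .spk exports, keeping dated copies
# so individual objects can later be restored to a previous version.
packdir=${1:-/users/apps/stg/superbid/metadata_migration}
archdir=$packdir/archive
stamp=$(date +%Y%m%d)

if mkdir -p "$archdir" 2>/dev/null; then
    for spk in "$packdir"/*.spk; do
        [ -f "$spk" ] || continue          # skip when no packages exist yet
        cp -p "$spk" "$archdir/$(basename "$spk" .spk).$stamp.spk"
    done
fi
```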


---->-- ja karman --<-----