Hi Patric, let's go through your proposals (questions) step by step :) (I'll quote your text below).

"1. What you basically try to do is re-writing the deployment 'scripts' that come with DIS. That sounds like a very hard task. Is it worth it?"

Probably yes, it is worth it. On Friday I finished the first version of this set of macros that compares ETLs, and in the majority of my tests (I ran a lot of them) the macro returns the correct result: it either shows the actual differences between the jobs, or reports that no differences were found BUT that the search was performed ONLY over a specific list of metadata objects (transformation names, descriptions, user-written code, all transformation options, where/group by/having/join clauses, mappings, columns, expressions, etc.). Plus the macro runs in a few seconds (average time for an average ETL), so it is exactly what the customers wanted. I understand that a difference can hide in a very, very "remote" metadata sub-object, but I built the macro so that it is very simple to add new rules (things to compare), so it will keep improving over time.

"Couldn't you just firstly run such a script for all jobs (job names retrieved via metadata query) in both environments and then compare the generated SAS code (excluding things like metadata id's which always will differ). It also shouldn't be too hard to exclude job pre-processing stuff as all steps in DI generated code have clear headers (so you just exclude the section under the job pre-process header)."

It's a very good idea. Actually, when I started working on the task I thought about comparing the deployed ETL code, but we live in a non-ideal world ))) and there are several reasons that won't let me reach the goal with that solution. First of all, deploying all jobs on both metadata servers each night, for example, would take a huge amount of time; I'm almost sure the party would continue till the morning :) I'm not the person responsible for this part of the work, but I do have tasks with flows, and when I deployed (generated code for) a few ETLs it took a few seconds per ETL. And what if we have a few thousand ETLs? A few thousand multiplied by a few seconds is a few hours (e.g. 3,000 ETLs x 5 seconds is roughly 4 hours), and I am not sure our system administrator and DBA would accept such an approach. Besides, when we move new jobs (or new versions of jobs) to production we do a partial promotion, so all the other ETLs stay the same, without changes.

The next reason this variant isn't OK: our developers can edit some ETLs during the day, so code deployed in the morning will no longer be current in the afternoon. Of course, we could check the metadata timestamps to see whether a job has changed since the morning (see the first sketch below), but if it has changed, we still would not be able to compare it with the job from production, for example. And there are a few other minor things (deployment on test servers, differences between the deployed code and the real ETL in production (don't ask me why)) that won't let me compare deployed ETL code. In general your idea is probably the only one that would give a 100% reliable result, but unfortunately I can't use such an approach...

"Another way to go (also in DIS4.3): Before starting any new development cycle a DEV and TEST environment needs to get 'baselined' with production code/metadata objects. There is now archiving with version control in DIS4.3"

Actually, I'm at home and can't check the version of DI Studio, but we still use SAS 9.1.3, and moving to the next SAS version is postponed for now because our data warehouse is huge: a lot of ETLs, tables, flows, etc. I suppose DI Studio 4.3 comes with a newer version of SAS (probably 9.3)...
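By the way, here is a minimal sketch of the kind of metadata query I mean (this is not my comparison macro, just an illustration): it lists the Job objects in a repository with the DATA step metadata functions that exist since SAS 9.1.3 and pulls the MetadataUpdated timestamp, which is what we would check to see whether a job changed since the morning deployment. The connection options (host, port, credentials) are placeholders; adjust them to your environment.

/* List DI Studio Job objects and their last-modified timestamps. */
options metaserver="devmeta.mycompany.com"  /* placeholder host       */
        metaport=8561                       /* default metadata port  */
        metauser="sasadm"                   /* placeholder user       */
        metapass="XXXXXXXX"
        metarepository="Foundation";

data job_list;
   length uri jobname $256 updated $32;
   keep jobname updated;
   /* first call returns the total number of Job objects matching the query */
   nobj = metadata_getnobj("omsobj:Job?@Id contains '.'", 1, uri);
   do n = 1 to nobj;
      rc = metadata_getnobj("omsobj:Job?@Id contains '.'", n, uri);
      rc = metadata_getattr(uri, "Name", jobname);
      rc = metadata_getattr(uri, "MetadataUpdated", updated);
      output;
   end;
run;

Running the same step against DEV and PROD gives two job lists that can be matched by name before any deeper comparison.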
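And for completeness, a rough sketch of the generated-code comparison you describe (again only an illustration, not something I have tested): it strips the lines that always differ between environments and the job pre-process section before comparing. The file paths and the exact header text used to detect the pre-process and step sections are assumptions; they would have to match how your DI Studio version actually writes its code headers.

%macro strip(infile, out);
   data &out;
      infile "&infile" truncover;
      input line $char256.;
      retain skip 0;
      drop skip;
      /* start skipping at the job pre-process header (header text is an assumption) */
      if index(upcase(line), 'JOB PRE-PROCESS') then skip = 1;
      /* stop skipping at the next step header */
      else if skip and index(upcase(line), 'STEP:') then skip = 0;
      if skip then delete;
      /* drop lines that always differ between environments */
      if index(upcase(line), 'METADATA ID') or
         index(upcase(line), 'GENERATED:') then delete;
   run;
%mend strip;

/* hypothetical paths to the code deployed from DEV and PROD */
%strip(/dev/deploy/load_customer.sas,  devcode);
%strip(/prod/deploy/load_customer.sas, prodcode);

/* report the remaining line-by-line differences */
proc compare base=prodcode compare=devcode;
   var line;
run;

Maybe one day I will try something like this on a handful of jobs, just as a cross-check of my macro's results, even if it can't be the nightly solution.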
But the archiving/version-control solution is also interesting; I'll look into it more deeply when I come back from vacation. Thanks a lot, you and Linus gave really good advice; I'll try to propose something from it to the customers if the current variant gives unreliable or wrong results. Thanks one more time.