
Using Macro Variable in Column Name DI Studio

Occasional Contributor
Posts: 12

Using Macro Variable in Column Name DI Studio

In SAS, I can attach a macro variable to a column name.  For example:

%let year=2013;

proc sql;
   create table student as
   select id, major as MAJOR_&YEAR
   ...

This results in a column name of MAJOR_2013.

I'm trying to do the same thing in DI Studio. When I map from the source table to the target, I rename the target table column to MAJOR_&YEAR. The result is a column literally named MAJOR_&YEAR rather than MAJOR_2013 as I get in SAS. Even if I put the macro variable in quotes, like MAJOR_"&YEAR", the resulting target column name is MAJOR_"&YEAR". Either I'm doing something wrong in DI Studio, or DI Studio doesn't behave the same as SAS in this particular situation. Has anyone successfully used a macro variable in a column name in DI Studio? Thanks.

SAS Employee
Posts: 75

Re: Using Macro Variable in Column Name DI Studio

Is it possible that the SAS Application Server that executes the job cannot access the library where the macros are stored? If so, register the library and assign it to the application server.

Occasional Contributor
Posts: 12

Re: Using Macro Variable in Column Name DI Studio

I create the macro variable in a user-written transformation at the start of the job, so the macro variable is available.  I can use it in logic successfully.
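
For reference, a minimal sketch of what that transformation's code boils down to (using the YEAR value from my original example):

%let year = 2013;
%put NOTE: Macro variable YEAR is set to &year.;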

Respected Advisor
Posts: 3,900

Re: Using Macro Variable in Column Name DI Studio


"...or DI Studio doesn't behave the same as SAS"

I'm picking this bit from your post because it's important that you understand what DI Studio does and what "SAS" does.

DI Studio is a client where you define ETL jobs in metadata. Based on this metadata, DI Studio then generates SAS code which gets executed on the SAS server. So it is always the same "SAS" that actually executes the code.

What you need to do is look at the generated code - and possibly the log - because this will tell you what is happening and why things are not working.
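
A quick way to verify this is to switch on the macro debugging options in the job and check the log for the resolution (a sketch, using the YEAR variable from the original post):

options mprint symbolgen;
%put NOTE: YEAR resolves to &year.;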

Creating columns via code (e.g. using Proc Transpose) is quite easy in SAS, but it is something of an issue with any ETL tool (and with databases too, for that matter). You need (and want) stable table structures in order to map columns from source to target.

You can of course re-create the table every time you run the code and change the table structure (number and names of columns) with every run. If you then also need table metadata for this, you can execute Proc Metadata, which can create or update the corresponding table metadata for you. But what are you going to do with that table metadata then? The new columns won't be mapped to anything in your flow.
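
As a side note, such metadata updates are often done with Proc Metalib rather than hand-written Proc Metadata calls; a rough sketch, with the library and table names purely hypothetical and assuming the metadata connection options are already in place:

proc metalib;
   omr (library="My DI Library");  /* metadata name of the registered library */
   select (student);               /* table whose metadata should be refreshed */
   report;                         /* write a summary of changes to the output */
run;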

You might need to re-think your approach in an ETL context and go for a long table structure instead of a wide one, i.e. have a year (or date) column and several rows instead of several columns.
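
For illustration, a minimal sketch of such a long structure, plus Proc Transpose deriving the wide form on demand (data and names are hypothetical):

/* Long structure: stable columns, one row per student and year */
data student_major;
   input id year major $;
   datalines;
1 2013 MATH
1 2014 STAT
2 2013 HIST
;
run;

/* Wide view derived only where needed; creates MAJOR_2013, MAJOR_2014 */
proc sort data=student_major;
   by id;
run;

proc transpose data=student_major out=student_wide(drop=_name_) prefix=MAJOR_;
   by id;
   id year;
   var major;
run;

The ETL flow then only ever maps the stable long structure, and the wide table exists just where it is really needed (e.g. for reporting).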
