Thanks I-Kong for the answer. By transformation logic I mean the ability to define calculated fields, based on the initial ones, using the functions that SAS provides.

The scenario is the following. I am loading data from a Teradata DWH. My users want to analyze facts (the sales) from different points of view: Customer, Territory, Product, etc. They want to attach and detach, on the fly, one or more of these dimension tables to the fact table to produce several analyses of the data. The dimension tables are the same for the entire enterprise and can be attached to several other fact tables.

To reduce the data redundancy that comes from fully denormalized tables (obtained by pushing the join and transformation logic down into Teradata), I am thinking of loading the fact and dimension tables separately into LASR memory and performing the join there, for example in the Visual Data Builder. In this way each Data Steward, by deciding which in-memory joins to execute, can define the appropriate star schema, possibly with calculated fields, to publish to the data analysts and report builders. This seems to me the most flexible approach for data analysis and exploration.

The other option would be to build the star schema in the Data Builder itself. But that option limits the selection of fields, pushing the creation of calculated fields into the report or exploration phase.

What is your opinion on the pros and cons of these two scenarios? Which is the more efficient in terms of speed of analysis? ...and in terms of memory allocated on the TD720 nodes?

Thanks
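To make the first scenario concrete, here is a rough sketch of what I have in mind. All table names, keys, host names, and ports are hypothetical, and the SCHEMA statement options should be checked against the SAS LASR Analytic Server documentation for your release; this is an illustration of the approach, not tested code:

```sas
/* Load the fact and dimension tables into LASR separately
   (no denormalization pushed down to Teradata) */
proc lasr add data=td.sales   port=10010; run;  /* fact table      */
proc lasr add data=td.customer port=10010; run; /* dimension table */
proc lasr add data=td.product  port=10010; run; /* dimension table */

libname lasr sasiola host="lasr-head-node" port=10010 tag='hps';

/* Attach the dimensions to the fact table in memory on demand,
   forming the star schema only when a Data Steward needs it.
   DIMKEY=/FACTKEY= names are assumptions to verify in the docs. */
proc imstat;
   table lasr.sales;
   schema customer (dimkey=custkey factkey=custkey)
          product  (dimkey=prodkey factkey=prodkey) / temptable;
quit;
```

The idea is that each dimension table is held in memory once and joined to whichever fact table an analysis requires, instead of materializing a fully denormalized copy per fact table.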