Hi All,

We have two separate environments (both SAS 9.4). In one environment (on-premise Linux) we have all our flows, DI jobs, and therefore all warehouse tables. In the other environment (multiple AWS instances) we just run our distributed VA (7.3).

Currently we zip up our warehouse tables, push them to the VA server, unpack them, and autoload them to LASR. This involves a number of scripts that have to be scheduled, and it also requires enough space to drop the archive, which raises our AWS cost.

We would like to write from DI Studio jobs directly to LASR; however, our VA environment does not have SAS/CONNECT. One suggestion was to define a LASR libname and run this data step:

data lasr_lib.table_name;
   set source_lib.table_name;
run;

That did not work: the table did not load to LASR (and it also has to be unloaded beforehand). However, the above piece of code does work with the (append=yes) option. Hence I don't have to unload the table; I just purge all the records from the LASR table and append the fresh data. I am not sure, though, how APPEND will perform for huge data files.

My question is: what is the best way to push tables to VA LASR when it is a separate environment? Maybe push them to VA Hadoop first and then locally to LASR.

Thanks!
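For reference, here is a rough sketch of the approach I am describing, assuming the SASIOLA engine is licensed on the DI side; the host name, port, tag, and library/table names are placeholders for our setup, not real values:

```sas
/* Point a libname at the remote LASR server via the SASIOLA engine.
   host=, port=, and tag= are placeholders for the actual VA server. */
libname lasr_lib sasiola host="va-server.example.com" port=10010 tag="hps";

/* Drop the in-memory table if it is already loaded; otherwise a plain
   load fails because the table has to be unloaded beforehand. */
proc datasets lib=lasr_lib nolist nowarn;
   delete table_name;
quit;

/* Load (or, with append=yes on an existing table, append to) LASR. */
data lasr_lib.table_name (append=yes);
   set source_lib.table_name;
run;
```

The drop-then-load step avoids keeping stale rows around, at the cost of the table briefly disappearing from reports; the append-only variant keeps the table available but, as noted, I don't know how it scales for huge data files.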