I'm currently using the Cloudera ODBC driver for Impala to bulk-load datasets to Hadoop. Part of the underlying workflow is that the dataset is written to a temporary file in text format on the client and then transferred to a directory in HDFS (/tmp). Is there a setting to have that text file compressed in order to speed up the transfer (that is, compressed on the client, uploaded, and then uncompressed on the host HDFS system)? Compressing the file before the transfer would greatly speed up the process.

The client is a SAS 9.4 (M4) workstation with SAS/ACCESS Interface to Impala (9.43), on a Windows 7 machine. The host is a Kerberized environment running the Cloudera distribution of Hadoop, and the connection goes through an SSH tunnel to the host. Below is an example of the SAS syntax for creating the table using the bulkload option.

/* Connect to the cluster using the Impala ODBC driver */
libname hdp impala dsn='DPL Impala 64bit' schema=test;

/* Create the table using bulkload */
data hdp.my_table (bulkload=yes dbcreate_table_opts='stored as parquet');
  set work.my_table;
run;
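In case it is useful for diagnosing where the time goes, here is how I have been watching what the engine and driver actually do during the load. This is a minimal sketch using the standard SASTRACE/SASTRACELOC system options (the libref and table names are just the placeholders from the example above); the exact log output will depend on the driver.

/* Turn on DBMS tracing so the SAS log shows the engine/driver calls
   made during the bulkload, including the staging and transfer step */
options sastrace=',,,d' sastraceloc=saslog nostsuffix;

/* Re-run the load with tracing active */
data hdp.my_table (bulkload=yes dbcreate_table_opts='stored as parquet');
  set work.my_table;
run;

/* Turn tracing back off */
options sastrace=off;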