SASKiwi is correct. You'll need to set the MapReduce option via PROPERTIES=. See the example below:
PROPERTIES="hive.fetch.task.conversion=minimal;hive.fetch.task.conversion.threshold=-1";
For example:
proc sql;
   connect to hadoop (......   /* connection options elided */
      /* needed for the positional "group by 1 order by 1" below */
      PROPERTIES="hive.groupby.orderby.position.alias=true");
   select * from connection to hadoop   /* explicit pass-through */
   ( select X_FACILITY_OFFER_CD,
            count(*) as count,
            sum(X_FACILITY_OFFERED_AMT) as X_FACILITY_OFFERED_AMT_SUM,
            sum(X_FACILITY_OFFER_SEQ_NO) as X_FACILITY_OFFER_SEQ_NO_SUM
     from UC5_TEST3A
     group by 1 order by 1
   );
   disconnect from hadoop;
quit;
The PROPERTIES= option can be added either on the LIBNAME statement or on the Hadoop connection string in explicit pass-through, which is what you used in your example.
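For completeness, a LIBNAME version might look like the sketch below; the server, user, and password values are placeholders, so substitute your own connection options:

libname myhive hadoop server="hive-server.example.com" user=myuser password=XXXXXXXX
   PROPERTIES="hive.groupby.orderby.position.alias=true";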
Also, depending on how the Hadoop environment has been set up, altering the memory for a map task via SAS code may not actually take effect; those settings could be locked down by the Hadoop administrator.
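If the cluster does allow client-side overrides, the map-task memory settings can be passed the same way. A rough sketch (the connection options and property values below are only illustrative, and whether they take effect depends entirely on how the administrator has configured the cluster):

proc sql;
   connect to hadoop (server="hive-server.example.com" user=myuser password=XXXXXXXX
      PROPERTIES="mapreduce.map.memory.mb=4096;mapreduce.map.java.opts=-Xmx3276m");
   /* ... run your pass-through query here ... */
   disconnect from hadoop;
quit;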