


We are facing a major showstopper with the current limitation that INNER and OUTER joins cannot be mixed in queries, which is impacting the usage of our data-providing system that SAS connects to.


Giving more options in the "libname jdbc" statement would give the user the ability to match the query generation in SAS with the real capabilities of the underlying system accessed through JDBC.


Here are a few of those options that would be nice to add:

- underlying system supports mixing inner and outer joins - this limitation in particular is much more impactful than the others. Most DBMS systems allow this today, so SAS should allow it for JDBC data sources.

- underlying system does not support JDBC escape sequences

- pass custom connection properties (mapping to the java.util.Properties class) - some JDBC drivers do not support passing all parameters in the URL.


Thank you !


What version of SAS does this apply to? What JDBC driver and version are you using? In my experience more recent drivers work better. What external database are you reading?


Have you consulted SAS Tech Support? If so what did they say? 

Tourmaline | Level 20

@SASKiwi the documentation specifically states that you can't mix inner and outer joins, so I think this ballot is legit.

"Outer joins between two or more tables can be passed to the DBMS. However, the outer joins must not be mixed with inner joins in a query."

Fluorite | Level 6

@SASKiwi As mentioned by @LinusH, the documentation states that it is a limitation.

I have created that ballot after the SAS Tech support asked me to do so, for that particular reason.


The version of SAS/ACCESS Interface to JDBC we have is 9.4M6. The DB vendor's drivers are the latest available. But as mentioned, the driver is not the issue in this case.

The data-providing platform we are accessing is a Denodo installation, which supports mixing outer and inner joins (many applications in the organization query that system with no issue; SAS is the only one with this limitation).


@sdut  - Is ODBC an option here if you have SAS/ACCESS to ODBC?


I'm assuming this JDBC restriction doesn't apply to SQL pass-through though.

Fluorite | Level 6

@SASKiwi Denodo does indeed come with an ODBC driver. However, the same restriction on mixing inner and outer joins applies in SAS for both ODBC and JDBC, as stated in the documentation and confirmed by our tests.


SQL pass-through is currently the only way to make it work. But it is a convoluted, user-unfriendly solution that our users have already rejected.

SAS Employee
Status changed to: Suggestion Under Review
SAS Employee

Hello @sdut , I was reviewing old tracks today and had some questions on this.


Firstly, I saw the comment about not being able to mix INNER/OUTER joins from within SAS. This is documented as a limitation. However, the following snippet worked for me and was fully pushed down. Is there a better example I can use for testing?


data x.class; set sashelp.class; run;

proc sql;
select * from x.class t1
inner join x.class t2 on t1.name = t2.name
left outer join x.class t3 on t1.name = t3.name;
quit;


One workaround that I've seen people use is to create a DBMS VIEW with the necessary JOIN operations, and then have the SAS user query this DBMS view instead of writing their own JOIN operations.


Regarding "underlying system does not support JDBC escape sequences": I think this was in regard to a Denodo limitation. There is an undocumented option JDBC_ESCAPES= ALL | NONE, with a default of ALL. By setting this to NONE, SAS will not generate JDBC escape syntax, which is useful for some databases, like Denodo, that do not support this syntax. I think this question is answered, but let me know if more info is required here.

Lastly, regarding custom connection properties, I haven't yet seen an example database where this is required, although certainly there may be some out there. Was there a specific target of interest that we should look into?

Thanks for the post, and let us know if there is anything else to add.
Fluorite | Level 6

Hello @billyb532 . Nice to see this old suggestion getting a bit of attention 🙂


I will try to address your points. SAS is not really my forte and I act as a middle man in this situation.

  • Yes, we are aware it is a limitation. However, the essence of that suggestion was precisely to lift that limitation. Same goes for ODBC, ideally that limitation would be lifted on ODBC as well.
  • Here's the anonymized code snippet we are using for our testing. I can confirm the issue is still happening:
      %let jdbcstring= %NRSTR('jdbc:denodo://<host>:<port>/<db>?<jdbc options>');
      libname mylib jdbc
      	url=&jdbcstring
      	classpath="<directory of jars>";
      proc sql;
      create table ttt as
      	select *
      	from mylib.cust cust
      		inner join mylib.exp exp
      			on cust.fielda = exp.fielda
      		left join mylib.int int
      			on exp.fieldb = int.fieldb
      			and exp.fieldc = int.fieldc
      			and int.fieldd = value
      	where exp.fielde = value
      		and cust.fieldf = value;
      quit;
  • Unfortunately, using DBMS views would not be an option in that case. We are using EG as a self-service tool precisely to prevent users from creating views in the DBMS.
  • Thank you for letting us know about the escape sequence hack, we will definitely look into that!
  • Regarding connection properties, some JDBC drivers do not implement passing all possible connection parameters in the URL, but instead require them to be passed when the java Driver object is instantiated. This is the case for some big DB vendors such as Vertica or MariaDB, for example. The idea would be to be able to pass those instantiation parameters through the libname statement instead of the JDBC URL.
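
For reference, the standard JDBC API already defines an overload of DriverManager.getConnection that accepts a java.util.Properties object, which is the mechanism such drivers expect. Below is a minimal sketch of the pattern this suggestion asks SAS to expose; the URL and property names are illustrative placeholders, not actual Vertica/MariaDB/Denodo options:

```java
import java.util.Properties;

public class JdbcPropsSketch {
    public static void main(String[] args) {
        // Parameters that some drivers only accept through the
        // Properties overload, not as URL query parameters.
        // (Names below are illustrative, not real driver options.)
        Properties props = new Properties();
        props.setProperty("user", "myuser");
        props.setProperty("password", "secret");
        props.setProperty("connectTimeout", "5000");

        // Standard JDBC entry point taking a Properties object:
        //   java.sql.Connection conn =
        //       java.sql.DriverManager.getConnection(
        //           "jdbc:denodo://<host>:<port>/<db>", props);
        // (left commented out: no driver or server in this sketch)

        System.out.println("user=" + props.getProperty("user"));
        System.out.println("connectTimeout=" + props.getProperty("connectTimeout"));
    }
}
```

A hypothetical libname option could simply map key=value pairs onto such a Properties object before the driver is instantiated.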

I hope my answers helped you! Thanks again for looking into that.





SAS Employee

Hello @sdut, thank you for the reply!


I tried this query, creating my own test tables, and it did pass down. I ran this with Denodo, backed by Hive. If your user is still having issues with query pass-down, you might want to enter a tech support track for investigation.



options sastrace=',,,d' sastraceloc=saslog nostsuffix msglevel=i
linesize=132 pagesize=max validvarname=any validmemname=extend noquotelenmax;

LIBNAME x SASIOJDB jdbc_escapes=none
user=XXXXXX password=XXXXXX
dbmax_text=1024 dbmax_text_types=all
preserve_names=no /* materialized=yes */ ;

proc sql;
connect using x;
execute by x (create materialized table cust (personal_id varchar(10), fielda varchar(10), fieldf varchar(10) ));
execute by x (create materialized table exp (balance_amt varchar(10), fielda varchar(10), fieldc varchar(10), fieldb varchar(10), fielde varchar(10) ));
execute by x (create materialized table int (interest_rate_pctv varchar(10), fieldb varchar(10), fieldc varchar(10), fieldd varchar(10) ));
quit;

proc sql;
select cust.personal_id, exp.balance_amt, int.interest_rate_pctv
from x.cust
inner join x.exp on cust.fielda = exp.fielda
left join x.int on exp.fieldb = int.fieldb and exp.fieldc = int.fieldc and int.fieldd = 'test'
where exp.fielde = 'test' and cust.fieldf = 'test';
quit;

The resulting SAS log shows full push-down:

JDBC_10: Prepared: on connection 0
SELECT * FROM "sas_accesstesting"."cust" WHERE 0=1

JDBC_11: Prepared: on connection 0
SELECT * FROM "sas_accesstesting"."exp" WHERE 0=1

JDBC_12: Prepared: on connection 0
SELECT * FROM "sas_accesstesting"."int" WHERE 0=1

JDBC_13: Prepared: on connection 0
select TXT_1."personal_id", TXT_2."balance_amt", TXT_3."interest_rate_pctv" from "cust" TXT_1 inner join "exp" TXT_2 on TXT_1."fielda" = TXT_2."fielda" left join "int" TXT_3 on ((TXT_2."fieldb" = TXT_3."fieldb") and (TXT_2."fieldc" = TXT_3."fieldc")) and ((TXT_3."fieldd" = 'test')) where (TXT_1."fieldf" = 'test') and (TXT_2."fielde" = 'test')

ACCESS ENGINE: SQL statement was passed to the DBMS for fetching data.
NOTE: No rows were selected.



Also, while researching this track I noticed that version 8.0u20210209 of the Denodo JDBC driver does support escape syntax. From their web page:

"The driver now supports the JDBC escape syntax for date and time literals. For date literals, this is {d 'yyyy-mm-dd'}, for timestamps it is {ts 'yyyy-mm-dd hh:mm:ss.f...'}, and for time literals it is {t 'hh:mm:ss'}."
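
To make the escape syntax above concrete, here is a small illustration (plain string handling only, no live connection; the table and column names are made up) of how those literals look when embedded in SQL text before a conforming driver rewrites them into the database's native literal format:

```java
public class JdbcEscapeLiterals {
    public static void main(String[] args) {
        // JDBC escape sequences for date, timestamp, and time literals,
        // matching the forms quoted from the Denodo release note.
        String dateLit = "{d '2021-02-09'}";
        String tsLit   = "{ts '2021-02-09 13:45:30.123'}";
        String timeLit = "{t '13:45:30'}";

        // Embedded in SQL text; a driver that supports the escape
        // syntax translates these into native SQL literals.
        String sql = "SELECT * FROM exp WHERE load_date = " + dateLit;

        System.out.println(sql);
        System.out.println(tsLit);
        System.out.println(timeLit);
    }
}
```

With JDBC_ESCAPES=NONE, SAS would skip generating these forms entirely, which is the useful behavior for older drivers that reject them.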


Calcite | Level 5

Does the same issue occur with JDBC/ODBC connections to Databricks? If so, it would be great to add this capability to JDBC/ODBC connections to Databricks.