Since replacing Cloudera with Snowflake a few years ago, we've been in a constant battle with the Snowflake ODBC connection. Getting data to and from Snowflake is fine - but whenever we have to access the metadata, the process slows to a crawl. It appears to be scanning all of the metadata, possibly because the underscore in our schema names (*****_dev, *****_test and so on) is being treated as a wildcard by the ODBC metadata calls. Yeah, bit silly, but there you go.
I've written my bespoke ingestion code within DI to compensate for that: a call to the exist function takes thirty seconds or so, whereas using the open function and checking for success/failure is virtually instantaneous. While we investigate what's involved in changing the schema names, there's another issue with pass-through SQL joins, related to automatic deletion of the target table.
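For what it's worth, the check I use looks roughly like this (a sketch only - the SNOWLIB libref, table name and macro name are made up for illustration, not our real ones):

%macro check_table(lib, tbl);
   /* Open the table directly instead of calling exist(),        */
   /* which is what triggers the slow metadata scan on Snowflake */
   %local dsid rc;
   %let dsid = %sysfunc(open(&lib..&tbl));
   %if &dsid > 0 %then %do;
      %put NOTE: &lib..&tbl exists.;
      %let rc = %sysfunc(close(&dsid));
   %end;
   %else %put NOTE: &lib..&tbl was not found.;
%mend check_table;

%check_table(SNOWLIB, TARGET_TABLE);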
If I set up a user-written transformation, I can tell it not to delete the target table (Inputs/Outputs > Automatically generate delete code for outputs). But if I use an SQL Join, it generates a proc datasets delete step and there doesn't seem to be any way around it. For a SAS output dataset it doesn't matter, but if the target is a table accessed via a Snowflake libname, we're hit with the 30-second delay again while the whole schema's metadata is scanned to find the table. If there are half a dozen joins in a job, that's quite a hit.
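For context, the step the SQL Join generates ahead of the join looks roughly like this (again a sketch - the actual generated code and names will differ):

proc datasets lib=SNOWLIB nolist nowarn;
   /* This is the bit I can't switch off: the target is dropped  */
   /* before the join recreates it. Against the Snowflake        */
   /* libname, this is where the 30-second metadata scan hits,   */
   /* once per join in the job.                                  */
   delete TARGET_TABLE;
quit;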
Is there any way to turn this behaviour off? I've been through all the options and I can't find it.