Hi experts,
We are trying to copy a WORK table (72 million observations) to a PostgreSQL database and it is taking a long time to run. We had a similar issue copying a huge WORK table to a SQL Server database, and we solved it by setting the INSERTBUFF option (32767) at the library level.
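For reference, the SQL Server fix looks roughly like this (a sketch only; the DSN, user, and password are placeholders, not our real connection details):

```sas
/* Library-level INSERTBUFF for the SQL Server connection.
   DSN and credentials below are placeholders. */
libname staging odbc dsn=SQLSVR_DSN user=myuser password="&pw"
        insertbuff=32767;
```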
Below is a comparison between copying the WORK table to SQL Server (which we have already optimized) and copying it to the PostgreSQL database:
Postgres execution (STG_PG.PRUEBA3):
SQL Server execution (staging.PRUEBA3):
This is the odbc.ini configuration for the SAS/ACCESS Interface to PostgreSQL:
Finally, we executed the same code as before with BULKLOAD, BL_PSQL_PATH, and BL_DELETE_DATAFILE set as dataset options, and the real time decreased:
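The dataset-level version that worked looks roughly like this (a sketch; the psql path is a placeholder for wherever the psql client lives on our server):

```sas
/* Bulk load via dataset options. BL_PSQL_PATH must point at the
   psql client binary (the path below is a placeholder). */
data stg_pg.prueba3 (bulkload=yes
                     bl_psql_path='/usr/pgsql-13/bin/psql'
                     bl_delete_datafile=yes);
   set work.prueba3;
run;
```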
Is there a way to configure these options (BL_PSQL_PATH and BL_DELETE_DATAFILE) at the library level? How can I set them so that bulk loading is always active?
If anyone has faced this problem before or knows what is missing, please let us know. Any help will be more than welcome.
Thanks in advance.
I would suggest opening a support ticket with SAS so they can walk you through the options available.
Not sure why you went to the effort of taking photographs of your SAS code and log and then annotating them with little red squares. It would be a lot easier to just copy and paste the actual text, and it would be a lot easier to read as well.
What ENGINE are you using to connect to POSTGRES? What ENGINE did you use to connect to the other database? Have you licensed any of the database-specific SAS/ACCESS products from SAS?
Hi @Tom , thanks for the suggestions.
I am using the POSTGRES and ODBC engines to connect to PostgreSQL and SQL Server respectively. Both products are licensed.
Performance is quite hard to tackle.
You want to isolate the different system components.
For instance, can you do isolated (local) bulk loads in each DB? That will tell you whether this is a DB-specific problem or not.
Then there's networking.
Also, is there a way to verify that bulk load is actually in effect for Postgres?
Finally, I get a bit confused by the log for Postgres, since it states "SQLSVR engine" - maybe a "bug" in the SAS/ACCESS to Postgres engine?
You may want to try the DBCOMMIT= option as well, to see if that improves performance. Setting the bulk-load options for all POSTGRES queries seems a bit dangerous to me, as they are really only appropriate for large data loads. As @Tom says, working through the options with Tech Support is your best option.
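What I mean is something like this (a sketch only; the commit size is illustrative, not a tuned recommendation):

```sas
/* DBCOMMIT= controls how many rows are inserted between commits;
   larger values mean fewer commits. DBCOMMIT=0 would commit only
   once, after the whole load. */
data stg_pg.prueba3 (dbcommit=100000 insertbuff=32767);
   set work.prueba3;
run;
```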
You can use PROC SQL, PROC COPY, or PROC APPEND to copy data into PG.
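With PROC APPEND, for example, the bulk-load dataset options go on the BASE= table (a sketch; the table names are from this thread, and the psql path is a placeholder):

```sas
/* Append the WORK table into PostgreSQL with bulk load enabled. */
proc append base=stg_pg.prueba3 (bulkload=yes
                                 bl_psql_path='/usr/pgsql-13/bin/psql'
                                 bl_delete_datafile=yes)
            data=work.prueba3;
run;
```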