Hi @Hb21Pzn,
I think the performance question is still not answered. Why does a "delete from table" take so long in an RDBMS? First of all, this is not a SAS issue: if you do this in Teradata Administrator or BTEQ it will take equally long. For starters, it is because the RDBMS does journalling. When a transaction fails it needs to be rolled back, and creating such a rollback log is very expensive and can potentially overflow buffers and bring a system down. Secondly, an index may need rework for every deleted row. Also very expensive indeed. Another cause can be the repeated validation of referential integrity (RI) constraints: if another table has an integrity dependency on your table, every delete will require a validation of that constraint. These checks can be disabled or postponed, but sooner or later they will cost you. Due to all this, a "delete from table" is considered bad practice and will lead to a visit and a fatherly speech from your trusty DBA if you do it often (meaning more than once).
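To picture that RI cost, here is a minimal standard-SQL sketch (table and column names are hypothetical) where deleting each parent row forces a lookup in the child table:

    CREATE TABLE orders (
        order_id INTEGER NOT NULL PRIMARY KEY
    );
    CREATE TABLE order_lines (
        line_id  INTEGER NOT NULL PRIMARY KEY,
        order_id INTEGER NOT NULL REFERENCES orders (order_id)
    );
    /* Every row removed here must first be checked against order_lines */
    DELETE FROM orders;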
Oracle has a TRUNCATE statement. TD has the ALL keyword: DELETE FROM TABLE ALL;. In TD the ALL keyword avoids transaction logging of each row's deletion: it wipes the cylinder headers and marks the table as empty. This is called a fastpath delete. It means a brute-force deletion of rows, neglecting indexes and RI constraints. Messy but fast. Maybe just what you need.
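Side by side, the two fast variants look like this (mytable is a placeholder):

    /* Oracle: deallocates the data without per-row logging */
    TRUNCATE TABLE mytable;

    /* Teradata: fastpath delete of the whole table */
    DELETE FROM mytable ALL;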
Because of all this, the DI Studio data loader has options for choosing how to clear data from a table. Hopefully you can now tell why.
So that's why dropping and re-creating the table altogether is often your best bet. Or, if you use explicit pass-through, you can code the ALL keyword in your SQL.
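A minimal sketch of such explicit pass-through (server, credentials, and table names are placeholders):

    proc sql;
       connect to teradata (server=tdprod user=myuser password=XXXXXX);
       /* Runs natively in Teradata, so the fastpath delete applies */
       execute (delete from mydb.mytable all) by teradata;
       disconnect from teradata;
    quit;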
Hope this helps,
- Jan.
* Edited for typos.