12-02-2013 02:57 AM
I am thinking of changing the read isolation level for our SQL Server libraries to Read Uncommitted. This means reads will not take shared locks on the database, but may return uncommitted ("dirty") data. In our case, the timing of our overnight run means there should not be any dirty data to be read at that point. I have heard that this change is likely to provide a performance boost.
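For reference, on the SQL Server side this is what Read Uncommitted corresponds to (a sketch; the table and column names are made up for illustration):

```sql
-- Session-level: all subsequent reads in this session are dirty reads
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT CustomerID, OrderTotal FROM dbo.Orders;

-- Per-query equivalent: the NOLOCK table hint
SELECT CustomerID, OrderTotal
FROM dbo.Orders WITH (NOLOCK);
```

Setting the isolation level on the SAS library means every query the library generates behaves like the first form, without needing hints in individual queries.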
Can anyone advise whether this is likely to provide any performance boost? I know it all depends on what we are doing.
Also, is it likely to help with the locking problems we are also experiencing at times? It seems to me it should help, since readers would no longer need to take locks on the tables.
We are using SQL Server on a Windows 2008 R2 platform, which is accessed via OLEDB since there is no SAS/ACCESS Interface to SQL Server on Windows (for some strange reason). If anyone wants to do this, the following instructions can be used to change a library to use uncommitted reads.
If you are using your own libname statements, you would set the READ_ISOLATION_LEVEL= option ... SAS/ACCESS(R) 9.2 for Relational Databases: Reference, Fourth Edition
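A minimal sketch of such a libname statement, assuming the OLEDB engine with the SQLOLEDB provider (the server and database names are placeholders):

```sas
/* Hypothetical server/database names - substitute your own */
libname sqlsrv oledb
   provider=sqloledb
   properties=("data source"="MYSERVER" "initial catalog"="MYDB")
   read_isolation_level=RU;  /* RU = read uncommitted (dirty reads) */
```

Check the SAS/ACCESS documentation linked above for the isolation-level values your engine version accepts.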
Beware, though, that this might not perform well if you are joining data from some libraries that use this and some that don't... SAS/ACCESS(R) 9.3 for Relational Databases: Reference, Second Edition
12-02-2013 04:12 AM
Thanks for a (great?) tip and the quick guide. Please get back to us if you test this!
As for your question, however, it is quite SQL Server specific, and I believe you might find a better response in a forum dedicated to that.
12-02-2013 03:24 PM
At one place I worked in the past, dirty reads were used for querying a DB2 data warehouse. I don't remember the details, but the performance improvements were quite significant, and it also reduced the workload on the database.
I would have thought that multiple parallel reads shouldn't cause table-locking issues. So if such issues occur, does that mean write operations are happening at the same time? If so, you could actually get "dirty data", and you would need to find a way to control the job flow so that reads only happen when no write operations to the tables are in progress.
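One simple way to coordinate that (a sketch only; the table and column names are invented for illustration) is a control table that the overnight batch flags while it is loading, and that readers check before querying with dirty reads:

```sql
-- Hypothetical control table maintained by the overnight batch
CREATE TABLE dbo.etl_control (
    table_name  varchar(128) NOT NULL PRIMARY KEY,
    is_loading  bit NOT NULL DEFAULT 0
);

-- Loader sets the flag around its write window
UPDATE dbo.etl_control SET is_loading = 1 WHERE table_name = 'Orders';
-- ... load runs ...
UPDATE dbo.etl_control SET is_loading = 0 WHERE table_name = 'Orders';

-- Readers check the flag before reading with Read Uncommitted
SELECT is_loading FROM dbo.etl_control WHERE table_name = 'Orders';
```

This doesn't prevent a read during a load, but it gives jobs a cheap signal to wait on before starting one.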
For the DB2 warehouse example:
There had been several cases where a SAS EG user locked a table overnight. As this was a corporate warehouse and a failed overnight batch had quite an impact, one other measure taken was to lock out all SAS users during the overnight batch window for loading the database (by revoking their grants and re-establishing them after the load). This was years ago, and I'm sure a current DB2 version offers better means of preventing "normal" users from locking tables.