DATA Step, Macro, Functions and more

Concurrent read/write to datasets

Occasional Contributor
Posts: 10



I'm running Base SAS on Solaris. I have one small table that is used for logging, and many processes use it. Each process touches it only once: it reads the last record, increments the ID, and stores the new ID back in the table. That's all they do with it.
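(Roughly, the allocation step looks like this sketch; IDTABLE and the variable ID are placeholder names, not our real ones:)

```sas
/* Sketch of the ID-allocation step; IDTABLE and ID are hypothetical names. */
data idtable;
   modify idtable;   /* open the one-record logging table in place */
   id = id + 1;      /* increment the last stored ID               */
   replace;          /* write the new ID back to the same record   */
run;
```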

The issue is that processes frequently fail because this table is locked.
What are the general ways to work around this?

We also have the SPD engine, but I don't know it well yet. Some people around me say that SPD helps with this issue. But there is an issue with the SPD service itself: it also fails (actually not so frequently, but one failure causes big problems).
Super Contributor
Posts: 474

Re: Concurrent read/write to datasets

Several paths are possible.

First, you should be aware that this is a system-services concern.
SAS datasets are stored in system files, and access to system files is managed by the OS.

Some systems provide file-locking management (z/OS); others don't (UNIX, Windows).

SAS has its own solution through the SAS/SHARE server.

Alternatively, you can implement your own file-locking management with the explicit LOCK statement.
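As a sketch, the LOCK statement can be combined with the SYSLCKRC automatic macro variable in a retry loop. The table name, retry count, and wait time below are hypothetical, and this assumes a SAS release where the SLEEP function accepts a unit argument:

```sas
/* Hedged sketch: try to lock a shared table, retrying a few times.     */
/* LOGLIB.IDTABLE and the tries/wait values are illustrative only.      */
%macro get_lock(table, tries=5, wait=2);
   %local i rc;
   %do i = 1 %to &tries;
      lock &table;                          /* request an exclusive lock */
      %if &SYSLCKRC = 0 %then %goto done;   /* 0 = lock was obtained     */
      %let rc = %sysfunc(sleep(&wait, 1));  /* wait &wait seconds        */
   %end;
   %put WARNING: could not lock &table after &tries attempts.;
%done:
%mend get_lock;
```

A failed LOCK attempt still writes an ERROR to the log, so this sketch only handles the retry logic, not log hygiene.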


Online documentation for the LOCK statement here:

You'll probably need this too (checking for a successful LOCK):

I have seen other implementations, such as using an external relational database (Oracle, DB2, SQL Server) to manage the locking, since lock management is native to nearly all databases. But that is surely the most complex way to do it.

If you're going to implement your own locking management, be aware of the deadlock situation.

Cheers from Portugal.

Daniel Santos @
Occasional Contributor
Posts: 10

Re: Concurrent read/write to datasets

Hi again,
Thanks for the answer.

When I wrote the first message I didn't know enough about how we use the shared table and how we lock it.

We are trying to lock it with the LOCK statement exactly as described in this paper, but it doesn't work well, despite the trick with the SCL open/close strategy given in the paper.
The problem is that when one process has locked the table and another tries to acquire the lock with "lock TableName", the LOCK statement writes an ERROR message to the log. That is not acceptable for our code, which is built so that any ERROR message in the log means the process failed.
We have a long batch tree of processes, and when one of them fails (even if it actually ran fine but produced a spurious ERROR message in the log), the others don't start.
Super Contributor
Posts: 474

Re: Concurrent read/write to datasets

Well, in theory the QUERY or SHOW options of the LOCK statement should let you check whether a table has been locked by another process.

Unfortunately there is a bug, and those options might not work as expected:

But if the log is the only problem, try the following macro:

%macro lock_try(DATA);
   filename tempfile temp;           /* temporary file for the log      */
   proc printto log=tempfile; run;   /* redirect the log                */
   lock &DATA;                       /* any ERROR goes to tempfile      */
   proc printto; run;                /* restore the normal log          */
   filename tempfile clear;          /* release the temporary fileref   */
%mend lock_try;


It will attempt to lock the given table while the log is redirected to a temporary file, so no ERROR message appears in your real log.

To see whether the lock succeeded, just check the value of the SYSLCKRC automatic macro variable (0 means the lock was obtained).
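For example (WORK.TEST is just an illustration):

```sas
/* Hedged usage sketch for the macro above; WORK.TEST is hypothetical. */
%lock_try(work.test)
%if &SYSLCKRC = 0 %then %do;
   /* we hold the lock: safely read/update the table here ...  */
   lock work.test clear;   /* release the lock when done        */
%end;
%else %put NOTE: table is locked by another process, retry later.;
```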

More about redirecting output/log to a file:

Cheers from Portugal.

Daniel Santos @