I'm running Base SAS on Solaris. I have one small table which is used for logging. Many processes use it, each only once: they get a new ID from this table by incrementing the last record and storing the new ID back. That's all they do with it.
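For reference, the increment step looks roughly like this (a minimal sketch; the libref IDS, table name IDTABLE, and column LASTID are assumed, not taken from our actual code):

```sas
/* Sketch of the ID-increment step: update the single record
   in place and capture the new value in a macro variable.   */
data ids.idtable;
   modify ids.idtable;          /* in-place update, no rewrite of the table */
   lastid = lastid + 1;
   call symputx('newid', lastid);
run;
```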
The issue is that processes frequently fail because this table is locked.
What are the general ways to work around this?
We also have the SPD engine, but I don't know much about it yet. Some people around me say that SPD helps with this issue. But there is an issue with the SPD service itself: it fails frequently too (actually not so frequently, but one failure causes big problems).
I have seen other implementations, such as using an external relational DB (Oracle, DB2, SQL Server) to manage the locking problem (since lock management is natively implemented in nearly all databases). But it is certainly a more complex way to do this.
When I was writing the first message, I didn't know enough about how we use the shared table and how we lock it.
We are trying to lock it with the LOCK statement, exactly as described in this paper: http://www.lexjansen.com/pharmasug/2005/posters/po33.pdf
But it doesn't work well, despite the SCL open/close trick given in the paper.
The problem is that when one process has locked the table and another tries to acquire the lock with "lock TableName", the LOCK statement writes an ERROR message to the log. That is not acceptable for our code, which is built so that if there is an ERROR message in the log, it assumes the process failed.
And we have a long batch tree of processes; when one of them "fails" (even if it actually ran fine and only produced a spurious ERROR message in the log), the downstream ones don't start.
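One variation on the paper's open/close trick that may help: probe the table with %SYSFUNC(OPEN()) first, and only issue LOCK once the probe succeeds, so failed attempts don't spill ERROR lines into the log. This is only a sketch under assumptions (the macro name, retry counts, and the exact SLEEP signature on your release would need checking; there is also an unavoidable race between the probe and the LOCK, so a stray failure is still possible):

```sas
%macro try_lock(ds, maxtries=10, waitsec=2);
   %local i dsid rc;
   %do i = 1 %to &maxtries;
      /* Probe: OPEN returns 0 if the table cannot be opened,
         without the hard ERROR that a failed LOCK writes.    */
      %let dsid = %sysfunc(open(&ds, U));
      %if &dsid > 0 %then %do;
         %let rc = %sysfunc(close(&dsid));
         /* Probe succeeded, so the LOCK is likely to succeed too.
            (A race with another process is still possible here.) */
         lock &ds;
         %return;
      %end;
      /* Table busy: wait before retrying */
      %let rc = %sysfunc(sleep(&waitsec, 1));
   %end;
%mend try_lock;
```

Even with this, you may want the calling step to distinguish "lock never acquired" from a real failure, e.g. by setting a status macro variable instead of relying on the log.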