Hi. Thanks for the response. Let me give some more detail.

We use IRM for flows that we built ourselves, related to solutions outside of what SAS provides. That is, we have custom processing units, written in the SAS language, that exchange data with the solution packages delivered by SAS, so it makes sense to run those custom units in IRM. For these custom units we create the flows from scratch and modify them as needed.

Let's call our new custom processing unit in IRM "XX". What happens is the following:

1. We build the XX flow with all the required nodes and headers.
2. We restart the IRM server so the flow is picked up by the system.
3. We test it and it runs like a charm: we can create instances and they run to completion.

Then the next month comes. We need a live ETL to refresh the entity table as well as the links to the base date, and we run it, say, on the 1st of the month. The data for flow XX, however, is not yet there on the 1st; it arrives on the 6th calendar day, and we save it into the landing area. At that point we want to start the XX instance with the SAS-provided macro irm_rest_create_jobflow_instance(...XX...). The macro fails with "Input data is not there". Bang.

The whole point is that on the 6th calendar day, when we want to start the instance, the input data IS there. However, during the last live ETL, which happened on the 1st calendar day, the input data WAS NOT there. So it looks as if the live ETL checks data availability and keeps track of which flows can be run (data is there) and which cannot (data is not there). That information seems to be written 'in stone' somewhere in the Postgres database, so we cannot start instances whose data WAS NOT there at ETL time. As soon as we perform a new live ETL, we can start the instance.

So, I would like to find a parameter somewhere in the IRM system files, something like

check_data_availability_in_Live_ETL = Y

so that I can turn it off:

check_data_availability_in_Live_ETL = N

or something similar.
Best regards and thanks for replying.