01-14-2014 06:59 AM
I have created a complex job which branches out to around 10 flows. It starts with a SQL read node which is reading data from a SQL Server database. The table from which data is being read contains around 3.2 million records. A filter has been added and around 1.6 million records are being written. After reading, the data is being processed for quality.
All this is taking a lot of time: approximately 20 minutes to read the data and around 4 hours to process it and complete the job.
Is there any way DataFlux can be configured to read and process data much faster? Are there any settings that can be changed to fine-tune the jobs?
I have attached a screenshot to illustrate the time the job is taking.
01-14-2014 09:30 AM
Can you provide a sample of the logic you are executing following the branch node?
Are you doing any filtering in the 'Add GAR...' expression node? If so, add that logic to the Data Input node instead.
What is being done in the embedded jobs?
Give us some additional details and we can help you further.
Performance can also be affected by the hardware capabilities of the machine, so that should be a consideration as well.
01-15-2014 01:02 AM
Thanks a lot for the reply.
Following the branch node, I am just adding a few columns and passing some values to those columns. In the "Add GAR.." expression node, I am adding a column; no filtering there.
In the embedded jobs, I am using a data job to check for a certain pattern using if/else statements:
if (ascii(x) == 49 or ascii(x) == 50 . . . . . . or ascii(x) == 57)
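The chained `ascii()` comparisons above test whether `x` is one of the digit characters '1' through '9' (ASCII codes 49 to 57), so a single range comparison does the same work in one step; a similar collapse (e.g. `ascii(x) >= 49 and ascii(x) <= 57`) should work in the Expression Engine Language. Here is a minimal Python sketch of the equivalent logic; the function name `is_nonzero_digit` is my own illustration, not part of the DataFlux job:

```python
def is_nonzero_digit(x: str) -> bool:
    """Equivalent of chaining ascii(x) == 49 ... ascii(x) == 57:
    true only when x is a single character in '1'..'9'."""
    return len(x) == 1 and 49 <= ord(x) <= 57  # ord('1') == 49, ord('9') == 57

print(is_nonzero_digit("5"))  # True
print(is_nonzero_digit("0"))  # False: ASCII 48 is outside the range
print(is_nonzero_digit("a"))  # False
```

The range check is evaluated once per record rather than up to nine times, which matters when the condition runs millions of times.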
01-15-2014 04:17 PM
Is the "Lookup map i..." node exactly the same, just repeated 14 times? If so, can you add that expression logic into the "Add GAR c..." node and put the branch after it? As it stands, the job takes all 3.2 million records and evaluates the "Lookup map i..." logic 14 times: 3.2M records x 14 branches = 44.8 million record evaluations. Moving the "Lookup map i..." logic into the "Add GAR c..." node removes that overhead. You can also combine some of the "Global Accou..." and "Accounting Re..." nodes and add branches after those to reduce processing.
01-16-2014 03:31 AM
I have put the "Lookup Map..." node in front of the branch, and it is executing a bit faster. Is there anything else that can be done, such as changing any of the advanced properties or the configuration files?
01-16-2014 08:24 AM
The Branch node has a memory cache size option that you can increase, which could help. Which node are you using to output data? If you are writing to a database, you can change the commit frequency in the node's options to commit every 100,000 records (for example) instead of committing every row, which is the default.
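The effect of commit frequency is easy to demonstrate outside DataFlux. The sketch below is plain Python using the standard-library `sqlite3` module, purely to illustrate the idea behind raising the commit interval; it is not DataFlux/EEL code, and the table and function names are my own:

```python
import sqlite3

def load_rows(rows, commit_every):
    """Insert rows into an in-memory table, committing every `commit_every` rows.
    Returns (row_count, number_of_commits)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
    commits = 0
    for i, row in enumerate(rows, start=1):
        conn.execute("INSERT INTO t VALUES (?, ?)", row)
        if i % commit_every == 0:
            conn.commit()
            commits += 1
    conn.commit()  # final commit flushes any remainder
    commits += 1
    count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    conn.close()
    return count, commits

rows = [(i, "x") for i in range(1000)]
print(load_rows(rows, commit_every=1))    # (1000, 1001): one transaction per row
print(load_rows(rows, commit_every=250))  # (1000, 5): far fewer transactions
```

Each commit forces the database to flush the transaction, so cutting commits from one per row to one per large batch removes most of that per-row overhead; the same reasoning applies to the commit-frequency option in the output node.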
01-16-2014 09:26 AM
Will definitely try increasing the branch memory cache size option. Will also change the commit frequency and check.
BTW, for our job, we are writing to the DB using an expression node, using the Expression Engine Language and DSNs to write the data.
01-16-2014 11:23 AM
The Expression DSN may be opening and closing the cursor for each record you wish to write to the database. This could definitely be your bottleneck.