I have a job which reads from a database table TABLE-A, joins it with a dataset, and then updates the same table TABLE-A. TABLE-A is a DB2 table on mainframe z/OS. The job runs fine in the dev region with 600k records. I am using the DB2 API stage for both reading from and writing to the table.
My question is: will this ever end up in a deadlock, since I am reading and writing to the same table in one job? I have heard of people facing deadlock issues when using the DB2 Enterprise stage this way in a single job.
I do not want to migrate the job to SIT, where there is a much larger amount of data, and have it hang there. So your valuable suggestions are needed.
Let's assume for the moment that there won't be a deadlock issue: why would you want a job that reads and updates the same table? Isn't it a best practice to separate each database operation (insert, delete, update, etc.) into its own job?
It is a best practice to separate database operations. But I thought DataStage's internal processing worked in such a way that the update only happens for rows that have already been read and transferred to the output link.
Anyway, I will just separate out the jobs. That would be the safer bet, I believe.
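The separated pattern discussed above can be sketched roughly as follows. This is only an illustration, not DataStage or DB2 code: it uses Python's sqlite3 as a stand-in database, and the table/column names (table_a, status) are hypothetical. The point is the shape: job 1 reads the rows and lands them in a staging dataset so no read cursor is left open against the table, then job 2 applies the updates from that staging data in a separate step.

```python
import sqlite3

# sqlite3 stands in for DB2 on z/OS purely for illustration;
# table_a and its columns are hypothetical names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_a (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany(
    "INSERT INTO table_a VALUES (?, ?)",
    [(i, "NEW") for i in range(5)],
)
conn.commit()

# "Job 1": read TABLE-A and land the keys in a staging dataset.
# fetchall() drains the cursor, so nothing is held open against the table.
staging = conn.execute(
    "SELECT id FROM table_a WHERE status = 'NEW'"
).fetchall()

# "Job 2": update TABLE-A from the staged keys in a separate step,
# committing at the end so locks are released promptly.
conn.executemany(
    "UPDATE table_a SET status = 'PROCESSED' WHERE id = ?", staging
)
conn.commit()

processed = conn.execute(
    "SELECT COUNT(*) FROM table_a WHERE status = 'PROCESSED'"
).fetchone()[0]
print(processed)  # all 5 staged rows are updated
```

In DataStage terms, the staging list plays the role of the intermediate dataset between the read job and the update job, so the reader and the updater never contend for locks on the same table at the same time.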