deadlock error

Post questions here related to DataStage Enterprise/PX Edition, for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

sush26283
Participant
Posts: 68
Joined: Thu May 20, 2010 11:55 am

deadlock error

Post by sush26283 »

I have a job. Source and target are both SQL Server:
source ---> transformer ---> target

When I run the job it aborts with the error message:

Transaction (Process ID 65) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

Sometimes it runs, writes some rows to the target, and then aborts; sometimes it aborts with no rows written at all. I am using UPSERT (update then insert) at the target.
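
For reference, my understanding is that update then insert issues two statements of roughly this shape per input row (dbo.TargetTable, key1 and col1 are placeholder names, not my real ones, and the ? marks are bind parameters):

    UPDATE dbo.TargetTable
    SET    col1 = ?
    WHERE  key1 = ?;

    -- executed only when the UPDATE matched no rows
    INSERT INTO dbo.TargetTable (key1, col1)
    VALUES (?, ?);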

Please give some insight.

Thanks
rajeshananthapalpu
Participant
Posts: 4
Joined: Mon Oct 19, 2009 2:57 am

Re: deadlock error

Post by rajeshananthapalpu »

Re-partition before the target stage. You may have duplicates in your data...
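
A quick way to check whether the source really has duplicate keys (a minimal sketch; dbo.SourceTable and key1 are placeholder names):

    -- list key values that occur more than once in the source
    SELECT key1, COUNT(*) AS dup_count
    FROM   dbo.SourceTable
    GROUP  BY key1
    HAVING COUNT(*) > 1;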

Thanks
sush26283
Participant
Posts: 68
Joined: Thu May 20, 2010 11:55 am

Re: deadlock error

Post by sush26283 »

Thanks for the reply; I have been waiting for someone to help out.
Currently I am using Auto partitioning in the transformer, but I tried Hash, as suggested in one of the posts, and the job still aborts with the same error. I also tried disabling RCP, but no result. :(
BI-RMA
Premium Member
Posts: 463
Joined: Sun Nov 01, 2009 3:55 pm
Location: Hamburg

Post by BI-RMA »

I would expect that DataStage, if you didn't set it manually, set the array size and the commit level of the target stage to 1, because this is necessary for insert/update operations.

So I would not expect the deadlock to be caused by two rows being written by this same job. There may be uncommitted write operations by other processes, though, that do not allow you to update your target table. This could be the result of an aborted job, but it may also be completely unrelated to any activities within DataStage.

Check with your DBA to find out whether there are any locks on the target table.
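
If you want to take a first look yourself, here is a minimal sketch against sys.dm_tran_locks (dbo.TargetTable is a placeholder for your table name):

    -- run in the target database; shows table-level (OBJECT) locks only
    SELECT request_session_id, resource_type, request_mode, request_status
    FROM   sys.dm_tran_locks
    WHERE  resource_database_id = DB_ID()
      AND  resource_type = 'OBJECT'
      AND  resource_associated_entity_id = OBJECT_ID('dbo.TargetTable');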

Just to be certain: replace the target stage with a Peek to see the values that are being written out to it. Make sure your partitioning corresponds correctly to the update keys of your target stage.
"It is not the lucky ones are grateful.
There are the grateful those are happy." Francis Bacon
jwiles
Premium Member
Posts: 1274
Joined: Sun Nov 14, 2004 8:50 pm

Post by jwiles »

As the error indicates, there is a deadlock (conflict) with another process which is updating the same block of data within the database. This usually indicates that your data is not partitioned properly.

Have you partitioned (and optionally sorted) your data on the table primary key?
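
If the partitioning looks right, you can also capture the deadlock details on the SQL Server side, for example with trace flag 1222, which writes deadlock graphs to the error log (needs sysadmin rights; just one option among several):

    -- enable deadlock graph output in the SQL Server error log, server-wide
    DBCC TRACEON (1222, -1);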

Regards,
- james wiles


All generalizations are false, including this one - Mark Twain.
sush26283
Participant
Posts: 68
Joined: Thu May 20, 2010 11:55 am

Post by sush26283 »

I tried partitioning with Hash, but the job failed... but moments ago I ran the job in sequential mode and it ran successfully, with no warnings or errors.
sush26283
Participant
Posts: 68
Joined: Thu May 20, 2010 11:55 am

Post by sush26283 »

I tried again with Hash and sort, but the job still fails; this time it failed at the last step, after reading all the data.
pandeesh
Premium Member
Posts: 1399
Joined: Sun Oct 24, 2010 5:15 am
Location: CHENNAI, TAMIL NADU

Post by pandeesh »

With hash partitioning, try to Peek your data so that you can identify what the problem is.
pandeeswaran
sush26283
Participant
Posts: 68
Joined: Thu May 20, 2010 11:55 am

Post by sush26283 »

I replaced the output stage with the Peek stage; it shows all records were transferred from the transformer to the Peek stage, with no errors or warnings.
SURA
Premium Member
Posts: 1229
Joined: Sat Jul 14, 2007 5:16 am
Location: Sydney

Re: deadlock error

Post by SURA »

You will usually get this issue when you use the Write method; with upsert it is rare.

In the target stage, set Stage --> Advanced --> Execution mode to Sequential.

You will not get this issue again. Which versions of DataStage and SQL Server are you using?

DataStage 8.1 supports SQL Server 2005 or earlier, whereas DataStage 8.5 / 8.7 support SQL Server 2008 (not R2) or earlier.

DS User
Kryt0n
Participant
Posts: 584
Joined: Wed Jun 22, 2005 7:28 pm

Post by Kryt0n »

Have you got a suitable index matching the update key set?

I don't know much about SQL Server, but we had something similar on DB2 that appears to have been fixed by an index on the update keys. (I'm putting this down to table-scan locks, which I believe SQL Server also takes when no suitable index is available.)
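
Something along these lines, assuming the upsert updates by key1 and key2 (hypothetical names):

    -- hypothetical index on the update keys, so the UPDATE can seek
    -- instead of scanning (and locking its way through) the whole table
    CREATE NONCLUSTERED INDEX IX_TargetTable_UpdKeys
        ON dbo.TargetTable (key1, key2);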
prdpreddy
Participant
Posts: 3
Joined: Wed Sep 29, 2010 1:42 am
Location: Hyderabad

Deadlock cause

Post by prdpreddy »

I faced this issue and resolved it.

We get this error when multiple jobs are running and trying to write to the same table.
sush26283
Participant
Posts: 68
Joined: Thu May 20, 2010 11:55 am

Re: deadlock error

Post by sush26283 »

Hi Sura,
Thanks for the insight... I am using UPSERT (update then insert). As you advised, I changed the execution mode to Sequential in the target stage, keeping the transformer parallel. I ran the job and it was successful, but I still don't understand the reason for this occurrence.

I am using DataStage 8.5 and SQL Server 2008.
SURA
Premium Member
Posts: 1229
Joined: Sat Jul 14, 2007 5:16 am
Location: Sydney

Re: deadlock error

Post by SURA »

Earlier, you couldn't use more than one SQL Server EE stage in the same job to load data. IBM provided a fix for that in DS 8.5 and that is resolved, but this issue still exists, and the problem is that it may not happen on every run. It is a bug. If you can, please raise a PMR with IBM.

DS User
ds_avatar
Premium Member
Posts: 6
Joined: Sun Dec 11, 2011 8:51 pm

Post by ds_avatar »

We experienced the same error... the target table had been opened in edit mode (in Toad) by one of our team members, hence the records were not inserted into the table by the DS job. :)

Hopefully this isn't the issue in your case.
Thanks,
ds_avatar