Updating non partitioned table in Parallel with ODBC Stage

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

Post Reply
SwathiCh
Premium Member
Posts: 64
Joined: Mon Feb 08, 2010 7:17 pm

Updating non partitioned table in Parallel with ODBC Stage

Post by SwathiCh »

Hi All,

We are trying to update a table in SQL Server 2008 using the ODBC Connector/Enterprise stages with a 4-node config file. The job is aborting with a "Deadlock" message.

The table has a clustered unique index on the key combination I am using in the job's update statement.

The database is not partitioned, so the table and its index are not partitioned either.

The question is: can we update a non-partitioned table with a non-partitioned index in parallel, using multiple nodes, from DataStage?

Thanks,
SwathiCh
Premium Member
Posts: 64
Joined: Mon Feb 08, 2010 7:17 pm

Post by SwathiCh »

Looks like an issue with the ODBC Connector stage...!

The job runs fine in sequential mode, so we are OK for now.

Please post if anyone has a better idea for resolving this issue.
--
Swathi Ch
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Re: Updating non partitioned table in Parallel with ODBC Stage

Post by chulett »

SwathiCh wrote:Can we update a non partitioned table with non partitioned Index in parallel using multiple nodes from Datastage??
Can you do it outside of DataStage? I suspect the answer is no.
-craig

"You can never have too many knives" -- Logan Nine Fingers
qt_ky
Premium Member
Posts: 2895
Joined: Wed Aug 03, 2011 6:16 am
Location: USA

Post by qt_ky »

In the job design, did you try to hash partition that data based on the key combination?
Choose a job you love, and you will never have to work a day in your life. - Confucius
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Yah, I should have thrown that question out there as well, as it's the 'go to' question in situations like this. Guess the 'clustered' part threw me off a bit; I just went and read up on them and I don't think that plays a role.

Using the proper hash partitioning should keep the four nodes from throwing random updates at the table and causing the deadlocking you are seeing now. Let us know.
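To make the idea concrete: a hash partitioner maps each key combination deterministically to one node, so the same key is never updated from two sessions at once. Here is a minimal Python sketch of that mapping; the column names, sample rows, and 4-node count are assumptions for illustration, not DataStage internals:

```python
import hashlib

NUM_NODES = 4  # matches a 4-node config file

def partition_for(key_values):
    """Deterministically map a key combination to one node (0..NUM_NODES-1)."""
    key = "|".join(str(v) for v in key_values)
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_NODES

# Rows with the same key combination always land on the same node,
# so only one session ever updates a given key.
rows = [
    {"cust_id": 101, "order_id": 7},
    {"cust_id": 101, "order_id": 7},  # same key -> same node
    {"cust_id": 202, "order_id": 9},
]

for row in rows:
    node = partition_for((row["cust_id"], row["order_id"]))
    print(f"{row} -> node {node}")
```

Note that with a clustered index, rows with different keys can still share index pages, so page-level lock contention remains possible even when the keys themselves are cleanly partitioned.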
-craig

"You can never have too many knives" -- Logan Nine Fingers
SwathiCh
Premium Member
Posts: 64
Joined: Mon Feb 08, 2010 7:17 pm

Post by SwathiCh »

Hi Craig and qt_ky,

Thanks for your replies.

We tried proper partitioning, but it didn't help. We get the same issue with both hash and modulus partitioning.

We also collected the data from a Peek stage, partition by partition, and verified it. DataStage is writing the data to the partitions correctly, but somehow on the SQL Server end the same record is going into different sessions. We couldn't figure out whether this is an ODBC Connector stage issue or a SQL Server issue.
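One way to double-check the Peek output is to verify that no key combination shows up in more than one partition; a key duplicated across partitions would explain the same record reaching two SQL Server sessions. A quick sketch, assuming the Peek data has been captured per partition as lists of key tuples (the sample data below is made up):

```python
from collections import defaultdict

# Hypothetical key tuples captured per node from the Peek stage.
partitions = {
    0: [("A", 1), ("B", 2)],
    1: [("C", 3)],
    2: [("A", 1)],  # same key as node 0 -> two sessions would update it
    3: [],
}

seen = defaultdict(set)
for node, keys in partitions.items():
    for key in keys:
        seen[key].add(node)

# Any key mapped to more than one node indicates bad partitioning.
conflicts = {key: nodes for key, nodes in seen.items() if len(nodes) > 1}
print(conflicts)  # -> {('A', 1): {0, 2}}
```

If this kind of check comes back clean, the cross-session duplication is happening downstream of the partitioner, which would point at the ODBC Connector or the database side rather than the job's partitioning.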
--
Swathi Ch
Post Reply