An insert, update, or delete statement failed to run

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

patelamit009
Participant
Posts: 20
Joined: Fri Jan 27, 2006 12:17 am

An insert, update, or delete statement failed to run

Post by patelamit009 »

Hi All,

I am also facing this problem, with the fatal error mentioned below.

Code: Select all

An insert, update, or delete statement failed to run. (CC_DB2DBStatement::processRowStatusArray, file CC_DB2DBStatement.cpp, line 1,984).
I am using a DB2 Connector stage as the target and had set the write mode to Insert then Update. The curious thing is that the job works fine for small volumes and aborts for huge volumes (millions of rows).

Has anyone found the root cause of this issue?
Regards,
Patel
pandeesh
Premium Member
Posts: 1399
Joined: Sun Oct 24, 2010 5:15 am
Location: CHENNAI, TAMIL NADU

Post by pandeesh »

Could you post how you are configuring the connector stage?
Especially the array size.
Also, diagnose by checking for locks while the job is running.
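
For example, something along these lines will show who is waiting on whom (a sketch only, assuming DB2 LUW 9.7+ and access to the SYSIBMADM administrative views; your DBA may prefer a different monitoring tool):

Code: Select all

-- List applications currently waiting on locks while the job runs
SELECT * FROM SYSIBMADM.MON_LOCKWAITS;
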
pandeeswaran
patelamit009
Participant
Posts: 20
Joined: Fri Jan 27, 2006 12:17 am

Post by patelamit009 »

Hi Pandeesh,

Thanks for the quick reply.


My Connector stage settings are:
  • Write mode: Insert then Update
  • Record count: 5000
  • Isolation level: Cursor stability
  • Auto commit: On
  • Array size: 5000
  • Fail on size mismatch: Yes
  • Fail on type mismatch: Yes
  • Drop unmatched fields: Yes
  • Lock wait mode: Use the lock timeout database configuration parameter
I was not able to diagnose and check for locks while the job is running, because it aborts as soon as it starts. Could you please clarify if you meant something else?
Regards,
Patel
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

:!: Split to your own post.
-craig

"You can never have too many knives" -- Logan Nine Fingers
chandra.shekhar@tcs.com
Premium Member
Posts: 353
Joined: Mon Jan 17, 2011 5:03 am
Location: Mumbai, India

Post by chandra.shekhar@tcs.com »

Hi patelamit009,
The only solution we found was to run the job on a single node, or at least to write the data on one node only.
My DBA told me that while writing the data, multiple processes (active/inactive) try to write at the same time, which in turn leads to locking at the database level. We still haven't found the right solution; the only thing we are doing is writing the data on one node (using the node constraint property).
We are thinking of raising a PMR with IBM to get an answer as to why this happens only with some particular jobs/tables.
Thanx and Regards,
ETL User
chandra.shekhar@tcs.com
Premium Member
Posts: 353
Joined: Mon Jan 17, 2011 5:03 am
Location: Mumbai, India

Post by chandra.shekhar@tcs.com »

On second thoughts, try to run the job with auto commit as "off" and then let us know.
Thanx and Regards,
ETL User
Kryt0n
Participant
Posts: 584
Joined: Wed Jun 22, 2005 7:28 pm

Post by Kryt0n »

Make sure there is a suitable index based on the keys you use to determine an update... at least, that is what helped us. Since you are doing an insert then update, you would hope an index already exists...

If the insert/update can use an index, it is less likely to lock rows that a process on another partition wants to update.
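
As a sketch only (schema, table, and key column names below are placeholders for whatever your update keys actually are):

Code: Select all

-- Hypothetical index covering the columns used in the connector's UPDATE ... WHERE clause
CREATE INDEX MYSCHEMA.TARGET_TBL_UPD_IX
    ON MYSCHEMA.TARGET_TBL (KEY_COL1, KEY_COL2);
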
sri_vin
Premium Member
Posts: 20
Joined: Wed Aug 25, 2010 10:58 pm

Post by sri_vin »

1. First, check for record locks.
2. Second, turn the score dump on and check where it is failing.
3. If it fails as soon as the job starts, reduce the number of nodes.

Hope it helps.
SURA
Premium Member
Posts: 1229
Joined: Sat Jul 14, 2007 5:16 am
Location: Sydney

Post by SURA »

Ensure that you selected the KEY columns for Update / Insert.

DS User
patelamit009
Participant
Posts: 20
Joined: Fri Jan 27, 2006 12:17 am

Post by patelamit009 »

Hi All,

Thanks for the valuable replies. I forgot to mention that the table is on DB2 for z/OS, cataloged in the DB2 environment, and I am loading it using the DB2 Connector stage.

The fatal error mentioned is raised by the DB2 Connector stage while loading the mainframe table. I have tried all the options discussed above with no luck; the job aborts with the same error mentioned at the beginning of the post.

A few observations:

1. Primary keys are defined on the table.
2. The job still aborts when Auto commit is set to Off.
3. I have executed it on a single node, in both sequential and parallel mode, with no luck.


As per the IBM documentation,
Insert then update: Writes data to the target table; runs the INSERT statement first, and if the INSERT fails with a duplicate key violation, runs the UPDATE statement.
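
In plain SQL terms, that behaviour amounts to roughly the following for each input row (a sketch only; the table and column names are placeholders, and the connector actually binds parameter arrays rather than building literal statements):

Code: Select all

-- Attempted first for every row
INSERT INTO MYSCHEMA.TARGET_TBL (KEY_COL, COL_A) VALUES (?, ?);

-- Expected fallback when the INSERT fails with SQLSTATE 23505 (duplicate key)
UPDATE MYSCHEMA.TARGET_TBL SET COL_A = ? WHERE KEY_COL = ?;
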
But in my case, when the INSERT statement fails, the UPDATE statement is not executed and hence the job aborts.

Can anyone please share further updates on this?

Thanks in advance.
Regards,
Patel
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

WHY is your insert statement failing? Is it for a... duplicate key violation?
-craig

"You can never have too many knives" -- Logan Nine Fingers
qt_ky
Premium Member
Posts: 2895
Joined: Wed Aug 03, 2011 6:16 am
Location: USA

Post by qt_ky »

With the "Insert then Update" setting you would need to set the commit count (record count) to 1. You have it set to 5000 so that may be causing problems, unless the connector is smart enough to override that setting to 1 in the background w/o telling you, which it may or may not be doing.

Set your record count to 1 and see if it works. If it does, then you may come to know the root cause.

I would not run millions of inserts and updates into DB2/z using the "Insert then Update" setting unless it's always a low volume of records. For better performance, split the two and run an insert job then an update job with large commit counts (record counts).
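
In rough SQL terms, the split amounts to something like this, where STAGE_TBL stands in for the job's input stream and every name is a placeholder (a sketch only; in DataStage you would implement it as two jobs with write modes of Insert and Update rather than hand-written SQL):

Code: Select all

-- Job 1: insert only the rows whose keys are not already in the target
INSERT INTO MYSCHEMA.TARGET_TBL (KEY_COL, COL_A)
SELECT S.KEY_COL, S.COL_A
  FROM MYSCHEMA.STAGE_TBL S
 WHERE NOT EXISTS (SELECT 1
                     FROM MYSCHEMA.TARGET_TBL T
                    WHERE T.KEY_COL = S.KEY_COL);

-- Job 2: update only the rows that already existed
UPDATE MYSCHEMA.TARGET_TBL T
   SET COL_A = (SELECT S.COL_A
                  FROM MYSCHEMA.STAGE_TBL S
                 WHERE S.KEY_COL = T.KEY_COL)
 WHERE EXISTS (SELECT 1
                 FROM MYSCHEMA.STAGE_TBL S
                WHERE S.KEY_COL = T.KEY_COL);
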
Choose a job you love, and you will never have to work a day in your life. - Confucius
chandra.shekhar@tcs.com
Premium Member
Posts: 353
Joined: Mon Jan 17, 2011 5:03 am
Location: Mumbai, India

Post by chandra.shekhar@tcs.com »

Did you try running the job with auto commit off?
Or running the job on a single node?
Thanx and Regards,
ETL User
patelamit009
Participant
Posts: 20
Joined: Fri Jan 27, 2006 12:17 am

Post by patelamit009 »

Thanks Eric. The job certainly worked after setting the record count to 1. So my assumption about the issue would be that the DB2 Connector requires each record to be sent in its own array when the write mode is "Insert then Update" against a z/OS table. (Here I set the array size to 1 as well, since it depends on the record count.)

Also, for performance reasons I have taken your suggestion to split the Insert and the Update into two different jobs, and it is working as expected.

Hi Chandra, I had executed the job with auto commit off and also on a single node, but neither was successful. Please have a look at my previous post. Thanks.
Regards,
Patel
qt_ky
Premium Member
Posts: 2895
Joined: Wed Aug 03, 2011 6:16 am
Location: USA

Post by qt_ky »

That's good. Problem resolved now?
Choose a job you love, and you will never have to work a day in your life. - Confucius