DB2 Enterprise Edition Stage - Max Log issue

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

ysrini9
Participant
Posts: 108
Joined: Tue Jul 12, 2005 2:51 am

DB2 Enterprise Edition Stage - Max Log issue

Post by ysrini9 »

Hi,

The job cleans old data out of a staging table for a given month and year, i.e. it deletes the records from the target DB2 table (record count: 7.5 lakhs, roughly 750,000 rows).
Write method: Delete rows
Delete rows mode: Auto-generated delete
Fatal errors:
1. tblCCMIPreStagdel,0: ERROR during execution of DB2-CLI requests: SQL_ERROR; SQLSTATE = 40003; NativeErrorCode = -1224; Message = [IBM][CLI Driver] SQL1224N The database manager is not able to accept new requests, has terminated all requests in progress, or has terminated the specified request because of an error or a forced interrupt. SQLSTATE=55032.
2. tblCCMIPreStagdel,0: Failed to execute a DB2 statement; examine the printed SQL error.
3. tblCCMIPreStagdel,0: ERROR during execution of DB2-CLI requests: SQL_ERROR; SQLSTATE = 08003; NativeErrorCode = -99999; Message = [IBM][CLI Driver] CLI0106E Connection is closed. SQLSTATE=08003
4. tblCCMIPreStagdel,0: Rollback failed.
---
Note: we tested with different array sizes and row commit intervals.
Please advise.
Thanks in advance.
Srini Y
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

The database has run out of space to (temporarily) store the records in your delete transaction. Either define a smaller transaction size or have your DBA increase the rollback space.

What commit sizes did you experiment with? Which stage are you using on which release?
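The "smaller transaction size" idea can be sketched as follows. This is a minimal Python illustration using SQLite as a stand-in (the thread's DB2 server is obviously not available here, and the table and column names are made up): each chunk is its own transaction, so the transaction log only ever has to hold `commit_size` uncommitted deletes instead of the whole 7.5-lakh delete.

```python
import sqlite3

# Hypothetical stand-in for the staging table in the thread, using SQLite
# so the pattern is runnable. The principle is the same in DB2: bound the
# number of uncommitted deletes per transaction.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (id INTEGER PRIMARY KEY, month INTEGER, year INTEGER)")
conn.executemany(
    "INSERT INTO staging (month, year) VALUES (?, ?)",
    [(7, 2011)] * 10_000,  # small stand-in for the ~750,000 rows
)
conn.commit()

def chunked_delete(conn, month, year, commit_size=2_000):
    """Delete matching rows in bounded transactions instead of one huge one."""
    deleted = 0
    while True:
        cur = conn.execute(
            "DELETE FROM staging WHERE rowid IN "
            "(SELECT rowid FROM staging WHERE month = ? AND year = ? LIMIT ?)",
            (month, year, commit_size),
        )
        conn.commit()  # release log space after every chunk
        if cur.rowcount == 0:
            break
        deleted += cur.rowcount
    return deleted

total = chunked_delete(conn, 7, 2011)
```

In DB2 the equivalent server-side trick would use `FETCH FIRST n ROWS ONLY` in the subselect; the point is only that no single transaction ever spans the full table.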
ysrini9
Participant
Posts: 108
Joined: Tue Jul 12, 2005 2:51 am

DB2 Enterprise Edition Stage - Max Log issue

Post by ysrini9 »

Thanks for your response. We tried:
1. array size: 1000, commit size: 2000
2. array size: 10000, commit size: 20000
3. array size: 5000, commit size: 10000
4. array size: 10000, commit size: 10000
5. array size: 100, commit size: 200

DB2_UDB_Enterprise stage (DB2 version 9.5)
DataStage 8.1
ysrini9
Participant
Posts: 108
Joined: Tue Jul 12, 2005 2:51 am

Re: DB2 Enterprise Edition Stage - Max Log issue

Post by ysrini9 »

For this issue, we have a script that does the delete for a given month and year, and it works fine from the command line. We get the unique month/year values from the month and year columns of the source file.
How can I call the same script from the DataStage job?
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

What does your DELETE statement look like? Basically, if the delete is only called once, i.e. one input row driving a statement like "DELETE FROM <table> WHERE <condition>;", then it might burst the rollback space regardless of the stage settings, and that is what appears to be happening. This is a typical problem with DB2 when trying to truncate a table (there used to be no "truncate" command; one needed to execute a tricky "LOAD FROM /dev/null OF DEL REPLACE INTO <table>").
If performance is not the highest priority, you could build a job where you SELECT all the keys of the records to be deleted in one stage, then call the DB2 stage in DELETE mode - in that case your settings for commit frequency and array size will actually be used.
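The job shape suggested above can be sketched in Python, again with SQLite standing in for DB2 and made-up table/column names: one "stage" SELECTs only the keys of the rows to remove, a second "stage" deletes them key by key, committing every `commit_size` rows, which is exactly the situation where the commit frequency and array size settings take effect.

```python
import sqlite3

# Stand-in staging table (hypothetical schema, SQLite instead of DB2).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (id INTEGER PRIMARY KEY, month INTEGER, year INTEGER)")
conn.executemany("INSERT INTO staging (month, year) VALUES (?, ?)", [(7, 2011)] * 5_000)
conn.commit()

# "SELECT stage": fetch only the keys of the rows to be removed.
keys = [row[0] for row in conn.execute(
    "SELECT id FROM staging WHERE month = ? AND year = ?", (7, 2011))]

# "DELETE stage": one parameterized delete per key, committed in batches,
# so no single transaction has to hold all the deletes in the log.
commit_size = 1_000
for i, key in enumerate(keys, start=1):
    conn.execute("DELETE FROM staging WHERE id = ?", (key,))
    if i % commit_size == 0:
        conn.commit()
conn.commit()
```

This is slower than one bulk DELETE, which is the trade-off the post describes: bounded log usage in exchange for per-row statements.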
MT
Premium Member
Posts: 198
Joined: Fri Mar 09, 2007 3:51 am

Re: DB2 Enterprise Edition Stage - Max Log issue

Post by MT »

Hi ysrini9


SQL1224N The database manager is not able to accept new requests, has terminated all requests in progress, or has terminated the specified
request because of an error or a forced interrupt. SQLSTATE=55032.

If you read the message carefully, it says that your connection has been terminated, i.e. forced off by an administrator.
So I would suggest you talk to your DBA first.

It can be a log space problem - which is a common one for deletes - but it can also be a WLM rule that has been violated.

For this kind of delete you could also talk to your DBA about using MDC (multidimensional clustering) tables, which have very good delete performance and reduced logging. I like them for deleting a large number of rows from a huge table.
regards

Michael