Job does not progress after a certain percent...

Post questions here relevant to DataStage Enterprise/PX Edition for areas such as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

ds_search2008
Participant
Posts: 75
Joined: Thu Nov 27, 2008 10:12 am

Job does not progress after a certain percent...

Post by ds_search2008 »

I searched this forum at length and tried every suggestion I could find, but the job stays in a running state for a very long time. The progress percentage in the log does not move past 40 percent, and there are no error or warning messages in the log file.

It's a simple job that involves only two stages: the source is a DB2 stage and the target is also a DB2 stage. When I write the source DB2 stage to a Data Set instead, it works fine, but from DB2 to DB2 the data does not get loaded. The log shows no error messages and the job link is blue/green. I can see rows passing (initially 102 rows, then 198 rows...), yet the target table has no data.

I increased the transaction size and array size gradually from 1 up to 40000. Can you please share your ideas on this issue?

Many Thanks
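
A quick way to test whether rows are arriving but never being committed, which would match these symptoms, is to count rows under uncommitted read while the job is running. A minimal sketch, with MYSCHEMA.TARGET_TBL as a placeholder for the real target:

Code: Select all

-- Run both counts while the job shows rows passing.
-- WITH UR also sees uncommitted inserts; the plain count sees only committed rows.
SELECT COUNT(*) FROM MYSCHEMA.TARGET_TBL WITH UR;
SELECT COUNT(*) FROM MYSCHEMA.TARGET_TBL;

If the first count grows while the second stays at zero, the job is inserting rows but never committing them, or is blocked before the commit.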
Pagadrai
Participant
Posts: 111
Joined: Fri Dec 31, 2004 1:16 am
Location: Chennai

Re: Job does not progress after a certain percent...

Post by Pagadrai »

ds_search2008 wrote: It's a simple job that involves only two stages: the source is a DB2 stage and the target is also a DB2 stage.
Hi,
I understand that DataStage is not doing any validation or transformation of the records here; you are picking records from one DB2 table and inserting/updating them into another DB2 table.

I think writing a database procedure would be useful in this case.
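
A minimal sketch of what such a procedure might look like in DB2 SQL PL, assuming a straight copy with no transformation (the schema, table, and procedure names are placeholders):

Code: Select all

-- Placeholder names throughout; assumes source and target share the same column layout.
-- Create with an alternate statement terminator, e.g.: db2 -td@ -f copy_proc.sql
CREATE PROCEDURE MYSCHEMA.COPY_SOURCE_TO_TARGET ()
LANGUAGE SQL
BEGIN
  INSERT INTO MYSCHEMA.TARGET_TBL
    SELECT * FROM MYSCHEMA.SOURCE_TBL;
  COMMIT;
END@

It can then be invoked with CALL MYSCHEMA.COPY_SOURCE_TO_TARGET().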
v2kmadhav
Premium Member
Posts: 78
Joined: Fri May 26, 2006 7:31 am
Location: London

Post by v2kmadhav »

Does it work any better with an active stage between the two read and write processes?
ds_search2008
Participant
Posts: 75
Joined: Thu Nov 27, 2008 10:12 am

Post by ds_search2008 »

Thanks both of you for your replies.

v2kmadhav

v2kmadhav wrote: Does it work any better with an active stage between the two read and write processes?
I tried this possibility too, but the job just stays in a running state. :cry: Please help.
v2kmadhav
Premium Member
Posts: 78
Joined: Fri May 26, 2006 7:31 am
Location: London

Post by v2kmadhav »

Which stage are you using? API? When you set the array size to 1, do you still not see records in your table? You said it passed 102 records?

Can you try the EE stage to see if it makes a difference?

How big is that target table?

I hope you are not reading and writing to the same table :D

All the best.
ds_search2008
Participant
Posts: 75
Joined: Thu Nov 27, 2008 10:12 am

Post by ds_search2008 »

I'm using the DB2 API stage. I tried different array sizes, starting from 1. The performance statistics show 102 records passing to the target, yet the back-end table has no rows. The source and target are different tables.

Everything seems to be set up correctly. I wonder where the problem is...
v2kmadhav
Premium Member
Posts: 78
Joined: Fri May 26, 2006 7:31 am
Location: London

Post by v2kmadhav »

Can you try running the insert/update as a script outside DataStage, in DB2 Administrator or any other tool you have?

Does it work fine there?
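
For example, from the DB2 command line (the database, schema, and table names below are placeholders for whatever the job actually uses):

Code: Select all

-- Placeholder names throughout; run from a shell on the DataStage server.
db2 connect to TGTDB
db2 "INSERT INTO MYSCHEMA.TARGET_TBL SELECT * FROM MYSCHEMA.SOURCE_TBL"
db2 "SELECT COUNT(*) FROM MYSCHEMA.TARGET_TBL"

If this completes quickly and the count is correct, the table itself is writable and the problem lies in the job or the connection it uses.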
ds_search2008
Participant
Posts: 75
Joined: Thu Nov 27, 2008 10:12 am

Post by ds_search2008 »

Thanks v2kmadhav.

Yes, I can insert/update the target table successfully through a DB2 tool, i.e., outside DataStage. This proves there are no locks on the table. I wonder why we face this issue only with DataStage.
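
Lock waits can also be confirmed directly while the job is hanging, using standard DB2 CLP commands (TGTDB is a placeholder for the target database):

Code: Select all

-- Shows every connected application and its status (look for "Lock-wait").
db2 list applications show detail
-- Detailed lock information for the database; needs suitable authority.
db2 get snapshot for locks on TGTDB

A DataStage session sitting in Lock-wait here would explain rows that pass on the link but never appear in the table.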
nani0907
Participant
Posts: 155
Joined: Wed Apr 18, 2007 10:30 am

Post by nani0907 »

Hello,

Have you checked whether any triggers are set on that particular table to commit after each row?

Thanks and regards,
nani
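
One way to check for triggers is to query the DB2 system catalog (the schema and table names are placeholders):

Code: Select all

-- Lists any triggers defined on the target table; placeholder names.
SELECT TRIGNAME, TRIGTIME, TRIGEVENT, VALID
FROM   SYSCAT.TRIGGERS
WHERE  TABSCHEMA = 'MYSCHEMA'
  AND  TABNAME   = 'TARGET_TBL';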
sima79
Premium Member
Posts: 38
Joined: Mon Jul 16, 2007 8:12 am
Location: Melbourne, Australia

Post by sima79 »

What is your transaction isolation setting in the API stage? Can you post your SQL for both source and target stages as well?
ds_search2008
Participant
Posts: 75
Joined: Thu Nov 27, 2008 10:12 am

Post by ds_search2008 »

Thanks for the replies.

No triggers are set on any of the tables. For testing, the target table was made very simple, with no keys or NOT NULL columns. We still face this issue.

sima79 wrote: What is your transaction isolation setting in the API stage?
Transaction size = 1; array size = 1.

Sima, sorry if I misunderstood your question.
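
For clarity, since the two settings are easy to mix up: transaction size controls how many rows are written per COMMIT, while the isolation level controls what a reader or writer can see and lock. A rough illustration in DB2 SQL (the test table is a placeholder):

Code: Select all

-- A transaction size of 2 means the stage commits once every 2 rows, conceptually:
INSERT INTO MYSCHEMA.TEST_TBL VALUES (1);
INSERT INTO MYSCHEMA.TEST_TBL VALUES (2);
COMMIT;

-- Isolation level is a separate knob; for a session it can be changed with:
SET CURRENT ISOLATION = UR;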
v2kmadhav
Premium Member
Posts: 78
Joined: Fri May 26, 2006 7:31 am
Location: London

Post by v2kmadhav »

How many inserts are there, and how many records in your target table are going to be updated?

Did this job ever run faster before? Has there been any considerable change to DataStage, the table, or the database? I'm sure you have looked at indexes too...

Did you try using the EE stage?

Are the target table and source table on the same database? If they are different, how are they cataloged on your machine? If your target table is on a remote machine, what is the bandwidth of that line?
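
The cataloging can be verified from the DataStage server with standard DB2 CLP commands:

Code: Select all

-- Shows how each database alias is cataloged (local vs. remote node).
db2 list database directory
-- Shows the host and service/port behind each remote node entry.
db2 list node directory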
ds_search2008
Participant
Posts: 75
Joined: Thu Nov 27, 2008 10:12 am

Post by ds_search2008 »

Thanks v2kmadhav.


There are around 2000 records (we started with test data to check the load). I tried loading field by field and found a slight improvement: the progress percentage increased to 90%. However, the data is still not loaded at the back end. I varied the transaction size and array size starting from 1. There are no indexes on the target table.

Yes, we have tried EE. We face the same issue.

To isolate the issue we now use a Sequential File stage as the source instead of the table, so there shouldn't be any issue on the source side. I also tried adding an active stage between source and target; even that is not working.

:cry: I have no idea what is wrong with this job. Nothing about it is positive. :cry:
v2kmadhav
Premium Member
Posts: 78
Joined: Fri May 26, 2006 7:31 am
Location: London

Post by v2kmadhav »

So what about the network, then?

You have checked that writing the data from
db2 table ----> dataset or file is fast enough.

How fast is
dataset or file -----> db2 target table?

Again, are those two tables on the same database? Did you check the cataloging? Is that fine?
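
One way to time that second leg outside DataStage (the file, database, and table names are placeholders) is a plain CLP import of the extracted data:

Code: Select all

db2 connect to TGTDB
-- Placeholder names; times a straight load of the delimited extract.
time db2 "IMPORT FROM test_data.del OF DEL INSERT INTO MYSCHEMA.TARGET_TBL"

If this is slow too, the bottleneck is the database or the network rather than the job design.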
sima79
Premium Member
Premium Member
Posts: 38
Joined: Mon Jul 16, 2007 8:12 am
Location: Melbourne, Australia

Post by sima79 »

Your transaction isolation setting is in your source DB2 stage. Try setting it (if it is not already set) to "Uncommitted Read".
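
In SQL terms that corresponds to adding WITH UR to the generated source query; a sketch with placeholder names:

Code: Select all

-- Uncommitted read on the source avoids waiting on locks held by other transactions.
SELECT * FROM MYSCHEMA.SOURCE_TBL FOR READ ONLY WITH UR;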