Hi,
I have a requirement in which there are about 4,000 records in one file and 5,212 records in another file, and I'm using a Merge stage to merge these two files on one ID. But it is taking a long time to load; in fact, DataStage hangs in the middle without giving even one error or warning.
I would like to know how to tune this job to run faster?
OR
How can I do performance tuning in DataStage (Parallel)?
I'm using Sequential File, Transformer, one Merge, and ODBC stages in my job.
Thanks in advance,
How can we do Performance tuning?
Thanks and regards,
Jaleel
You need to get the job to run through before you can attempt to tune it, but with 4,000 and 5,212 records you should be finished in less than a second - unless your ODBC connection is slowing things down. What is your speed reading from ODBC into a Peek stage or a Data Set? That is going to be your speed-limiting factor.
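The isolation test suggested above - read the ODBC source into a Peek and nothing else, and see how many rows per second you get - can be sketched generically. This is an illustration, not DataStage syntax; `rows_per_second` is a made-up helper, and the dummy generator stands in for a real ODBC cursor:

```python
import time

def rows_per_second(row_iter):
    """Time how long it takes to drain a row source and report throughput.
    In DataStage terms: read the source into a Peek/Copy stage and do
    nothing else, so the read itself is all you are measuring."""
    start = time.perf_counter()
    count = sum(1 for _ in row_iter)
    elapsed = time.perf_counter() - start
    return count, count / elapsed if elapsed > 0 else float("inf")

# Stand-in for an ODBC cursor: any iterable of rows works.
dummy_rows = ({"id": i} for i in range(5212))
count, rate = rows_per_second(dummy_rows)
```

If the raw read of 5,212 rows is itself slow, no amount of tuning downstream of the ODBC stage will help.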
With such a small number of rows I'd be inclined to use a server job with a Merge stage. If you must use a parallel job, be aware that the Join and Merge stages both require sorted input (added overhead). A Lookup stage would be preferable.
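The trade-off above can be illustrated in a minimal Python sketch (not DataStage code; the function names and data are invented for the example): a sort-merge join must first sort both inputs on the key, while a hash-based lookup just builds an in-memory table from the small reference input and probes it once per stream row.

```python
def merge_join(left, right, key):
    """Sort-merge join: both inputs must be sorted on the key first -
    that sort is the added overhead the Join/Merge stages incur."""
    left = sorted(left, key=lambda r: r[key])
    right = sorted(right, key=lambda r: r[key])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][key], right[j][key]
        if lk == rk:
            out.append({**left[i], **right[j]})
            i += 1
            j += 1
        elif lk < rk:
            i += 1
        else:
            j += 1
    return out

def hash_lookup(stream, reference, key):
    """Lookup: build a hash table of the (small) reference data,
    then probe it per stream row - no sorting required."""
    table = {r[key]: r for r in reference}
    return [{**row, **table[row[key]]} for row in stream if row[key] in table]

stream = [{"id": i, "a": i * 2} for i in (3, 1, 2)]
reference = [{"id": i, "b": i * 10} for i in (2, 3, 1)]

# For a one-to-one key, both approaches produce the same joined rows.
joined = sorted(hash_lookup(stream, reference, "id"), key=lambda r: r["id"])
```

With only a few thousand reference rows the hash table fits comfortably in memory, which is why a Lookup stage is the natural fit here.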
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.