
How can we do Performance tuning?

Posted: Mon Sep 04, 2006 6:55 am
by Jaleel
Hi,

I have a requirement in which there are about 4000 records in one file and 5212 records in another file, and I'm using a Merge stage to merge these two files on one ID. But it is taking a long time to load; in fact, DataStage gets stuck in the middle without giving even one error or warning.

I would like to know how to tune this job to run faster?
OR
How can I do performance tuning in DataStage (Parallel)?

I'm using Sequential File, Transformer, one Merge, and ODBC stages in my job.

Thanks in advance,

Posted: Mon Sep 04, 2006 7:18 am
by ArndW
You need to get the job to run through before you can attempt to tune it; but with 4000 and 5212 records you should be finished in less than a second - unless your ODBC connection is slowing things down. What is your speed reading from ODBC into a Peek stage or a Data Set? That is going to be your speed-limiting factor.
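The idea above (the slowest source sets the job's overall speed, so measure it first) can be sketched with a small timing harness. This is plain illustrative Python, not DataStage; the row iterator here is just a stand-in for whatever the ODBC read produces:

```python
import time

def measure_throughput(row_iter, limit=10_000):
    """Pull up to `limit` rows from a source and return rows/second.

    The slowest source feeding a job caps the whole job's throughput,
    so timing each source in isolation shows where the bottleneck is.
    """
    start = time.perf_counter()
    n = 0
    for _ in row_iter:
        n += 1
        if n >= limit:
            break
    elapsed = time.perf_counter() - start
    # Guard against a timer resolution of zero on a trivially fast source.
    return n / elapsed if elapsed > 0 else float("inf")
```

For example, `measure_throughput(cursor)` run against the ODBC source alone, then against a flat-file source, would show which one is holding the job back.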

Posted: Mon Sep 04, 2006 7:19 am
by kcbland
Why use the Merge and not the Join?

Posted: Mon Sep 04, 2006 9:02 pm
by ray.wurlod
With such a small number of rows I'd be inclined to use a server job with a Merge stage. If you must use a parallel job, be aware that the Join and Merge stages both require sorted input (added overhead). A Lookup stage would be preferable.
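The trade-off described here (Join and Merge need both inputs sorted on the key, while a Lookup builds an in-memory reference table and needs no sort) can be sketched outside DataStage. This is only an illustration of the two algorithms in plain Python, not stage code, and the record layout is made up:

```python
def lookup_join(reference, stream, key):
    """Lookup-style join: hash the (small) reference input once,
    then probe it per stream row. Neither input needs sorting."""
    table = {row[key]: row for row in reference}
    out = []
    for row in stream:
        match = table.get(row[key])
        if match is not None:
            merged = dict(row)
            merged.update(match)
            out.append(merged)
    return out

def merge_join(left, right, key):
    """Merge/Join-style: both inputs must first be sorted on the key,
    which is the added overhead for small volumes. Assumes unique keys
    for simplicity."""
    left = sorted(left, key=lambda r: r[key])
    right = sorted(right, key=lambda r: r[key])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i][key] < right[j][key]:
            i += 1
        elif left[i][key] > right[j][key]:
            j += 1
        else:
            merged = dict(left[i])
            merged.update(right[j])
            out.append(merged)
            i += 1
            j += 1
    return out
```

For 4000-odd rows the hash table fits in memory trivially, which is why a Lookup stage avoids the sort cost that Join and Merge pay.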

Posted: Mon Sep 04, 2006 9:42 pm
by rasi
First make the job run without any errors. Then post how much time it took to run the job. This should help.