Help DataStage in a public ETL benchmark test

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

vmcburney
Participant
Posts: 3593
Joined: Thu Jan 23, 2003 5:25 pm
Location: Australia, Melbourne
Contact:

Help DataStage in a public ETL benchmark test

Post by vmcburney »

A French firm called ManApps is running a public benchmark comparing DataStage to Talend, Pentaho and Informatica. They don't know much about DataStage, and their jobs have some design flaws. Have a look at this job and comment on how you would improve it, and I will email the feedback back to ManApps.

(For more background you can read my December blog post ETL Benchmark Favours DataStage and Talend and my March update ManApps ETL Benchmark Redux: Informatica Surges to Lead.)

Scenario:
Read X rows from a delimited input file, look up four fields from another delimited file using the id_client column, and write the join result to a delimited output file.

Parallel Job Design:
[Image: screenshot of the parallel job design]

File_input_delimited has three fields; file_lookup_delimited has nine fields. There are four test cases of increasing row volume: in all four the lookup file has 100K rows, while the input file has 100K, 1M, 5M and 20M rows respectively.

How can this job be improved?
sima79
Premium Member
Posts: 38
Joined: Mon Jul 16, 2007 8:12 am
Location: Melbourne, Australia

Post by sima79 »

Definitely change the Join to a Lookup stage, as you have posted in your blog. I would also go with a 1-node configuration. I have had a look at the other parallel job designs and have no idea why they have chosen to use a Join stage, which requires sorted input, given the number of rows in the reference file.
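
For reference, a minimal single-node configuration file (selected through APT_CONFIG_FILE) might look something like the sketch below; the host name and resource paths are placeholders rather than their actual setup:

{
    node "node1"
    {
        fastname "etl_host"
        pools ""
        resource disk "/data/datasets" {pools ""}
        resource scratchdisk "/data/scratch" {pools ""}
    }
}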

Another possible option is to separate the file I/O from the column parsing by using a Sequential File stage followed by a Column Import stage.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Lose the repartitioning by moving the partitioning to the inputs of the Modify stage, especially in a multiple-machine environment.

For larger volumes, reading the second file into a Lookup File Set may help. So might multiple readers per node.

I take it that the delimited file format is a given? Fixed-width format, if available, can be parsed faster.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
stewarthanna
Participant
Posts: 3
Joined: Mon Mar 09, 2009 8:15 am

Post by stewarthanna »

They should change the Join to a Lookup.

Use configuration files that match the volume, scaling from 1 node to 2, 3, 4... Using a larger configuration file when processing small volumes only adds startup time.

Remove the unnecessary partitioning and sorting; a Lookup does not require them.

Add -readers to the Sequential File stage to improve read performance, though I doubt reading the data will be the bottleneck.
Stewart
rwierdsm
Premium Member
Posts: 209
Joined: Fri Jan 09, 2004 1:14 pm
Location: Toronto, Canada
Contact:

Post by rwierdsm »

I thought I read somewhere that reading a sequential file as a single wide column and then doing the column split in a second stage makes the reads faster. It allows the column determination to happen in parallel instead of sequentially.
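
As a rough sketch of that approach (the column names and lengths below are made up, since the benchmark's exact record layout isn't posted here): the Sequential File stage reads each record as a single long varchar, and a Column Import stage then splits it using an import schema along these lines:

record {delim=',', final_delim=end, quote=none} (
    id_client: int32;
    field_2: string[max=50];
    field_3: string[max=50];
)

The file read stays sequential but cheap, while the parsing work is spread across the partitions.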

Rob
Rob Wierdsma
Toronto, Canada
bartonbishop.com
priyadarshikunal
Premium Member
Posts: 1735
Joined: Thu Mar 01, 2007 5:44 am
Location: Troy, MI

Post by priyadarshikunal »

Even when using the Join stage (not a good option), modifying the job a little should outperform Informatica, as the difference is marginal. As Ray mentioned, moving the partitioning to the first link will help, because at the moment the data is partitioned and then repartitioned.

I wonder whether they are using a stable sort or doing anything inside the Modify stage.

Get rid of the extra logging information and switch off the job monitor.
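
For example (assuming their release exposes the usual APT_* variables; in practice these would be set as project or job environment variables rather than in a shell):

export APT_NO_JOBMON=1       # disable the job monitor entirely
# or, less drastically, make it report less often:
export APT_MONITOR_TIME=60   # seconds between monitor updates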


The best option, though, would be to use a Lookup: it will happily handle a 7 MB, 34 MB or 68 MB (Test 12) lookup file, whatever the partitioning and however many nodes are used.

They have mentioned that they changed the Informatica job to use one partition for the smaller record counts, so why not use a single-node configuration for the smaller counts here as well? I think startup time accounts for around half of the time taken to process 100,000 records.

I would love to see the timings of all the tools on ten times the data they are using now, but I doubt their machine could handle that.
Priyadarshi Kunal

Genius may have its limitations, but stupidity is not thus handicapped. :wink:
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Are you permitted to use a server job for small volumes?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
vmcburney
Participant
Posts: 3593
Joined: Thu Jan 23, 2003 5:25 pm
Location: Australia, Melbourne
Contact:

Post by vmcburney »

Good question - the benchmark has a set of results for server edition and one for parallel edition. The server job version uses a hashed file lookup. I will post a couple more job designs later, including a server edition one.
balajisr
Charter Member
Charter Member
Posts: 785
Joined: Thu Jul 28, 2005 8:58 am

Post by balajisr »

Interesting post.

What does modify_1 do? Does it drop certain columns or rename them? If it only drops columns, the Sequential File property "Drop on Input" could be used instead of the Modify stage modify_1.

Setting the environment variable APT_TSORT_STRESS_BLOCKSIZE to an appropriate value can speed up the join for huge volumes.
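
For example (the value here is purely illustrative; as I understand it the setting is the per-partition sort memory in MB, and it would normally be added as a job or project environment variable):

export APT_TSORT_STRESS_BLOCKSIZE=256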

Another option is to use the Lookup stage, as already mentioned.
bobyon
Premium Member
Posts: 200
Joined: Tue Mar 02, 2004 10:25 am
Location: Salisbury, NC

Post by bobyon »

I see this post is a little old and maybe I'm too late, but how about putting in explicit Sort stages so you can specify the amount of memory they use?
Bob