Help DataStage in a public ETL benchmark test
Moderators: chulett, rschirm, roy
-
- Participant
- Posts: 3593
- Joined: Thu Jan 23, 2003 5:25 pm
- Location: Australia, Melbourne
- Contact:
A French firm called ManApps is running a public benchmark comparing DataStage to Talend, Pentaho and Informatica. They don't know much about DataStage, and their job designs have some flaws. Have a look at this job and comment on how you would improve it, and I will email the feedback back to ManApps.
(For more background you can read my December blog post ETL Benchmark Favours DataStage and Talend and my March update ManApps ETL Benchmark Redux: Informatica Surges to Lead.)
Scenario:
Reading X rows from a delimited input file, performing a lookup against another delimited file to retrieve 4 fields using the id_client column, and writing the join result to a delimited output file.
Parallel Job Design:
File_input_delimited has three fields, file_lookup_delimited has nine fields. There are four test cases based on increasing the row volume, in all four the lookup has 100K rows and the input has 100K, 1M, 5M and 20M rows.
How can this job be improved?
Certus Solutions
Blog: Tooling Around in the InfoSphere
Twitter: @vmcburney
LinkedIn: Vincent McBurney
Definitely, as you have posted in your blog, change the Join to a Lookup stage. I would also go with a 1-node configuration. I have had a look at the other parallel job designs and have no idea why they chose a Join stage, which requires sorted input, given the small number of rows in the reference file.
Another possible option is to separate the file I/O from the parsing of the columns using a sequential file and a column import stage.
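To illustrate why a lookup beats a join here: an in-memory lookup hashes the small reference file once and probes it per row, while a sort-merge join must first sort both inputs. This is a toy Python sketch of the two strategies (assuming unique keys on both sides), not DataStage internals:

```python
# Hash lookup: build a dict from the small reference file once,
# then probe it per input row -- no sorting of either input needed.
def hash_lookup(input_rows, ref_rows, key):
    ref = {r[key]: r for r in ref_rows}            # O(m) build
    for row in input_rows:                          # O(n) probe
        match = ref.get(row[key])
        if match is not None:
            yield {**row, **match}

# Sort-merge join: both inputs must first be sorted on the key --
# the extra cost the benchmark job pays for nothing.
def sort_merge_join(input_rows, ref_rows, key):
    a = sorted(input_rows, key=lambda r: r[key])    # O(n log n)
    b = sorted(ref_rows, key=lambda r: r[key])      # O(m log m)
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i][key] < b[j][key]:
            i += 1
        elif a[i][key] > b[j][key]:
            j += 1
        else:
            yield {**a[i], **b[j]}
            i += 1
            j += 1
```

With a 100K-row reference file the hash build is trivial, while the join forces a sort of the 20M-row input as well.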
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Lose the repartitioning by moving the partitioning to the inputs of the Modify stage, especially in a multiple-machine environment.
For larger volumes, reading the second file into a Lookup File Set may help. So might multiple readers per node.
I take it that the delimited file format is a given? Fixed-width format, if available, can be parsed faster.
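The fixed-width point can be sketched quickly: delimited parsing has to scan every byte for the delimiter, while fixed-width parsing slices at known offsets. A hypothetical Python comparison (the layout widths are made up for illustration):

```python
# Delimited parsing: must scan the line for every delimiter.
line_delim = "12345|Alice Smith|Melbourne"
fields_delim = line_delim.split("|")

# Fixed-width parsing: slice at known offsets, no byte scanning.
# Assumed layout: id = 5 chars, name = 11 chars, city = 9 chars.
line_fixed = "12345Alice SmithMelbourne"
offsets = [(0, 5), (5, 16), (16, 25)]
fields_fixed = [line_fixed[a:b] for a, b in offsets]
```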
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
-
- Participant
- Posts: 3
- Joined: Mon Mar 09, 2009 8:15 am
They should change the Join to a Lookup.
Use configuration files that match the volume, scaling from 1 node to 2, 3, 4 and so on. Using a larger config file when processing small volumes only adds startup time.
Remove the unnecessary partitioning and sorting; a Lookup stage does not require them.
Add multiple readers per node to the Sequential File stage to improve read performance, though I doubt this will be the bottleneck.
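For readers unfamiliar with DataStage configuration files, scaling nodes means pointing $APT_CONFIG_FILE at a file like the one below. This is a minimal 2-node sketch; the hostname and disk paths are placeholders, not values from the benchmark:

```
{
  node "node1" {
    fastname "etl-server"
    pools ""
    resource disk "/data/ds/disk1" {pools ""}
    resource scratchdisk "/data/ds/scratch1" {pools ""}
  }
  node "node2" {
    fastname "etl-server"
    pools ""
    resource disk "/data/ds/disk2" {pools ""}
    resource scratchdisk "/data/ds/scratch2" {pools ""}
  }
}
```

Keeping a 1-node, 2-node and 4-node file on hand lets each test case run with a node count matched to its volume.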
Stewart
-
- Premium Member
- Posts: 1735
- Joined: Thu Mar 01, 2007 5:44 am
- Location: Troy, MI
Even keeping the Join stage (not a good option), modifying the design a bit could outperform Informatica, since the difference is marginal. As Ray mentioned, moving the partitioning to the first link will help, because at the moment the job partitions and then repartitions.
I wonder if they are using a stable sort or doing anything inside the Modify stage.
Get rid of the extra logging information and switch off the job monitor.
Still, the best option would be to use a Lookup stage, because it will happily handle a 7 MB, 34 MB or 68 MB (Test 12) lookup file in whatever partition and with whatever number of nodes.
They mention that they changed the Informatica job to use 1 partition for smaller record counts, so why not use a single node for smaller record counts here too? I think startup time accounts for around half of the time taken to process 100,000 records.
I would love to see the timings of all the tools when running on 10 times more data than they are using now, but I can't see their machine being capable of handling that.
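The startup-time claim is easy to model. In this toy calculation the startup overhead and per-row throughput are assumed numbers for illustration, not the benchmark's measurements:

```python
# Toy model: elapsed = fixed startup cost + rows / throughput.
startup_s = 5.0        # parallel-job startup overhead in seconds (assumed)
throughput = 20_000    # rows processed per second (assumed)

def elapsed(rows):
    return startup_s + rows / throughput

share_100k = startup_s / elapsed(100_000)      # startup share at 100K rows
share_20m = startup_s / elapsed(20_000_000)    # startup share at 20M rows
```

Under these assumptions startup is half the elapsed time at 100K rows but under 1% at 20M rows, which is why a small-volume test mostly measures startup, not throughput.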
Priyadarshi Kunal
Genius may have its limitations, but stupidity is not thus handicapped.
-
- Participant
- Posts: 3593
- Joined: Thu Jan 23, 2003 5:25 pm
- Location: Australia, Melbourne
- Contact:
Good question - the benchmark has a set of results for server edition and parallel edition. The server job version has a hash file lookup. I will post a couple more job designs later including a server edition one.
Certus Solutions
Blog: Tooling Around in the InfoSphere
Twitter: @vmcburney
LinkedIn: Vincent McBurney
Interesting Post.
What does modify_1 do? Does it drop certain columns or rename them? If it only drops columns, the Sequential File stage property "Drop on Input" can be used instead of the modify_1 stage.
Setting the environment variable APT_TSORT_STRESS_BLOCKSIZE to an appropriate value will speed up the join for large volumes.
Another option is to use the Lookup stage, which has already been mentioned.
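The "drop at the source" idea generalizes: discarding unwanted columns while reading avoids carrying them through the pipeline at all. A hypothetical Python analogue using the csv module (the column names are made up):

```python
import csv
import io

raw = "id_client|name|city|extra1|extra2\n1|Alice|Melbourne|x|y\n"

# Drop columns downstream (analogue of a separate Modify stage):
rows_all = list(csv.DictReader(io.StringIO(raw), delimiter="|"))
rows_dropped = [{k: r[k] for k in ("id_client", "name")} for r in rows_all]

# Drop columns at read time (analogue of "Drop on Input"):
reader = csv.reader(io.StringIO(raw), delimiter="|")
header = next(reader)
keep = [header.index(c) for c in ("id_client", "name")]
rows_at_read = [{header[i]: row[i] for i in keep} for row in reader]
```

Both produce the same rows; the second never materializes the unwanted columns as record fields.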