Search found 3045 matches
- Mon Oct 16, 2006 5:57 am
- Forum: Data Integration
- Topic: IOD 2006 questions
- Replies: 15
- Views: 29361
IOD 2006 questions
How many people are at the conference this year? How is the traffic at the dsxchange booth? Found anyone who wants to buy dsxchange for 2.2 billion?
- Mon Oct 16, 2006 12:13 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Performance of DB2/EE stage and DB2/UDB API stage
- Replies: 4
- Views: 1222
It depends on what your EE job is doing with the data after it reads it. For example, if you are writing it to a sequential file you are potentially partitioning the data, repartitioning it and sending it through a file export process. A good way to judge is to send the output of both to a transformer w...
- Fri Oct 13, 2006 6:14 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Can ChangeApply stage be useful?
- Replies: 2
- Views: 804
I am currently speccing an environment where the transformations run 24x7 on a DataStage grid but the database load should only occur overnight so it doesn't impact the users. One method for doing this quickly is to take a copy of the target table into a dataset overnight and use it for processing durin...
- Thu Oct 12, 2006 8:10 pm
- Forum: Enhancement Wish List
- Topic: How about a Change Data Detection stage for all DS versions?
- Replies: 5
- Views: 2595
Wouldn't it be great? There is also the new parallel slowly changing dimension stage in the Hawk release that lets you maintain it from a single stage. There is almost no chance of it being introduced to server jobs as this is not the future direction of the product. Server jobs will continue to be ...
- Thu Oct 12, 2006 5:56 pm
- Forum: General
- Topic: Getting Good Information
- Replies: 2
- Views: 2586
- Thu Oct 12, 2006 5:53 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: migration from datastage 5.2 to datastage 7.5
- Replies: 4
- Views: 1713
The migration from 5.2 server jobs to 7.x server jobs is quite smooth and most of your jobs should just keep working. If you are migrating to a completely new server you will have to verify that the file directory locations exist and that database connectivity is in place. Once migrated you can choose to sta...
- Thu Oct 12, 2006 5:48 pm
- Forum: General
- Topic: Release Note for 8.0
- Replies: 10
- Views: 5788
The YouTube advertisement ends with October 2006 as the availability date. This might mean it is launched (at IOD) in October but is not generally available until later, or it may mean the Information Server on its own (without all the additional products) is available with the Business Glossary and...
- Wed Oct 11, 2006 11:17 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Does Enterprise Edition run DS basic script
- Replies: 2
- Views: 772
There are a couple of places to run your job log scripts: you can run them within a parallel or server job in the after-job routine. These routines remain BASIC code even in parallel jobs. You can also run the script from a Sequence job. Most of the text entry boxes in sequence job stages accept B...
- Wed Oct 11, 2006 11:11 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: User Defined Query
- Replies: 3
- Views: 807
Agree that you should avoid user-defined SQL where possible; it just introduces risks without benefits. I do however use the SQL builder, though it takes a bit of getting used to and is a bit limited. I like it because when I do use user-defined SQL it synchronises my SQL column select list with my col...
- Wed Oct 11, 2006 4:30 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Different Options, Best Performance
- Replies: 2
- Views: 792
- Wed Oct 11, 2006 2:49 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: How to import cobol copybooks metadata
- Replies: 10
- Views: 6460
Manual changes to imported copybook definitions are quite difficult because with the fixed width definitions you may find most of your column start and end points need to be changed. Always best to reimport them. Consider giving imported copybooks version numbers against the name. That way you can u...
- Tue Oct 10, 2006 8:13 pm
- Forum: Data Integration
- Topic: Oracle buys Sunopsis
- Replies: 1
- Views: 8084
Oracle buys Sunopsis
Sunopsis seems to have a bit of a buzz about it and is often touted as the best of the ELT products. Oracle has acquired Sunopsis for an undisclosed sum. They plan on putting it into the Fusion middleware and making it the heterogeneous part of Oracle Warehouse Builder. Has anyone used Sunopsis? Will...
- Tue Oct 10, 2006 7:47 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Aggregate Performance vs. DB2 vs. other options
- Replies: 1
- Views: 1091
If you are talking data warehouses or any type of reporting database I think of DataStage as primarily being responsible for delivering transformed data at its lowest level of granularity. What you do with it next can be any number of tools. If you are on Netezza the recent buzz suggests you don't n...
- Tue Oct 10, 2006 12:40 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: how can we suppress the warnings in sequence
- Replies: 10
- Views: 2447
You could suppress it by waiting longer. How about setting the wait to 1758 years and thirty-three minutes? Or better yet, take it out: why is it waiting for a file that doesn't arrive? You could write your own waitforfile command using a BASIC routine and a sleep loop. Your sequence job is presentin...
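The hand-rolled wait loop suggested above could equally live in a calling script outside DataStage. A minimal sketch in shell (the function name and timeout handling are my own illustration, not anything shipped with the product):

```shell
#!/bin/sh
# Poll for a file once a second and give up after a timeout, instead of
# letting a sequence job block on a file that may never arrive.
wait_for_file() {
    file="$1"
    timeout_secs="$2"
    waited=0
    while [ ! -f "$file" ]; do
        if [ "$waited" -ge "$timeout_secs" ]; then
            return 1                 # timed out: caller can log and abort
        fi
        sleep 1
        waited=$((waited + 1))
    done
    return 0                         # file arrived
}
```

Called from the script that launches the job, a non-zero return lets you abort the run with a clear message rather than leaving a sequence job waiting indefinitely.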
- Mon Oct 09, 2006 6:22 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Pick the environment variable file dynamically
- Replies: 2
- Views: 1109
The dsenv file is not the best place for dynamic variables like this. I would consider having different batch user ids for each of the two runs and giving each id its values in a .profile file. Also consider pushing the two job parameters into all jobs and passing the values in from the calling script via...
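One way to keep the per-run values out of dsenv is a small parameter file per run type, sourced by the calling script before the job is launched. A hedged sketch (the file layout and parameter names are examples only, and the commented dsjob line assumes the job takes parameters named SrcDir and TgtDb):

```shell
#!/bin/sh
# Pick a parameter file based on the run type and source it, then pass the
# values to the job explicitly rather than relying on the environment.
pick_params() {
    run_type="$1"                     # e.g. "daily" or "monthly"
    param_file="./params.$run_type"
    [ -f "$param_file" ] || return 1  # fail fast if the file is missing
    . "$param_file"                   # defines SRC_DIR, TGT_DB, ...
}

# pick_params daily &&
#     dsjob -run -param SrcDir="$SRC_DIR" -param TgtDb="$TGT_DB" MyProject MyJob
```

Because each value arrives as an explicit job parameter, both runs can share one set of jobs while the script alone decides which values apply.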