Search found 4992 matches
- Thu Apr 13, 2006 12:49 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: ODBC Source- incorrect number of result columns.
- Replies: 13
- Views: 6785
- Thu Apr 13, 2006 12:16 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: DataStage Client Slow response
- Replies: 17
- Views: 9994
The only way to identify the issues is to eliminate as many variables as possible. When your client is particularly sluggish, look at system utilization using something like top, prstat, or glance on the repository node. Maybe jobs aren't running, but you can't discount that the machine may be bottlen...
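The post's suggestion (checking top/prstat/glance on the repository node) can be sketched non-interactively. This is a minimal, hypothetical health check, not a DataStage command: it just compares the 1-minute load average against the CPU count, which is the quick signal those tools give you.

```shell
# Hypothetical quick check on the repository node: 'top', 'prstat'
# (Solaris) or 'glance' (HP-UX) show this interactively; here we pull
# the 1-minute load average and CPU count non-interactively.
load=$(uptime | awk -F'load average[s]*: ' '{print $2}' | cut -d, -f1)
cpus=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
echo "1-min load: $load on $cpus CPU(s)"
# A sustained load well above the CPU count suggests the machine,
# not the DataStage client, is the bottleneck.
```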
- Thu Apr 13, 2006 11:42 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Parsing file names
- Replies: 5
- Views: 1042
- Thu Apr 13, 2006 11:29 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Multiple update to the table using DataStage
- Replies: 8
- Views: 6208
This works well in DS because, if you compute each column separately as a separate job, staging to a hashed file for each column, then you just need to stream your driving keys and reference-lookup each column. Since each column lookup would be an equi-join, you need to zero/default fill the target column ...
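The zero/default-fill equi-join described above can be illustrated outside DataStage. This is a hedged sketch using coreutils `join` in place of a hashed-file reference lookup; the file names (`keys.txt`, `col_a.txt`) are invented for the example.

```shell
# Illustration of the zero-fill equi-join: every driving key is kept,
# and a missing column value is default-filled with 0 (join -a 1 -e 0).
cd "$(mktemp -d)"
printf 'k1\nk2\nk3\n' | sort > keys.txt        # driving keys
printf 'k1 100\nk3 300\n' | sort > col_a.txt   # one staged column
join -a 1 -e 0 -o 1.1,2.2 keys.txt col_a.txt > joined.txt
cat joined.txt
# k1 100
# k2 0
# k3 300
```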
- Thu Apr 13, 2006 11:19 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: how to pass UNIX script(Procedure) output to job parameter?
- Replies: 12
- Views: 6828
- Thu Apr 13, 2006 11:17 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Multiple update to the table using DataStage
- Replies: 8
- Views: 6208
- Thu Apr 13, 2006 11:14 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Error while opening the job, DS_JOBOBJECTS
- Replies: 16
- Views: 5723
Actually, failure to open DS_JOBOBJECTS means that the physical repository file containing all job information is missing: the repository is corrupted. The Routines and Table Definitions are stored in a different file, so you may still be able to export them for safety. Your only choice is to either do ...
- Thu Apr 13, 2006 9:02 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Creating Header/Detail/Summary records in one output file
- Replies: 12
- Views: 4466
- Thu Apr 13, 2006 8:28 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: upgrade DS server 7.5 to DS server 7.5.1.a
- Replies: 8
- Views: 2535
- Thu Apr 13, 2006 8:05 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Creating Header/Detail/Summary records in one output file
- Replies: 12
- Views: 4466
- Thu Apr 13, 2006 8:03 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Moving Resized Hashed files
- Replies: 8
- Views: 2217
Still set the minimum modulus. You already know the file is going to be big, so why start from the smallest possible size and grow from there? Find your high-watermark size and go a little over that. The key pattern means that if your file has a primary key value of 1 to some big number then you can get a p...
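The sizing advice above amounts to simple arithmetic. This is a back-of-envelope sketch, with assumed figures (2048-byte groups, ~80% target fill, and the row count and record size are invented), not a formula from the DataStage documentation.

```shell
# Hypothetical minimum-modulus estimate for a hashed file.
rows=1000000          # expected row count at the high-water mark (assumed)
avg_bytes=120         # average record size including key (assumed)
group_bytes=2048      # group size, e.g. SEPARATION 4 = 4 x 512 bytes
# Size for ~80% group fill so the file rarely needs to split groups.
modulus=$(( (rows * avg_bytes) / (group_bytes * 80 / 100) ))
echo "suggested minimum modulus: $modulus"
```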
- Thu Apr 13, 2006 7:59 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Function
- Replies: 5
- Views: 1082
- Thu Apr 13, 2006 7:57 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Creating Header/Detail/Summary records in one output file
- Replies: 12
- Views: 4466
Separate the creation of header/control/footer records from data transformation processing. 1. Transform and create your data in a single large file. 2. In a Sequencer with a Command stage, or a Batch job with an API call, use the unix split command to generate n files of 994 rows. 3. Use a standard process to read ...
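Step 2 above relies on the standard unix `split` command. A minimal sketch (the file names are stand-ins; 994 is the row limit from the post):

```shell
# Carve the transformed data file into chunks of at most 994 rows each.
cd "$(mktemp -d)"
seq 1 3000 > alldata.txt          # stand-in for the transformed file
split -l 994 alldata.txt chunk_   # produces chunk_aa, chunk_ab, ...
wc -l chunk_*
```

Each resulting chunk can then be wrapped with its header/control/footer records by the standard downstream process the post refers to.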
- Thu Apr 13, 2006 7:44 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Performance Improvement
- Replies: 14
- Views: 4013
Spool to a file in one job, transform in another (splitting inserts from updates into separate files), then run one job to apply the inserts and another for the updates. Your fundamental problem is that you're querying the same table you are loading. You should NOT do this. You're using rollback to hold the entire ...
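The insert/update split in the middle job can be sketched with awk standing in for the DataStage transform. This is a hypothetical illustration: `target_keys.txt` (keys exported from the target table) and `incoming.txt` are invented names.

```shell
# Rows whose key already exists in the target become updates;
# everything else becomes an insert.
cd "$(mktemp -d)"
printf 'k1\nk3\n' > target_keys.txt                  # keys in the target
printf 'k1|new-a\nk2|new-b\nk3|new-c\n' > incoming.txt
awk -F'|' 'NR==FNR {seen[$1]=1; next}                # load target keys
           ($1 in seen) {print > "updates.txt"; next}
           {print > "inserts.txt"}' target_keys.txt incoming.txt
wc -l inserts.txt updates.txt
```

Applying the two files in separate jobs keeps each transaction small, instead of using rollback to hold the whole load.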
- Thu Apr 13, 2006 7:33 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Hash File Qtd Limits
- Replies: 10
- Views: 3770