Search found 53125 matches
- Tue Jun 26, 2007 4:59 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Design of routine for hashed file lookup
- Replies: 8
- Views: 1379
Performance will probably be worse, because hashed file I/O from routines cannot use the read cache. Lookups will be performed at disk speed rather than at memory speed. That said, to answer the question you asked, one routine is better than many in this case. The routine code itself will be reside...
- Tue Jun 26, 2007 4:56 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Trim in Modify stage
- Replies: 20
- Views: 22465
- Tue Jun 26, 2007 4:55 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Error Handling
- Replies: 3
- Views: 1147
You are doing three separate lookups to Hashed File stages using three separate reference links. Each of these has a NOTFOUND link variable associated with it so, in your Transformer stage, you can direct stream input rows to a separate output based on the constraint RefLink1.NOTFOUND Or RefLink2.NO...
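The routing rule described above (send a row to a rejects output if any of the three reference lookups misses) can be sketched as plain logic. The reference data below is made up for the example; only the Or'ed NOTFOUND shape comes from the post.

```python
# Hypothetical reference data standing in for three Hashed File lookups.
ref1 = {"A": 1, "B": 2}
ref2 = {"A": 10}
ref3 = {"A": 100}


def route(row_key):
    """Route a stream row, mirroring a Transformer constraint of the form
    RefLink1.NOTFOUND Or RefLink2.NOTFOUND Or ... over the reference links."""
    notfound = [row_key not in ref for ref in (ref1, ref2, ref3)]
    if any(notfound):   # any lookup failed -> the Or'ed constraint fires
        return "rejects"
    return "main"
```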
- Tue Jun 26, 2007 2:23 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: IBM Information Server Installation
- Replies: 5
- Views: 2213
- Tue Jun 26, 2007 2:22 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Source to target loading using oracle and datastage
- Replies: 24
- Views: 8105
- Tue Jun 26, 2007 2:20 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Can't delete Job
- Replies: 12
- Views: 4050
DS.CHECKER is the Cleanup command. It has not found any undeleted, redundant or orphaned job or shared container files. You can force-delete your job if it does not contain any shared containers. Can you please identify the records in DS_JOBOBJECTS using the following query? Post the results. SELECT...
- Tue Jun 26, 2007 2:09 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Trim in Modify stage
- Replies: 20
- Views: 22465
- Tue Jun 26, 2007 2:02 am
- Forum: General
- Topic: cannot see SAP R3 plugin installed in the packs palette
- Replies: 1
- Views: 852
- Mon Jun 25, 2007 1:05 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: calculated Modulus for a Type 2 hashed file
- Replies: 7
- Views: 1635
If you want to use separation 1 (512 bytes/group) - which your calculations did not use - then simply change the bytes/group in the formula. HASH.HELP is a very old utility, and does not factor in 64-bit pointers. The new calculation would yield 5,913,830 groups, again factoring in 20% overhead (whic...
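The formula being adjusted above can be sketched as follows. This is a hedged outline of the arithmetic only: the 20% headroom figure comes from the post, the group size is 512 bytes times the separation, and the row count and record size in the test are invented to show the formula's shape rather than reproduce the poster's data.

```python
import math


def minimum_modulus(row_count, avg_record_bytes, separation=1, overhead=0.20):
    """Estimate the number of groups (modulus) for a hashed file.

    separation 1 => 512 bytes/group; changing the separation changes
    the bytes/group in the formula, as the post describes.
    """
    group_bytes = 512 * separation
    # Total data volume with headroom, spread one group deep.
    data_bytes = row_count * avg_record_bytes * (1 + overhead)
    return math.ceil(data_bytes / group_bytes)
```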
- Mon Jun 25, 2007 12:24 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: DB2 API plugin not working
- Replies: 2
- Views: 763
Let's review. It used to work. Now it doesn't. Nothing's changed in DataStage. Some DB2 patches have been applied. WHY do you think there's anything you can do in DataStage to remedy your situation? Go back to the vendor and complain that the DB2 patches have broken DataStage, and get them to supply...
- Mon Jun 25, 2007 12:21 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Removing all characters but alphanumeric
- Replies: 2
- Views: 893
- Mon Jun 25, 2007 12:19 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Hash file row padding
- Replies: 5
- Views: 1103
- Mon Jun 25, 2007 12:18 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: USER DEFINED SQL FILE WITH DRS STAGE
- Replies: 28
- Views: 13393
- Mon Jun 25, 2007 12:16 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: calculated Modulus for a Type 2 hashed file
- Replies: 7
- Views: 1635
You're going wrong by not reading carefully what HASH.HELP is telling you. It has begun by examining average record size and coming up with a separation of 1 (that is, a group size of 512 bytes). Its recommendation for modulo is based on that group size. Your calculation was based on a group size f...
- Mon Jun 25, 2007 12:07 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Hash file row padding
- Replies: 5
- Views: 1103
Unless the data are absolutely fixed width in every row it's impossible to calculate accurately. Use a storage overhead of 14 bytes per record plus one byte per field (for the delimiters) and round up to the next multiple of 4 or 8 bytes. Remember that there are no data types; you can't assume 4 byt...
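The per-row estimate above can be written out as a small sketch. The 14-byte record overhead, one delimiter byte per field, and round-up-to-4-or-8 rule all come from the post; the field lengths in the test are invented, and remember this is an estimate, not an exact size.

```python
def estimated_row_bytes(field_lengths, align=8):
    """Estimate hashed-file storage per row.

    field_lengths: byte length of each field's string form (no data
    types in a hashed file, so everything is stored as characters).
    align: 8 for 64-bit files, 4 for 32-bit.
    """
    # data + one delimiter per field + 14 bytes record overhead
    raw = sum(field_lengths) + len(field_lengths) + 14
    # round up to the next multiple of the alignment
    return ((raw + align - 1) // align) * align
```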