Hadi,
PX does not have hashed files; the equivalent for your purposes is the dataset in PX jobs. You can do this in one PX job, using either permanent or temporary datasets for your lookups.
Search found 15603 matches
- Sat Apr 08, 2006 6:24 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Two-step Look-up design
- Replies: 5
- Views: 1743
- Sat Apr 08, 2006 2:19 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Merge_0: Function 'construct_hash_table' failed
- Replies: 10
- Views: 3332
- Fri Apr 07, 2006 12:18 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: skip header and tail while reading from seq file
- Replies: 14
- Views: 6583
- Fri Apr 07, 2006 8:29 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Merge_0: Function 'construct_hash_table' failed
- Replies: 10
- Views: 3332
Kris, I tried this in my 7.5.1 version and had the same problem. I would submit this as a bug to IBM/Ascential through your support provider. To work around this problem, you might want to consider writing your 2 files in a fixed-width format with no quote characters or separators so that the mer...
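The fixed-width workaround described above can be sketched with awk. The field names, widths (10/8/20), and sample data below are illustrative assumptions, not the poster's actual record layout:

```shell
# Rewrite a comma-delimited file as fixed-width text with no quotes or
# delimiters, so a downstream stage can read plain positional columns.
# Three fields with widths 10, 8, and 20 are assumed for illustration.
printf 'id1,20060407,first row\nid2,20060408,second row\n' > delim.txt
awk -F',' '{ printf "%-10s%-8s%-20s\n", $1, $2, $3 }' delim.txt > fixed.txt
```

Left-justified `%-Ns` padding keeps every record the same length (38 characters here), which is what a fixed-width import definition expects.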
- Fri Apr 07, 2006 7:42 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Trailing Record in Seq File
- Replies: 3
- Views: 983
Edward, the easiest method is to keep the job that creates and writes this sequential file simple and not include the trailer write in the job itself; instead, append the trailer with either a simple UNIX-level command or another DS job that writes to that sequential file in append mode. I would pass the number o...
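The UNIX-level append suggested above can be sketched as follows. The file name, the trailer format ("T" plus a zero-padded record count), and the sample detail records are illustrative assumptions, not the poster's actual layout:

```shell
# Stand-in for the sequential file the DataStage job would have written.
printf 'rec1\nrec2\nrec3\n' > output.seq

# Count the detail records, then append one trailer record in the form
# "T" followed by the count zero-padded to nine digits.
COUNT=$(wc -l < output.seq)
printf 'T%09d\n' "$COUNT" >> output.seq
```

Running this after the job finishes (for example from an after-job subroutine or a wrapper script) avoids complicating the job design itself.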
- Fri Apr 07, 2006 7:17 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: kill job using job id
- Replies: 15
- Views: 6609
- Fri Apr 07, 2006 5:19 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: kill job using job id
- Replies: 15
- Views: 6609
- Fri Apr 07, 2006 4:17 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Error while importing data from csv and xls files.
- Replies: 14
- Views: 4202
- Fri Apr 07, 2006 4:15 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: IO system issue
- Replies: 1
- Views: 775
Are you inserting 5 rows per minute into the DataStage job log files? (This is a very small amount and could not, by itself, cause an I/O issue on the UNIX machine.) Or do you have 5 multi-instance jobs per minute writing to the same log file? If so, how many records does your log file have? What kind of ...
- Fri Apr 07, 2006 3:17 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Error while importing data from csv and xls files.
- Replies: 14
- Views: 4202
- Fri Apr 07, 2006 3:05 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: how to take the back up the log files for n days
- Replies: 9
- Views: 1329
- Fri Apr 07, 2006 3:03 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Error while importing data from csv and xls files.
- Replies: 14
- Views: 4202
- Fri Apr 07, 2006 2:40 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Error while importing data from csv and xls files.
- Replies: 14
- Views: 4202
- Fri Apr 07, 2006 1:24 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Error while importing data from csv and xls files.
- Replies: 14
- Views: 4202
Hmmm... so you've declared an Excel sheet as a system DSN in Windows. You've imported the metadata and loaded it into an ODBC stage in a DataStage job. The view-data is giving you an SQL error - have you specified your own SQL? If yes, please post it. If not, then click on user-defined SQL in the O...
- Fri Apr 07, 2006 1:17 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Data Source is empty after job finished successfully
- Replies: 2
- Views: 944