Probably the easiest approach is to write to two separate sequential files, and cat them together afterwards.
You can also construct a line containing record1 : LF : record2 (that is, the two records concatenated with a line feed between them) and write that to a single sequential file for which you have defined just one VarChar column.
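The two-file approach can be sketched in shell; the file names here are hypothetical stand-ins for whatever the two output links actually write:

```shell
# Each output link writes its own sequential file (hypothetical names),
# then the files are concatenated afterwards, e.g. in an after-job routine.
printf 'record1\n' > part1.txt
printf 'record2\n' > part2.txt
cat part1.txt part2.txt > combined.txt
```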
Search found 53125 matches
- Mon Jul 04, 2005 1:00 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: transformer-2 set of records in o/p link for one i/p record
- Replies: 2
- Views: 735
- Mon Jul 04, 2005 12:58 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Project Environment Variable File
- Replies: 2
- Views: 746
- Mon Jul 04, 2005 12:55 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: how to get distinct data without using aggregate stage
- Replies: 5
- Views: 1699
Welcome aboard! :D If your source is a database you can use user-defined SQL and perform SELECT DISTINCT right at the source. If your source is a text file, you can pre-process it with the sort -u command (even though you are on Windows, DataStage 7.5 ships with the MKS Toolkit, so you can execute most ...
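The sort -u pre-processing step looks like this; the input and output file names are placeholders:

```shell
# sort -u sorts the file and keeps only unique lines, giving DISTINCT
# semantics before the data ever reaches the job (file names are examples).
printf 'b\na\nb\nc\na\n' > source.txt
sort -u source.txt > distinct.txt
cat distinct.txt   # prints a, b, c, each on its own line
```

On Windows the same command runs under the MKS Toolkit shell that ships with DataStage 7.5, so it can be called from a before-job subroutine.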
- Mon Jul 04, 2005 12:50 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: internal limit restriction exceeded
- Replies: 6
- Views: 2256
- Sun Jul 03, 2005 10:18 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: FTP Stage in Enterprise Edition
- Replies: 4
- Views: 1540
- Sun Jul 03, 2005 10:15 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Pros & Cons of Audit stage
- Replies: 2
- Views: 1764
At one site where I worked late last year the requirement was similar to yours. Indeed, the business rules depended on what data were available, in addition to changing over time. We implemented a "late binding" design, in which a table-driven approach was used to select the appropriate business rule,...
- Sun Jul 03, 2005 4:32 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: how to update multiple existing records based on New record
- Replies: 4
- Views: 1156
"there may be more than one record in existing database"
A hashed file is based on a primary key lookup. A primary key equality constraint can only ever return one row. You may need two separate lookups against the same hashed file, if you have two different fund identifiers. Otherwise, the Hashed Fi...
- Sun Jul 03, 2005 4:28 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: job aborting in merging of two files
- Replies: 15
- Views: 4152
- Sun Jul 03, 2005 4:16 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: UniData - UniVerse - U know what they actually are?
- Replies: 5
- Views: 2798
There is a long-term project to change the back end. In the next major release of the product the structure of the Repository database will be changed, so that all the products in the Enterprise Integration Suite can share the one repository. However, no major changes to DS Engine are envisaged in t...
- Sat Jul 02, 2005 7:50 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Multiple Transformers?
- Replies: 1
- Views: 759
First point is that both methods work. So would a design with two Transformer stages each performing three lookups. Technically one Transformer stage can have 1 stream input, N outputs and (127 - N) reference inputs. However, by cramming all of these into a single process (for larger N) you probably...
- Sat Jul 02, 2005 7:39 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: UniData - UniVerse - U know what they actually are?
- Replies: 5
- Views: 2798
UniVerse and UniData are both database products, now in the IBM stable where they are jointly called "U2" (not to be confused with a certain group of Irish balladeers). They were created by separate companies. VMARK Software, Inc. was a public company that created UniVerse in 1984; UniData was a pri...
- Sat Jul 02, 2005 7:28 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Replace function
- Replies: 12
- Views: 3171
- Sat Jul 02, 2005 7:27 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: job aborting in merging of two files
- Replies: 15
- Views: 4152
The error may actually arise in the Merge stage. However, because it is a passive stage, it does not report the error itself; its caller, the active (Transformer) stage, reports it. Please post the exact details of the join you have attempted, including the definitions of the key columns in both source files. Do t...
- Sat Jul 02, 2005 7:23 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Getting DS log messages from DS and into Oracle.
- Replies: 8
- Views: 1877
I think I was trying to say that the design UV stage ---> Transformer stage ---> Sequential File stage could perform the required task without much (any?) code.
Table name in the UV stage is RT_LOG#JobNumber#
WHERE condition in the UV stage is TIMESTAMP > '#JobStartTimeStamp#'
Both of these can be r...
- Sat Jul 02, 2005 7:15 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Runtime column propagation
- Replies: 8
- Views: 2343
At some point you're going to need some metadata. Since you want to persist with RCP, I would advise importing the sequential file's table definition now that it has been created, and use that in subsequent jobs. Better would be to load the metadata back into the job that creates the sequential file...