Search found 53125 matches

by ray.wurlod
Mon Jul 04, 2005 1:00 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: transformer-2 set of records in o/p link for one i/p record
Replies: 2
Views: 735

Probably easiest is to write to two separate sequential files, and cat them together afterwards.

You can also construct a line containing record1 : LF : record2 and write that to a single sequential file for which you have defined just one VarChar column.
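The first approach (two files, concatenated afterwards) can be sketched from the command line; the file names here are hypothetical stand-ins for the two job outputs:

```shell
# Hypothetical output files, as if written by two separate output links.
printf 'record1\n' > output1.txt
printf 'record2\n' > output2.txt

# Concatenate the two sequential files into one afterwards.
cat output1.txt output2.txt > combined.txt
```

This is the same effect as constructing record1 : LF : record2 in a single VarChar column, just done after the job rather than inside it.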
by ray.wurlod
Mon Jul 04, 2005 12:58 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Project Environment Variable File
Replies: 2
Views: 746

They are in a text file called DSParams in your project directory on the DataStage server machine.
by ray.wurlod
Mon Jul 04, 2005 12:55 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: how to get distinct data without using aggregate stage
Replies: 5
Views: 1699

Welcome aboard! :D If your source is a database you can use user-defined SQL and perform SELECT DISTINCT right at the source. If your source is a text file, you can pre-process it with the sort -u command (even though you are on Windows, DataStage 7.5 ships with the MKS Toolkit, so that you can execute most ...
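A minimal sketch of the sort -u pre-processing step (the input file name is hypothetical):

```shell
# Hypothetical input file containing duplicate rows.
printf 'banana\napple\nbanana\n' > source.txt

# sort -u sorts and removes duplicate lines in one pass;
# on Windows, the MKS Toolkit shipped with DataStage provides sort.
sort -u source.txt > distinct.txt
```

The de-duplicated file can then feed the job in place of the original source, with no Aggregator stage needed.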
by ray.wurlod
Mon Jul 04, 2005 12:50 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: internal limit restriction exceeded
Replies: 6
Views: 2256

It's nothing to do with DataStage; it's a configuration issue, such as the number of listeners. Ask your Oracle DBA.
You might also execute the command oerr ORA 12540 (at the operating system prompt on the DataStage server machine) in case this sheds more light.
by ray.wurlod
Sun Jul 03, 2005 10:18 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: FTP Stage in Enterprise Edition
Replies: 4
Views: 1540

dsftp.so is on the server machine. It is the library supporting the plug-in stage itself.

dsftpune.dll ("une" = "United States English") supports the GUI - the client piece - of the FTP stage. It is on the client machine.

Are you finding it under Server or under Parallel?
by ray.wurlod
Sun Jul 03, 2005 10:15 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Pros & Cons of Audit stage
Replies: 2
Views: 1764

At one site where I worked late last year the requirement was similar to yours. Indeed, the business rules depended on what data were available in addition to changing over time. We implemented a "late binding" approach, where a table-driven approach was used to select the appropriate business rule,...
by ray.wurlod
Sun Jul 03, 2005 4:32 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: how to update multiple existing records based on New record
Replies: 4
Views: 1156

"there may be more than one record in existing database"
A hashed file is based on a primary key lookup. A primary key equality constraint can only ever return one row. You may need two separate lookups against the same hashed file if you have two different fund identifiers. Otherwise, the Hashed Fi...
by ray.wurlod
Sun Jul 03, 2005 4:28 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: job aborting in merging of two files
Replies: 15
Views: 4152

Can you create some test data files in which there are no column headings? It may be that there is some data type interference occurring with the column headings.
by ray.wurlod
Sun Jul 03, 2005 4:16 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: UniData - UniVerse - U know what they actually are?
Replies: 5
Views: 2798

There is a long-term project to change the back end. In the next major release of the product the structure of the Repository database will be changed, so that all the products in the Enterprise Integration Suite can share the one repository. However, no major changes to DS Engine are envisaged in t...
by ray.wurlod
Sat Jul 02, 2005 7:50 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Multiple Transformers?
Replies: 1
Views: 759

The first point is that both methods work. So would a design with two Transformer stages, each performing three lookups. Technically one Transformer stage can have 1 stream input, N outputs and (127 - N) reference inputs. However, by cramming all of these into a single process (for larger N) you probably...
by ray.wurlod
Sat Jul 02, 2005 7:39 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: UniData - UniVerse - U know what they actually are?
Replies: 5
Views: 2798

UniVerse and UniData are both database products, now in the IBM stable where they are jointly called "U2" (not to be confused with a certain group of Irish balladeers). They were created by separate companies. VMARK Software, Inc. was a public company that created UniVerse in 1984; UniData was a pri...
by ray.wurlod
Sat Jul 02, 2005 7:28 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Replace function
Replies: 12
Views: 3171

Fewer.

:twisted:
by ray.wurlod
Sat Jul 02, 2005 7:27 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: job aborting in merging of two files
Replies: 15
Views: 4152

The error may actually arise in the Merge stage. However, being a passive stage, it does not report the error itself; its caller, the active (Transformer) stage, reports it. Please post exact details of the join you have attempted, including the definitions of key columns in both source files. Do t...
by ray.wurlod
Sat Jul 02, 2005 7:23 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Getting DS log messages from DS and into Oracle.
Replies: 8
Views: 1877

I think I was trying to say that the design UV stage ---> Transformer stage ---> Sequential File stage could perform the required task without much (any?) code.
Table name in the UV stage is RT_LOG#JobNumber#
WHERE condition in the UV stage is TIMESTAMP > '#JobStartTimeStamp#'
Both of these can be r...
by ray.wurlod
Sat Jul 02, 2005 7:15 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Runtime column propagation
Replies: 8
Views: 2343

At some point you're going to need some metadata. Since you want to persist with RCP, I would advise importing the sequential file's table definition now that it has been created, and using that in subsequent jobs. Better still would be to load the metadata back into the job that creates the sequential file...