Search found 53125 matches

by ray.wurlod
Sun Apr 16, 2006 3:16 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: format string
Replies: 6
Views: 1564

If Sainath has ten characters (guaranteed) from the input stream, then concatenation ("00000" : InLink.C10column) is the most efficient algorithm.
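The prepend-a-literal idea can be sketched in Python (the function name and the 15-character result width are illustrative assumptions; the original is the DataStage BASIC expression `"00000" : InLink.C10column`):

```python
# Mimics the DataStage BASIC concatenation "00000" : InLink.C10column.
# Assumes the input field is guaranteed to be exactly 10 characters,
# so a single concatenation yields a fixed 15-character result with no
# length inspection or formatting call needed.
def prepend_zeros(c10: str) -> str:
    if len(c10) != 10:
        raise ValueError("input stream must guarantee 10 characters")
    return "00000" + c10

print(prepend_zeros("0123456789"))  # -> 000000123456789
```

Because the input length is guaranteed, this avoids the cost of a general-purpose formatting function; that is the efficiency point being made.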
by ray.wurlod
Sun Apr 16, 2006 3:13 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Multi lingual excel file
Replies: 2
Views: 1071

The ? you see here is not Char(63). Rather, it's the Unicode special representation of "unmappable character" (from memory, somewhere near UniChar(0xFB)). Each worksheet is a separate table. You should, therefore, be able to process each with a separate job (clone), but with a different NLS character...
by ray.wurlod
Sun Apr 16, 2006 3:07 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: HandleNull
Replies: 12
Views: 3774

Since there is a virtual Data Set between your Transformer stage and your Sequential File stage (you can see this in the generated OSH), if there is a bug or a design problem it's within the Sequential File stage. Can you thoroughly check the metadata in the Sequential File stage and its input link,...
by ray.wurlod
Sun Apr 16, 2006 3:03 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Edit columm metadata in Sequential file
Replies: 5
Views: 1038

Null field value must be precisely the same length as non-null data if the file is fixed-width format. Null field length allows you to specify how long the representation of null is in a variable-width format column. I believe they are mutually incompatible. Click on Help with the Edit Column Metada...
by ray.wurlod
Sun Apr 16, 2006 12:36 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: insert/update keeping some nonnull old values
Replies: 8
Views: 2865

Have you tried performing a lookup with NULL values in the relevant foreign key columns to check whether such a row exists, and only then sending the row on to update the table?
by ray.wurlod
Sat Apr 15, 2006 3:43 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: right outer join
Replies: 3
Views: 1043

It is possible to prove that A RIGHT OUTER JOIN B ON condition is equivalent to B LEFT OUTER JOIN A ON condition for an equijoin, though I don't propose to provide that proof here. Stream input from table B and perform regular lookups against table A (or a hashed file populated from table A). Do not...
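The suggested pattern (stream table B, look up table A by key) can be sketched in Python, with a dictionary standing in for the hashed file; the table contents are invented sample data, not from the original post:

```python
# A RIGHT OUTER JOIN of A with B keeps every row of B, matched or not.
# Equivalently: stream B and do keyed lookups against A, emitting NULL
# (None) for the A columns when the key is absent.
a = {1: "a1", 2: "a2"}                    # lookup side (table A / hashed file)
b = [(2, "b2"), (3, "b3")]                # stream side (table B)

result = [(a.get(k), vb) for k, vb in b]  # every B row survives the join
print(result)  # -> [('a2', 'b2'), (None, 'b3')]
```

Key 2 matches and carries A's data through; key 3 has no match, so the A column comes out as None, exactly as a right outer join would produce NULL.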
by ray.wurlod
Sat Apr 15, 2006 3:38 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Error while opening thejob, DS_JOBOBJECTS
Replies: 16
Views: 5723

You cannot safely delete any of these. If they are "taking up too much room", delete jobs that are no longer required. RT_STATUSnnn is small, and contains the status records for job number nnn. RT_CONFIGnnn is small, and contains the run-time configuration information for job number nnn. RT_BPnnn ...
by ray.wurlod
Sat Apr 15, 2006 3:32 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Reusable ODBC Lookups.
Replies: 6
Views: 2112

Ken outlined the advantage of a hashed file. As to whether ODBC can avoid regenerating the result set, it really depends on the database server, and is not under control of DataStage. The technique used is "prepared SQL", where a query containing parameter markers is sent by DataStage to the databas...
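A minimal illustration of parameter markers, using Python's sqlite3 module as a stand-in for the ODBC data source (table and column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ref (k INTEGER PRIMARY KEY, v TEXT)")
conn.executemany("INSERT INTO ref VALUES (?, ?)", [(1, "one"), (2, "two")])

# The statement text never changes; only the key value bound to the
# ? marker changes per lookup, so the server can prepare it once and
# reuse the execution plan for every row.
stmt = "SELECT v FROM ref WHERE k = ?"
for key in (1, 2, 1):
    row = conn.execute(stmt, (key,)).fetchone()
    print(key, row[0] if row else None)
```

Whether the database actually caches the prepared plan (or the result set) is server-dependent, as the post says; the client's job is only to keep the statement text constant and vary the bound parameter.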
by ray.wurlod
Sat Apr 15, 2006 3:26 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: HandleNull
Replies: 12
Views: 3774

Try choosing the input column name from the expression editor so that your input column name is fully qualified with the input link name.
by ray.wurlod
Sat Apr 15, 2006 3:22 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Configuration File Error
Replies: 10
Views: 6995

An integer. A command line argument. Beyond that I have no idea. Examining the script may shed some light; asking your support provider may also prove fruitful, or may result in the response that you don't need to know. I very much doubt that it is the cause of "access denied" or "unable to fork", w...
by ray.wurlod
Fri Apr 14, 2006 5:01 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Look up
Replies: 5
Views: 1339

Lookups in server jobs are achieved by painting one or more reference input links into a Transformer stage and supplying them from passive stage types that support "get by key" functionality (for example NOT Sequential File stage). You supply an expression in the Transformer stage that is used as th...
by ray.wurlod
Fri Apr 14, 2006 4:57 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Reusable ODBC Lookups.
Replies: 6
Views: 2112

Not sure what you mean. The result from a lookup, no matter what stage type is on the other end of a reference input link, is a row containing either data or nulls. (From an ODBC stage or a UV stage it might be more than one such row.) Columns from these rows can be directed onto the output of the ...
by ray.wurlod
Fri Apr 14, 2006 4:50 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Type 30 descriptor, table is full.
Replies: 11
Views: 6009

The error message indicates that it is the T30FILE setting that needs to be fixed. You are not hitting the "sub-directories in a directory" limit - that would generate a rather different error message. The problem is not being caused by running both parallel and server jobs; it's just the total numb...
by ray.wurlod
Fri Apr 14, 2006 4:48 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Handling Line Terminator's
Replies: 13
Views: 2852

Open the Columns grid, scroll right and find where you can set the "column contains line terminators" rule. This should allow you to handle any intermediate line terminator characters in the data.
by ray.wurlod
Fri Apr 14, 2006 4:46 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Mapping in datastage
Replies: 12
Views: 3636

A routine that reads the mapping file then processes the data file seems to be the most viable solution. The routine can then create another file containing delimited data with column headings. You could then import the table definition from the new sequential file and design a job on that basis. To...
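One hypothetical shape for such a routine, sketched in Python: the mapping-file format (field name, start offset, length per line) and all file names are assumptions for illustration, since the original post does not specify them.

```python
import csv

def remap(mapping_path, data_path, out_path):
    # Read the mapping file: one (name, start, length) row per field
    # of the fixed-width data file.
    with open(mapping_path, newline="") as f:
        mapping = [(name, int(start), int(length))
                   for name, start, length in csv.reader(f)]
    # Slice each data line by the mapping and write a delimited file
    # with column headings, ready for table-definition import.
    with open(data_path) as data, open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow([name for name, _, _ in mapping])
        for line in data:
            writer.writerow([line[s:s + l].strip() for _, s, l in mapping])
```

The delimited output file with its heading row is what you would then point the table-definition import at.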