Search found 53125 matches

by ray.wurlod
Fri Jul 13, 2007 5:36 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Date format
Replies: 2
Views: 725

Date is a data type with a specific internal format; dates will remain comparable in production.

It is only when folks start converting their dates to incompatible string forms that comparisons fail.
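The point can be sketched in Python (a hypothetical illustration of the comparison semantics, not DataStage code): date values compare chronologically, but the same dates rendered in a day-first string format compare character by character and give the wrong answer.

```python
from datetime import date

d1 = date(2007, 7, 13)
d2 = date(2007, 12, 1)
assert d1 < d2                      # date values compare chronologically

# The same dates as day-first strings compare character by character:
s1 = d1.strftime("%d-%m-%Y")        # "13-07-2007"
s2 = d2.strftime("%d-%m-%Y")        # "01-12-2007"
assert s1 > s2                      # lexicographic order, chronologically wrong

# An ISO-style format keeps string order consistent with date order:
assert d1.isoformat() < d2.isoformat()
```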
by ray.wurlod
Fri Jul 13, 2007 4:05 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Warnings for decimal fields
Replies: 18
Views: 10089

Perhaps you and they have different interpretations of "critical".

A critical issue for IBM is one that would cost them revenue.
by ray.wurlod
Fri Jul 13, 2007 4:03 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: CDC Between Sequential File & Teradata Enterprise data s
Replies: 4
Views: 3145

You could convert the Unicode data to non-Unicode in an upstream Modify stage. Add a NOWARN specification.

But what's wrong with performing a Unicode comparison?
by ray.wurlod
Fri Jul 13, 2007 4:01 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: CDC Stage standards
Replies: 3
Views: 760

You can limit the comparison columns to just those ten, and all should be OK.
by ray.wurlod
Fri Jul 13, 2007 3:57 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Problem processing encrypted field in Data Stage
Replies: 1
Views: 1490

It's probably a viewing problem only. Send the data to a text file and see what there is to be seen there. Even then there may be an issue. If the encrypted data contain one or more bytes with \x00 the STREAMS I/O module used by the text file writer may interpret it as "end of string" and infer end-...
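The \x00 truncation behaviour can be demonstrated in Python (an illustration of C-string semantics, not of the STREAMS module itself):

```python
# Encrypted data may legitimately contain NUL bytes.
payload = b"ENC\x00RYPTED"

# C-style string handling stops at the first \x00, silently losing the rest:
as_c_string = payload.split(b"\x00", 1)[0]
assert as_c_string == b"ENC"

# Inspecting the raw bytes (e.g. as hex) shows the full value survived:
assert payload.hex() == "454e4300525950544544"
```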
by ray.wurlod
Fri Jul 13, 2007 3:56 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Changing default codeset for EBCDIC to ASCII conversion
Replies: 2
Views: 1536

Are there separate "LANG" environment variables for the two databases? That might be one way.

Otherwise, without NLS enabled for DataStage (which would be overkill for your shop), I don't think there's much alternative but to code it yourself, particularly since you have those skills in-house.
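If it does come down to coding the conversion yourself, Python's standard cp037 codec gives a minimal sketch (cp037 is US/Canada EBCDIC; substitute whichever code page your mainframe actually uses):

```python
# Convert between EBCDIC (code page cp037) and a Unicode string.
text = "HELLO"
ebcdic = text.encode("cp037")
assert ebcdic == b"\xc8\xc5\xd3\xd3\xd6"   # EBCDIC bytes for HELLO

# Round-trip back to Unicode, and from there to ASCII if needed:
assert ebcdic.decode("cp037") == text
assert text.encode("ascii") == b"HELLO"
```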
by ray.wurlod
Fri Jul 13, 2007 3:53 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: diff between DB2/UDB enterprise stage and DB2/UDB API stage
Replies: 10
Views: 8366

There is (at least at version 7.5 and below) an issue with the DB2 Enterprise stage that prevents it from being used to work with data on a different platform. With DataStage on UNIX you must use the DB2 API stage to access DB2 data on AS/400 and mainframe platforms, via the DB2 Connect client softw...
by ray.wurlod
Fri Jul 13, 2007 3:50 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Teradata error
Replies: 7
Views: 1259

That must've been fun to track down!
by ray.wurlod
Fri Jul 13, 2007 3:48 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Datatype warnings from oracle!
Replies: 13
Views: 4772

Edit the metadata post-import to get it right - to coincide with what your SQL is returning. The warning is only an alert - it doesn't necessarily mean anything bad has happened - it's alerting you to the fact that you're (theoretically at least) trying to shoehorn a larger data type into a smaller,...
by ray.wurlod
Fri Jul 13, 2007 3:45 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: ISNULL is returning 0 for NULLs
Replies: 6
Views: 1450

There's nothing wrong with the IsNull() function.

You just didn't understand the implications of using Char(N) data type which, perforce, must contain N characters. It's unfortunate that this was never enforced in server jobs.
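The Char(N) behaviour can be illustrated in Python (a sketch of fixed-width field semantics, not of DataStage's IsNull() itself):

```python
# A Char(10) field must carry exactly 10 characters, so an "empty" value
# arrives as 10 pad characters rather than as NULL.
N = 10
char_field = "".ljust(N)            # what a Char(10) delivers for empty input

assert char_field is not None       # a null test sees a real value...
assert len(char_field) == N         # ...of exactly N characters
assert char_field.strip() == ""     # only trimming reveals it was empty
```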
by ray.wurlod
Fri Jul 13, 2007 3:42 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Improving the sequential write performance
Replies: 7
Views: 1778

Get an operating system other than Windows or UNIX, one that permits multiple writers per file. Neither UNIX nor Windows (nor USS) permits multiple writers per file. This is NOT a DataStage limitation. If you want all the data in one file you are necessarily constrained to a sequential operation. Yo...
by ray.wurlod
Fri Jul 13, 2007 3:39 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: orchdbadmin jargon
Replies: 2
Views: 781

You should also ask your vendor to supply the Orchestrate manuals (which IBM sometimes calls the OEM manuals for DataStage) in which all is revealed. Indeed, anyone using the Modify stage without the Orchestrate Operators guide is just looking for trouble (there are certain errors in the Parallel Jo...
by ray.wurlod
Fri Jul 13, 2007 3:35 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Building Generic DS Jobs
Replies: 5
Views: 982

Welcome aboard. Yes it is possible, but two pieces of advice. 1. Export these prototype jobs before anyone gets to overwrite (forgets to "Save As") one of them. That way you can reinstate the prototype. Also figure out sanctions against transgressors, such as overwriting their work with the imported...
by ray.wurlod
Fri Jul 13, 2007 3:31 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Use of Hashfiles in a parallel job
Replies: 2
Views: 901

There is no such thing as a hash file in DataStage. A hashed file is a popular way to store lookup reference data in server jobs. They should not be used in parallel jobs, as to do so will thwart the automatic scaling capability of these jobs. Stop thinking like a server job developer and investigat...
by ray.wurlod
Fri Jul 13, 2007 3:28 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Data set stage question
Replies: 6
Views: 1222

To clarify, this only occurs when run from a job sequence? Try interposing an Execute Command or Routine activity between each pair of Job activities, in which you have the job sequence sleep for, say, 30 seconds. This will give the file system a chance to flush the Data Sets to disk.
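The Execute Command activity's body can be as simple as a sleep (30 seconds here, per the suggestion; tune the delay to your file system):

```shell
# Pause so the file system can flush the Data Set descriptor and segment
# files to disk before the next Job activity reads them.
sleep 30
```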