Search found 42189 matches

by chulett
Thu Sep 22, 2011 6:53 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Local Message Handlers after Migration from 8.0.1 to 8.5
Replies: 11
Views: 4405

So this means the Jobs are not exported using the standard exporter from version 8.0.1 or at least it is not possible to import jobs with binaries from version 8.0.1 to 8.5. Not quite sure how you got there from what Ray posted. He answered your question about message handlers, not the one about bi...
by chulett
Wed Sep 21, 2011 11:09 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Read operation failure
Replies: 16
Views: 5384

Sample? :?

Forget cat, what happens if you 'cd' into that directory and then use 'vi' (or 'view' if you prefer) to take a look at the *TRANS1 file? If you get any kind of an error, post it here unedited in its entirety.
by chulett
Wed Sep 21, 2011 10:31 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Server jobs performance on unix vs windows
Replies: 13
Views: 6531

azens wrote: We did get some performance gain from switching ZFS to UFS for hashed files location.
Ah... I was wondering if this was you but I hadn't checked the older posts yet.
by chulett
Wed Sep 21, 2011 10:25 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Read operation failure
Replies: 16
Views: 5384

Those show as two zero byte (as in empty) files. Any idea how they got that way? First thing I'd suggest is you recompile the job and see what they look like then.
by chulett
Wed Sep 21, 2011 10:21 pm
Forum: General
Topic: After SQL statement in Oracle Connector stage
Replies: 6
Views: 2115

I'd be concerned with using 'After SQL' to do a log update... any failure there and you wouldn't have any way to re-execute that piece. I've always set them up as separate trailing processes, for whatever that is worth.
by chulett
Wed Sep 21, 2011 7:30 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Help needed in order to load remaining records in DB
Replies: 2
Views: 1143

If your source is static and you have a handle on the partitioning, you could use a constraint to simply filter out the known quantity of loaded records and only start 'reprocessing' after that threshold is crossed.
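The restart approach suggested above can be sketched in Python. This is an illustrative simulation only, not DataStage code: the function name `rows_to_reload` and the parameter `already_loaded` are hypothetical, and the sketch assumes a static source that is read in the same order on every run, mimicking a constraint along the lines of @INROWNUM > already_loaded.

```python
# Hedged sketch: skip the known count of already-loaded records and
# only start 'reprocessing' once that threshold is crossed.
# Assumes the source is static and rows arrive in a stable order.

def rows_to_reload(rows, already_loaded):
    """Yield only the rows past the already-loaded threshold,
    mimicking a constraint like @INROWNUM > already_loaded."""
    for inrownum, row in enumerate(rows, start=1):  # 1-based, like @INROWNUM
        if inrownum > already_loaded:
            yield row

# 100 source rows, 60 already committed to the target:
remaining = list(rows_to_reload(range(1, 101), already_loaded=60))
print(len(remaining), remaining[0])  # 40 61
```

In the real job the same effect comes from a Transformer constraint on the input row count; partitioning matters because the counter is kept per node, so a single-node run keeps the arithmetic simple.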
by chulett
Wed Sep 21, 2011 7:26 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: how to get records
Replies: 7
Views: 1703

No, I specifically said to use @INROWNUM. If you simply use @OUTROWNUM in both constraints, the first 500 records would go down both links and the rest would be discarded (for lack of a better word).
by chulett
Wed Sep 21, 2011 7:21 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Server jobs performance on unix vs windows
Replies: 13
Views: 6531

Have you tried to narrow down exactly why your performance is poor on the Solaris box? What aspect(s) of the jobs you're running have been affected - read speed? Write speed? Is it specific to your hashed files, for instance, or have access times to remote relational databases taken a hit? You really ...
by chulett
Wed Sep 21, 2011 4:03 pm
Forum: General
Topic: How can I clear job log with DSJOB or using some routine?
Replies: 3
Views: 3268

If your logs are stored in the 'Universe' repository:

CLEAR.FILE RT_LOGnnnn

(where 'nnnn' corresponds to the internal job number of this job) will essentially truncate the log. Purge criteria will more than likely need to be re-established after that.
by chulett
Wed Sep 21, 2011 10:48 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Read operation failure
Replies: 16
Views: 5384

What does that mean? Can you not find the file? It would be under the project directory in the folders named for job number 368 as the message states. So there will be several RT_XXXX368 folders and (from memory) you want to look in the RT_BP368 folder where you should find the file mentioned.
by chulett
Wed Sep 21, 2011 9:48 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Read operation failure
Replies: 16
Views: 5384

You can start the light shedding. Find the file noted in the message under the job number noted; it contains the generated code that had the issue. Look at and around the line indicated and see what bit of code is causing the issue.
by chulett
Wed Sep 21, 2011 7:01 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Current TimeStamp as parameter
Replies: 4
Views: 1994

So then it does need to be a job parameter. Pass it in (properly formatted) via a Sequence job. As noted, "Job Control" code runs after the job has started where it is too late to set parameters.
by chulett
Wed Sep 21, 2011 6:45 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: how to get records
Replies: 7
Views: 1703

You just use it. @OUTROWNUM is the number of the row going out a link and is counted per link, so you'll actually want to use @INROWNUM instead. Then your constraint or stage variable would be where @INROWNUM <= 500 or @INROWNUM > 500. Be aware this is also counted per node, so run this on one node.
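The split described above can be sketched in Python. This is a hedged simulation, not DataStage code: `split_by_inrownum` is a hypothetical name, and the single 1-based counter stands in for @INROWNUM, which counts rows read from the input rather than rows written per output link.

```python
# Illustrative sketch: route the first `threshold` input rows down one
# link and everything after down the other, driven by one shared
# input-row counter (the @INROWNUM-style behaviour described above).

def split_by_inrownum(rows, threshold=500):
    """Return (link1, link2): rows 1..threshold go to link1,
    the remainder to link2."""
    link1, link2 = [], []
    for inrownum, row in enumerate(rows, start=1):
        if inrownum <= threshold:
            link1.append(row)
        else:
            link2.append(row)
    return link1, link2

link1, link2 = split_by_inrownum(range(1, 1201), threshold=500)
print(len(link1), len(link2))  # 500 700
```

A per-link output counter would not work here: each link's count only advances when that link receives a row, which is why the post warns against using @OUTROWNUM in both constraints. And since the real counter is kept per node, the job should run on a single node for the threshold to mean what you expect.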
by chulett
Wed Sep 21, 2011 6:39 am
Forum: General
Topic: help in dsjob command
Replies: 9
Views: 3318

Of course. So, resolved?