Search found 53125 matches

by ray.wurlod
Wed Feb 11, 2015 8:12 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Oracle Connector Delete then Insert (write mode)
Replies: 3
Views: 2727

How do you know from the log that they are performed simultaneously? The log's time granularity may be too coarse to separate operations that occur consecutively.
by ray.wurlod
Wed Feb 11, 2015 8:10 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Output needs to move in different jobs
Replies: 10
Views: 2264

Craig has guessed what your data look like. You have to create the rules (in English, if you like) about what in the data defines the record type, then how to extract it if it's not in a field by itself - for example it might be the first character of each line. Then you can convert your specificati...
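To illustrate the idea outside DataStage, here is a minimal Python sketch of classifying records by their first character. The record-type codes ('H', 'D', 'T') and the sample lines are purely hypothetical, standing in for whatever rule the actual data uses.

```python
# Hypothetical example: classify flat-file records by their first character.
# The codes 'H' = header, 'D' = detail, 'T' = trailer are illustrative only;
# the real rule depends on how the file identifies its record types.
def classify(line: str) -> str:
    kinds = {"H": "header", "D": "detail", "T": "trailer"}
    return kinds.get(line[:1], "unknown")

records = ["H20150211", "D001,Smith", "D002,Jones", "T000002"]
print([classify(r) for r in records])
# ['header', 'detail', 'detail', 'trailer']
```

Once a rule like this is written down in English, it translates directly into a stage-variable or constraint expression in the job.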
by ray.wurlod
Wed Feb 11, 2015 7:31 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Derive Column names from Rows
Replies: 24
Views: 9617

I don't believe what you are trying to do is legally possible.
by ray.wurlod
Wed Feb 11, 2015 3:30 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Output needs to move in different jobs
Replies: 10
Views: 2264

The filter condition exactly reflects the mechanism used in the file to identify header, detail and trailer records. Ordinarily you would wish to separate these into separate outputs in your Transformer stage, therefore the "filter" conditions become the output link constraint expressions ...
by ray.wurlod
Wed Feb 11, 2015 3:28 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Information on stages
Replies: 3
Views: 1836

The best way is to use Metadata Workbench. It can automatically recognise the situation you describe and report on it.
by ray.wurlod
Wed Feb 11, 2015 3:25 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Pivot_XML:Error when checking operator: Caught parsing error
Replies: 6
Views: 5387

Any 0 is FALSE, any non-zero value is TRUE. Try double-clicking on the default value cell to edit it. Even if it's not editable at design time, you can still provide a value at run time (though it may have to be TRUE, rather than 1).
by ray.wurlod
Tue Feb 10, 2015 10:01 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Pivot_XML:Error when checking operator: Caught parsing error
Replies: 6
Views: 5387

It's not enough to add $APT_DUMP_SCORE to the job. You have to recompile the job, and you have to set the value of $APT_DUMP_SCORE to 1 in order to have the score logged.
by ray.wurlod
Tue Feb 10, 2015 8:17 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Pivot_XML:Error when checking operator: Caught parsing error
Replies: 6
Views: 5387

The score is a log entry that starts something like "this step has n operators".
by ray.wurlod
Tue Feb 10, 2015 8:16 pm
Forum: IBM® InfoSphere DataStage Server Edition
Topic: Hashed File - Read and Write Cache Setting
Replies: 12
Views: 6803

As I recall that was a bug that was remedied in the next version. Can't recall which version it was in, other than "yours". :wink:
by ray.wurlod
Tue Feb 10, 2015 4:24 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Remove Duplicates - Retain both Duplicates
Replies: 6
Views: 1361

Create a fork-join to identify the count from each key. Downstream of the Join, create a filter that passes only those key values for which the count is 1.
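The fork-join pattern described here can be sketched generically in Python. This is an illustration of the logic only, not DataStage code: one branch aggregates a count per key, the count is joined back to each row, and a filter keeps only keys whose count is 1, so both copies of any duplicate are dropped. The field names and sample data are invented.

```python
from collections import Counter

# Sample rows as (key, payload) pairs; key "B" is duplicated.
rows = [("A", 1), ("B", 2), ("B", 3), ("C", 4)]

# "Fork": one branch counts occurrences of each key.
counts = Counter(key for key, _ in rows)

# "Join" + filter: keep only rows whose key count is exactly 1,
# discarding every row of any duplicated key.
unique_rows = [row for row in rows if counts[row[0]] == 1]
print(unique_rows)  # [('A', 1), ('C', 4)]
```

In a parallel job the count branch would be an Aggregator, the rejoin a Join stage on the key, and the final step a Filter or Transformer constraint of `count = 1`.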
by ray.wurlod
Tue Feb 10, 2015 4:23 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Datastage Px job
Replies: 6
Views: 2230

Suneel, you require a sorted list of employee names associated with each department number, as far as I can see. Is such the case?
by ray.wurlod
Tue Feb 10, 2015 4:21 pm
Forum: IBM® InfoSphere DataStage Server Edition
Topic: Hashed File - Read and Write Cache Setting
Replies: 12
Views: 6803

Changing the maximum size of the caches will have no effect unless you are actually using the caches. There will only be a performance improvement if you are currently demanding more cache than the presently configured amount (in which case there will be warnings in job logs). Enabling row buffering...
by ray.wurlod
Mon Feb 09, 2015 10:21 pm
Forum: IBM® SOA Editions (Formerly RTI Services)
Topic: ISD 11.3.1 SOAP based web service questions
Replies: 4
Views: 9052

There is definitely (it seems to me) a trend towards identifiers of the form oneTwoThree, in a number of languages including C, C++, Java and Python. Not so much in databases (yet?).
by ray.wurlod
Mon Feb 09, 2015 6:46 pm
Forum: IBM® InfoSphere DataStage Server Edition
Topic: Server Jobs Performance Very Slow post Infosphere Upgrade
Replies: 20
Views: 18166

It makes (will make) no difference. The choice of supporting NLS at installation is what drives the internal storage of characters, not the value of the NLSMODE parameter in uvconfig. If you do change NLS mode a restart is required. However, it will make almost no difference to performance of server...
by ray.wurlod
Mon Feb 09, 2015 4:07 pm
Forum: IBM® InfoSphere DataStage Server Edition
Topic: Server Jobs Performance Very Slow post Infosphere Upgrade
Replies: 20
Views: 18166

There's no real reason to change anything in uvconfig. It won't help. There are two reasons for slower performance, and you can't do much about either of them. You're now using NLS, which means you're potentially processing more than one byte per character. Internally, with NLS enabled, DataStage us...