Search found 53125 matches

by ray.wurlod
Tue Apr 15, 2008 1:28 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Applying Loop in Datastage
Replies: 12
Views: 3942

I'd use a server job and write to a Type 19 hashed file. Only 30 lines - sheesh! The server job would be done before the parallel job had even gotten started.
by ray.wurlod
Mon Apr 14, 2008 11:45 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Error reading nullable date from sequential file
Replies: 4
Views: 1360

The representation of NULL must have precisely 10 characters if you have specified that the field width is 10. Otherwise read the field as VarChar, and convert it to Date subsequently. VarChar can legitimately have a value whose length is zero; Date cannot. Any specified in-band null (representatio...
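The read-as-VarChar-then-convert logic can be sketched outside DataStage; this is a minimal C illustration under stated assumptions (the `NULL_TOKEN` value and the `yyyy-mm-dd` format are hypothetical examples, not from the post):

```c
#include <stdio.h>
#include <string.h>

/* Assumed in-band null representation; per the post it must be exactly
 * as wide as the field (10 characters here). */
#define NULL_TOKEN "0000-00-00"

/* Parse a 10-character yyyy-mm-dd field that was read as a string.
 * Returns 1 and fills y/m/d on success; returns 0 for an empty string,
 * the in-band null token, or a malformed value.  An empty string is a
 * legitimate VarChar value but not a legitimate Date, hence the check. */
static int parse_nullable_date(const char *field, int *y, int *m, int *d)
{
    if (field[0] == '\0' || strcmp(field, NULL_TOKEN) == 0)
        return 0;                       /* empty or in-band null -> NULL */
    if (sscanf(field, "%4d-%2d-%2d", y, m, d) != 3)
        return 0;                       /* not a well-formed date        */
    return 1;
}
```

In a real job the same decision would be made in a Transformer derivation after the VarChar read, rather than in C.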
by ray.wurlod
Mon Apr 14, 2008 11:44 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Applying Loop in Datastage
Replies: 12
Views: 3942

Your requirement is too vague.

Every DataStage job is inherently a loop over all records in its source.

What do you mean more specifically?
by ray.wurlod
Mon Apr 14, 2008 11:24 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Oracle Remote Server
Replies: 14
Views: 4857

You can do that from the Administrator client if you cannot access the server machine. Get to the command window for the project and execute the command SH -c "tnsping servername" If you get "command not found" then your PATH does not include $ORACLE_HOME/bin, so try it entering the full ...
by ray.wurlod
Mon Apr 14, 2008 10:05 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Control character or Unprintable character in string
Replies: 10
Views: 1971

That's not generally true. All printable accented characters are between 129 and 255.
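The distinction the post draws can be sketched in C, assuming a single-byte Latin code page where, as the post says, printable accented characters fall in 129-255 (the exact printable range varies by code page; treating byte 128 as unprintable follows the post's wording and is an assumption):

```c
/* Bytes 0-31 and 127 (DEL) are the ASCII control characters; 32-126 are
 * printable ASCII.  Per the post, accented characters occupy 129-255 in
 * the assumed single-byte code page, so a "strip everything above 127"
 * rule would wrongly discard them. */
static int is_control_byte(unsigned char c)
{
    return c < 32 || c == 127 || c == 128;
}

/* Return 1 if the string contains any control/unprintable byte. */
static int has_control_char(const char *s)
{
    for (; *s; ++s)
        if (is_control_byte((unsigned char)*s))
            return 1;
    return 0;
}
```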
by ray.wurlod
Mon Apr 14, 2008 9:58 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Hashed File
Replies: 4
Views: 1100

Best according to what criteria? If your criterion is total execution time being as short as possible, and you have the power in the machine, then as many jobs as you like can run at the same time and read from the same hashed file. There is no problem, or overhead, with doing this. If you are enabl...
by ray.wurlod
Mon Apr 14, 2008 8:57 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Hashed File
Replies: 4
Views: 1100

Compared to what?

What other strategy do you propose for doing the checking of keys/returning of looked-up values? What "impact on performance" would your alternative strategy have? Indeed, what do you mean by "performance" in an ETL context?
by ray.wurlod
Mon Apr 14, 2008 7:57 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Error in merging files File '/tmp/sovRukEa/opf__9' write err
Replies: 4
Views: 2132

My point was that the file opf__9 was already resident on /tmp. Yes, sorting will use scratch space, but it should go first to the scratch disk resource specified in the configuration file, only going to /tmp as a "last resort". But if the configuration file lazily specifies /tmp as the scratch spac...
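A configuration file that names an explicit scratch disk looks something like the fragment below, so sort spill goes there rather than to /tmp. This is a generic sketch; the hostname and paths are placeholders, not from the thread:

```
{
  node "node1" {
    fastname "etlhost"
    pools ""
    resource disk "/data/ds/datasets" {pools ""}
    resource scratchdisk "/data/ds/scratch" {pools ""}
  }
}
```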
by ray.wurlod
Mon Apr 14, 2008 7:50 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Oracle Remote Server
Replies: 14
Views: 4857

Good progress, we're eliminating some possibilities. You now need to check that the connection parameters in your DataStage job are correct. Ideally these are job parameters, so you can just look at the "job started" event in the job log to determine which values were supplied. Otherwise you have to...
by ray.wurlod
Mon Apr 14, 2008 7:41 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Oracle Remote Server
Replies: 14
Views: 4857

In that case you need to check that that particular entry in tnsnames.ora is correct.
by ray.wurlod
Mon Apr 14, 2008 7:09 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Job aborts when run with Node 2 or more
Replies: 5
Views: 1014

Please post your one-node configuration file and job score and your two-node configuration file and job score. Enclose each in Code tags for easier visibility.
by ray.wurlod
Mon Apr 14, 2008 7:08 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: CPU CYCLES
Replies: 18
Views: 3158

Licensing for DataStage is based on the number of CPUs, not on the number of CPU cycles. Cycle (or MIPS) licensing is really only ever encountered with mainframe systems. That said, I am not aware of the DataStage licensing model on USS - check with your support provider. There is no relationship wh...
by ray.wurlod
Mon Apr 14, 2008 6:17 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: CPU CYCLES
Replies: 18
Views: 3158

The answer is a definite maybe. What does "brached" mean? It depends also on many factors, not all of them internal to DataStage, such as whether CPU cycles have to be expended on paging activities as each process exhausts its timeslice and gets another one. So one of the factors is how the operatin...
by ray.wurlod
Mon Apr 14, 2008 4:31 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: External Routines
Replies: 6
Views: 2413

What I understand is that memory allocated for the return value will be freed automatically by DataStage when (after) the routine returns. However, any memory allocated within the routine (say, for local variables) does need to be freed explicitly.
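That ownership rule can be sketched as follows. The routine name and signature here are hypothetical (the actual external-routine interface is defined by DataStage, not by this sketch); the point is which buffer gets freed where:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical external routine that upper-cases its input.  Per the
 * post's understanding: the returned buffer is freed by the caller
 * (DataStage) after the routine returns, while any scratch memory
 * allocated inside the routine must be freed before returning. */
static char *uppercase_routine(const char *input)
{
    size_t n = strlen(input);
    char *work   = malloc(n + 1);   /* local scratch: ours to free      */
    char *result = malloc(n + 1);   /* return value: the caller frees   */
    if (!work || !result) {
        free(work);
        free(result);
        return NULL;
    }

    memcpy(work, input, n + 1);
    for (size_t i = 0; i < n; ++i)
        result[i] = (char)((work[i] >= 'a' && work[i] <= 'z')
                               ? work[i] - 'a' + 'A'
                               : work[i]);
    result[n] = '\0';

    free(work);                     /* local allocation freed here      */
    return result;                  /* NOT freed here; caller owns it   */
}
```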
by ray.wurlod
Mon Apr 14, 2008 4:29 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Sequence execution control based on file data value
Replies: 14
Views: 3650

For every record, you say?

Sounds like the easiest approach would be to use this file as the reference input to a Lookup stage at the start of Job2 and Job3 to determine whether to process that record. Run both Job2 and Job3 at the same time, if you have the resources.