Search found 53125 matches

by ray.wurlod
Fri Jan 18, 2008 3:35 pm
Forum: General
Topic: Get list of job names in a sequence in command line
Replies: 10
Views: 7406

OK, you can't do it without writing a query to get the information out of the Repository. I read the original question (being from a brand new poster) as wanting just to use dsjob, with no other work.

What's your solution for version 8?
by ray.wurlod
Fri Jan 18, 2008 3:31 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to write new stage?
Replies: 4
Views: 1289

Chapter 5 of the Parallel Job Advanced Developer's Guide is titled "Specifying Your Own Parallel Stages" and is, therefore, the reference for these three stage types.
by ray.wurlod
Fri Jan 18, 2008 3:27 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Remove duplicate
Replies: 7
Views: 1481

Composite keys are fine. Are your data partitioned, as well as sorted, on these key fields?

Not being partitioned on the keys would seem to manifest as "missing (some) duplicates" if the duplicates were on different partitions as a result, say, of Round Robin partitioning.
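The effect described above can be simulated in shell: a minimal sketch (file names and the two-partition split are illustrative only) showing that round-robin distribution puts identical keys on different partitions, so per-partition de-duplication leaves duplicates in the combined output.

```shell
#!/bin/sh
# Four rows containing two distinct keys, each duplicated.
printf 'A\nA\nB\nB\n' > rows.txt

# Round-robin across two "partitions": odd lines to 0, even lines to 1.
awk 'NR % 2 == 1' rows.txt > part0.txt
awk 'NR % 2 == 0' rows.txt > part1.txt

# De-duplicate each partition independently, as Remove Duplicates does.
sort -u part0.txt > out0.txt
sort -u part1.txt > out1.txt

# Combined output still has 4 rows: each key survived once per partition.
cat out0.txt out1.txt
```

Hash partitioning on the key fields would have sent both copies of each key to the same partition, so de-duplication would work as expected.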
by ray.wurlod
Fri Jan 18, 2008 3:25 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: accessing environment variable in scripts
Replies: 5
Views: 1157

Or it seems to me you could just reference it directly in the script - $VariableName or ${VariableName}, for example. That would fail if the job parameter had been given a non-default value at run time, which does not change the environment variable value. That's why I thought the indirection was neces...
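A minimal sketch of the direct reference suggested above. The variable name and value are hypothetical stand-ins; in practice the value would have been exported into the script's environment before it runs.

```shell
#!/bin/sh
# Hypothetical environment variable; normally exported by the calling
# environment rather than set here.
VariableName="/data/landing"
export VariableName

# Direct reference -- both forms are equivalent here:
echo "$VariableName"
echo "${VariableName}"

# The braces matter when the name abuts other text:
echo "${VariableName}_archive"
```

Note the caveat in the post: this reads the environment variable's value, not a job parameter value overridden at run time.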
by ray.wurlod
Fri Jan 18, 2008 3:23 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Can't schedule job or access schedule tab in Administrator
Replies: 3
Views: 1356

Your UNIX System Administrator needs to add the pertinent user ID(s) to cron.allow and at.allow in order for them to be allowed to schedule from DataStage (which uses cron or at, depending on the scheduling requirement - once or repeating).
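A sketch of the change the UNIX administrator would make. The real files are /etc/cron.allow and /etc/at.allow (exact paths vary by platform) and editing them requires root; a local file and a hypothetical user ID (dsadm) are used here so the example runs as-is.

```shell
#!/bin/sh
CRON_ALLOW=./cron.allow   # stand-in for /etc/cron.allow
DSUSER=dsadm              # hypothetical DataStage user ID

touch "$CRON_ALLOW"

# Append the user only if not already listed (idempotent).
grep -qx "$DSUSER" "$CRON_ALLOW" || echo "$DSUSER" >> "$CRON_ALLOW"

grep -x "$DSUSER" "$CRON_ALLOW"   # confirms the user is now listed
```

The same append would be repeated for at.allow if repeating schedules (cron) and one-off schedules (at) are both needed.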
by ray.wurlod
Fri Jan 18, 2008 3:21 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: how to delete the existing data and reload it again
Replies: 3
Views: 877

Plan ahead. You must incorporate this functionality from the beginning, by having some way of identifying which run loaded every row in the target. This might be a unique run ID, a timestamp, or anything else you can imagine. Use its value in the WHERE clause of a DELETE statement, perhaps executed ...
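A sketch of the run-ID approach described above. The table name, column name, and run-ID format are all hypothetical; the point is only that the identifier stamped on each loaded row drives the WHERE clause of the DELETE.

```shell
#!/bin/sh
# Hypothetical: every row inserted into target_table carries the
# load_run_id of the run that loaded it.
RUN_ID=20080118_01   # identifier of the run whose rows must be removed

SQL="DELETE FROM target_table WHERE load_run_id = '${RUN_ID}';"
echo "$SQL"
# In a job this statement might be issued as before-SQL in the target
# stage, or via the database's command-line client in a before-job
# subroutine, before the reload starts.
```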
by ray.wurlod
Fri Jan 18, 2008 3:19 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: AS400 read error
Replies: 3
Views: 1704

Same user ID? Who is "*N"?
by ray.wurlod
Fri Jan 18, 2008 3:18 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Performance issue while inserting into Oracle table
Replies: 7
Views: 1480

With a single node configuration file everything is executed sequentially irrespective of partitioning settings in stages. It does seem like a long time. Try not automatically executing sqlldr from DataStage but executing it subsequently, manually, with your DBA advising what's happening. Are there v...
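For reference, a sketch of what a single-node configuration file looks like. APT_CONFIG_FILE is the real environment variable that selects it; the hostname and disk paths below are placeholders that would need to match the actual server.

```shell
#!/bin/sh
# Write a one-node configuration file. With a single node every stage
# runs sequentially, whatever partitioning the job specifies.
cat > single_node.apt <<'EOF'
{
  node "node1"
  {
    fastname "etlhost"
    pools ""
    resource disk "/data/ds/datasets" { pools "" }
    resource scratchdisk "/data/ds/scratch" { pools "" }
  }
}
EOF

# Point the job at it for this run:
APT_CONFIG_FILE=$PWD/single_node.apt
export APT_CONFIG_FILE
```

Comparing elapsed time under a one-node and a multi-node file is one way to separate a partitioning problem from a database-side (sqlldr) problem.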
by ray.wurlod
Fri Jan 18, 2008 3:16 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to add a new user in DS
Replies: 3
Views: 1021

Plan for a different strategy when you upgrade to version 8.
by ray.wurlod
Fri Jan 18, 2008 3:14 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Finding and Replacing all Non Printable Characters
Replies: 1
Views: 916

Convert() function. Convert(Char(9):Char(12), "", InLink.TheString). This solution is sub-optimal, because it re-evaluates the Char() function twice for every row processed. Better would be to initialize a stage variable to Char(9):Char(12) and use that in the deriva...
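For anyone reading outside DataStage BASIC: Char(9) is a tab and Char(12) a form feed, and the Convert() call above deletes both from the string. A shell equivalent of the same deletion, using tr with octal escapes:

```shell
#!/bin/sh
# Delete tab (octal 011 = Char(9)) and form feed (octal 014 = Char(12))
# from the input, the shell analogue of
# Convert(Char(9):Char(12), "", InLink.TheString).
printf 'col1\tcol2\014end\n' | tr -d '\011\014'   # -> col1col2end
```

The stage-variable optimisation in the post serves the same purpose as tr taking its delete list once: the character list is built a single time rather than per row.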
by ray.wurlod
Fri Jan 18, 2008 3:11 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Encode
Replies: 4
Views: 1416

Do it in an after-job subroutine. Your job writes the header and body separately. The after-job subroutine encodes the body and (possibly) cats the head and body together. The result is then transferred.

You can use DataStage BASIC code or shell script. Doesn't matter.
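A sketch of the shell-script variant of that after-job subroutine. The file names are hypothetical, and base64 stands in for whatever encoding the job actually requires; the structure (encode body, concatenate header and encoded body, hand off for transfer) is the point.

```shell
#!/bin/sh
# Files the job wrote separately (contents are illustrative).
printf 'HEADER|20080118|2 rows\n' > header.txt
printf 'row1\nrow2\n'             > body.txt

# Encode the body only, then put the plain-text header in front of it.
base64 < body.txt > body.enc
cat header.txt body.enc > transfer.txt

head -1 transfer.txt   # the untouched header line
# transfer.txt is now the single file to hand to the transfer step.
```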
by ray.wurlod
Fri Jan 18, 2008 3:08 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Partition and Dataset
Replies: 11
Views: 2819

mavrick21 wrote: Sorry for 3 posts above. Problem with my browser.
See this post

ravibabu, please enclose job designs in Code tags so that indenting is preserved.
by ray.wurlod
Fri Jan 18, 2008 3:06 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: ODBC update aborting at certain times
Replies: 3
Views: 943

What makes you think you're not querying the reference table too? Have your DBA check out the possibility that the deadlock is happening because of a FOREIGN KEY constraint, where updating the reference table takes some kind of a lock on the corresponding record in the parent table, which you're in ...
by ray.wurlod
Fri Jan 18, 2008 3:03 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Some problem in parameter $APT_MONITOR_SIZE
Replies: 2
Views: 1655

Is it not documented somewhere that APT_MONITOR_SIZE is ignored if APT_MONITOR_TIME is set? If this is true it might explain the empty field in DSParams.
by ray.wurlod
Fri Jan 18, 2008 3:01 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Comparison of Sort Stage and Order by clause
Replies: 3
Views: 984

That suggests that the ORDER BY column is not indexed.

No information is published on the sorting algorithm used by DataStage other than "it's faster than UNIX sort command".

I suspect it's a multi-threaded heap-merge sort, but can offer no proof.