Search found 53125 matches

by ray.wurlod
Thu Dec 09, 2004 3:21 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Parallel Extender usage
Replies: 9
Views: 3413

You cannot estimate time without knowing how large the rows are and whether there are other factors influencing throughput such as network bandwidth, number of Oracle listeners, parallelism within the query, other locking operations on the target table and so on. This is a "how long is a piece of s...
by ray.wurlod
Thu Dec 09, 2004 3:18 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Checkpoints (or savepoints) in a job
Replies: 3
Views: 1717

Not directly. You have control over the number of rows per transaction but to implement the rest you would have to design it in. You would need to keep a count of how many rows had been loaded, and build in restart logic (perhaps driven from a job parameter) to start from that point. It can be done,...
by ray.wurlod
Thu Dec 09, 2004 3:15 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Creating multiple flat files using DataStage Enterprise Ver
Replies: 1
Views: 1555

Of course.
You may need to decide exactly how you want to partition the data to the multiple flat files, rather than relying on one of the PX partitioning algorithms.
Otherwise, simply write to multiple Sequential File stages.
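Deciding the partitioning yourself, rather than relying on a default algorithm, amounts to routing each row to a file by some key. A minimal Python sketch of that idea (the key function and bucket count are illustrative assumptions, not anything DataStage-specific):

```python
def partition_rows(rows, n_files, key):
    # Route each row to one of n_files buckets by hashing its key,
    # mimicking a deliberate partitioning choice instead of a default one.
    # Each bucket would then be written to its own flat file.
    buckets = [[] for _ in range(n_files)]
    for row in rows:
        buckets[hash(key(row)) % n_files].append(row)
    return buckets
```

Rows with the same key always land in the same file, which is usually the property you want from a deliberate partitioning scheme.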
by ray.wurlod
Thu Dec 09, 2004 3:13 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Insertion issues
Replies: 2
Views: 1132

You should be able to detect within DataStage that the write has failed, and capture the row that failed to be inserted (read about reject links).

What makes "them" think that the second INSERT would succeed when the first INSERT had failed? :roll:
by ray.wurlod
Thu Dec 09, 2004 1:57 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Pass Generated Value to After-job Subroutine
Replies: 2
Views: 757

Lots of neat ways but they all boil down to pretty much the same thing: park it on disk, for example via another output link writing to a text file or a hashed file. The job's user status area isn't in memory; it's a field in one of the run time hashed files in the Repository. So any "neat" solution using the user ...
by ray.wurlod
Thu Dec 09, 2004 12:38 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Routine to reset a job
Replies: 6
Views: 4082

Your original problem was casing. Function names are case sensitive.
The function name you wanted was DSAttachJob. The function name you specified was DSAttachjob. Not the same function.
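The same failure mode is easy to reproduce in any case-sensitive name lookup. A hypothetical Python sketch (the registry and function body are invented for illustration; only the DSAttachJob name comes from the post):

```python
# A registry of callable names, looked up case-sensitively,
# the way DataStage BASIC resolves its API function names.
functions = {"DSAttachJob": lambda: "attached"}

def call(name):
    try:
        return functions[name]()
    except KeyError:
        # DSAttachjob is not the same key as DSAttachJob
        return "unknown function: " + name
```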
by ray.wurlod
Wed Dec 08, 2004 7:08 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Filtering non alphanumeric characters
Replies: 7
Views: 2470

So, to keep all the alphanumerics, use that result as the list of characters to convert to "" (that is, delete them from the original string).

Code:

Convert(Oconv(Oconv(MyString,'MC/N'),'MC/A'), "", MyString)
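The same idiom in Python, for readers unfamiliar with the BASIC conversion codes: the two Oconv calls strip the numerics and then the alphabetics, leaving only the unwanted characters, and Convert then deletes exactly those from the original string. A minimal sketch of that logic:

```python
def keep_alphanumerics(s):
    # Equivalent of Oconv(Oconv(MyString,'MC/N'),'MC/A'):
    # collect every character that is neither a digit nor a letter.
    non_alnum = {ch for ch in s if not ch.isalnum()}
    # Equivalent of Convert(..., "", MyString): delete those characters,
    # which leaves only the alphanumerics.
    return "".join(ch for ch in s if ch not in non_alnum)
```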
by ray.wurlod
Wed Dec 08, 2004 7:01 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Convert Redbrick plugin to db2 plugin
Replies: 6
Views: 1423

Red Brick 6.30 is the current version. Red Brick 6.40 is almost ready for beta testing. Design for releases beyond that is under way. If they're continuing to develop it, is it likely that they'll drop support for it? Don't automatically accept anything sales dudes tell you. Oblige them to find out,...
by ray.wurlod
Wed Dec 08, 2004 3:08 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Using result from BEFORE-SQL in Input SQL
Replies: 5
Views: 1443

Think about the amount of work you're asking DataStage to do. How many rows are selected by this query? SELECT ... FROM FACT WHERE CAPTR_DT = (SELECT MAX(CAPTR_DT) FROM MONTH_CONTROL) How many rows are selected by this query? SELECT ... FROM FACT then doing a lookup for every row wit...
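The contrast being drawn, restricting the rows in the source query versus fetching everything and doing a lookup per row, can be sketched in Python. The row shapes and field name here are illustrative assumptions:

```python
def filter_latest(fact_rows, control_dates):
    # Compute MAX(CAPTR_DT) once, as the subquery does, then select
    # only the matching rows, instead of comparing every fact row
    # against the control table via a per-row lookup.
    latest = max(control_dates)
    return [r for r in fact_rows if r["captr_dt"] == latest]
```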
by ray.wurlod
Wed Dec 08, 2004 3:03 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: creating two surrogate keys in a single table??
Replies: 4
Views: 863

It makes more sense not to use the Sequential File stage, for performance if nothing else. You can generate the surrogate key within the Transformer stage, perhaps in a stage variable, then use this as the value of the primary key of the one table and the foreign key of the other table. If you use a...
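The stage-variable technique, generating the surrogate key once and reusing it as the primary key of one table and the foreign key of the other, can be sketched like this in Python. The counter, table shapes, and column names are assumptions for illustration only:

```python
from itertools import count

key_gen = count(1)  # plays the role of the stage variable's key source

parents, children = [], []
for name, child_names in [("A", ["a1", "a2"]), ("B", ["b1"])]:
    sk = next(key_gen)  # generate the surrogate key once per parent row
    parents.append({"pk": sk, "name": name})        # primary key of table 1
    for child in child_names:
        children.append({"fk": sk, "name": child})  # foreign key in table 2
```

Because the key is generated once and reused, the two tables stay consistent without a second lookup.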
by ray.wurlod
Wed Dec 08, 2004 2:59 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Help in WriteSeq!
Replies: 3
Views: 1234

Do you understand why? :wink:
by ray.wurlod
Wed Dec 08, 2004 2:55 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Convert Redbrick plugin to db2 plugin
Replies: 6
Views: 1423

I believe it would be relatively straightforward. What Siva is postulating is switching from one bulk-loading stage type to a different bulk-loading stage type. There's much less difference between bulk loaders than there is between database servers. Hopefully the DB2 bulk loader stage handles dates...
by ray.wurlod
Wed Dec 08, 2004 2:49 pm
Forum: General
Topic: Object Variable or with variable not set
Replies: 11
Views: 9844

DataStage clients use OLE (object linking and embedding) to access objects exposed in a number of OLE servers, among them dsobjects.dll. In order to understand this mechanism, and the errors it can generate, you really do need to have studied some form of Windows programming using OLE. The bad news ...
by ray.wurlod
Wed Dec 08, 2004 2:45 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Problem with installation on HP system
Replies: 4
Views: 786

It is necessary to have temp space because some files are compressed on the CD; the CD being read-only means they can't be uncompressed there!
by ray.wurlod
Wed Dec 08, 2004 2:44 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: ds_loadlibrary: error in dlopen - Dynamic Error
Replies: 4
Views: 1583

Also make sure that the Oracle 32-bit libraries are mentioned ahead of the Oracle 64-bit libraries.
It also helps to set

Code:

ORACLE_LIB=$ORACLE_HOME/lib32 
in dsenv.