Search found 4992 matches
- Fri Jun 27, 2008 8:30 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Date Calculation.
- Replies: 5
- Views: 2102
- Thu Jun 26, 2008 2:57 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: FTP error
- Replies: 8
- Views: 2576
- Thu Jun 26, 2008 2:37 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: FTP error
- Replies: 8
- Views: 2576
- Thu Jun 26, 2008 2:31 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: FTP error
- Replies: 8
- Views: 2576
- Thu Jun 26, 2008 2:29 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Issue Importing Table Def . DS 7.5.2
- Replies: 3
- Views: 1757
Has anyone else ever imported metadata from this Unidata server? There's a long setup/configuration section under your Start menu --> Ascential documentation that explicitly covers setting up Universe and Unidata. If you're not the first, you should talk to a coworker who's done it before. If you are th...
- Thu Jun 26, 2008 2:18 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Issue Importing Table Def . DS 7.5.2
- Replies: 3
- Views: 1757
- Thu Jun 26, 2008 2:17 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Two inputs to the same hashed file
- Replies: 14
- Views: 4040
So is it ok in principle to have two inputs to the same hashed file? Two processes, both updating a hashed file simultaneously, in a server job? I thought hashed files weren't available in PX because they didn't support parallel access. They're not in PX because PX is a horse of a different color. ...
- Thu Jun 26, 2008 2:08 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: copy error
- Replies: 5
- Views: 2060
- Thu Jun 26, 2008 2:07 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: compilation error
- Replies: 7
- Views: 2269
- Thu Jun 26, 2008 2:05 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: job stops running after some time
- Replies: 13
- Views: 7086
There's probably a datatype issue (wrong datatype or NULL) happening in the Aggregator stage and the job is blowing up. The 100-row count is not an indication of which row has the issue, just the last time the job updated its link statistics. If I were you I would take the sequential file and try to run t...
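The advice above — pull the Aggregator's input off the sequential file and inspect it — can be sketched outside DataStage. A minimal shell check, assuming a pipe-delimited file with the aggregated numeric column in field 3 (the layout, column position, and file path are all placeholders, not from the original post):

```shell
# Build a small sample file (placeholder layout: key|id|amount).
printf 'A|1|10.5\nB|2|\nC|3|abc\n' > /tmp/agg_input.txt

# Flag rows whose "amount" field is empty or non-numeric -- the kind
# of value that can blow up an Aggregator stage at runtime.
awk -F'|' '$3 == "" || $3 !~ /^-?[0-9]+(\.[0-9]+)?$/ { print NR ": " $0 }' /tmp/agg_input.txt
# prints:
# 2: B|2|
# 3: C|3|abc
```

Running a check like this against the real input usually points at the offending row far faster than re-running the job and watching the link statistics.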
- Wed Jun 25, 2008 8:50 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Two inputs to the same hashed file
- Replies: 14
- Views: 4040
If Colin says there's 2 Transformers, then it may be the case that there is another Transformer that is directing rows down one path or the other. In either case this will work fine if all of the rows always come from one data stream or the other. If rows are flowing down two separate streams then you ...
- Tue Jun 24, 2008 2:48 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Job Aborts after 50 errors; how to change?
- Replies: 9
- Views: 5080
- Tue Jun 24, 2008 12:11 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Job Aborts after 50 errors; how to change?
- Replies: 9
- Views: 5080
Warning limits are set when a job is run, by whatever facility you are using. If it's Director, it's part of the dialog box where you press the Run button. If it's job control, it's part of the command line invocation. If you don't specify it via job control, it then defaults to what is set as the...
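For the job-control path, the warning limit can be passed on the dsjob command line. A sketch, assuming the standard `dsjob -run` options; the project and job names are placeholders, and the exact semantics of `-warn 0` should be verified against your version's documentation:

```shell
# Abort the job once 75 warnings are logged, rather than the 50
# the original poster was hitting. ("myproj"/"myjob" are placeholders.)
dsjob -run -mode NORMAL -warn 75 -wait myproj myjob

# Passing 0 is the usual way to lift the warning limit entirely.
dsjob -run -warn 0 -wait myproj myjob
```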
- Tue Jun 24, 2008 11:29 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: LookUp on Condition
- Replies: 5
- Views: 1959
- Tue Jun 24, 2008 9:27 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: agregation
- Replies: 2
- Views: 1393
You need to specify the expected row count and the destination of the results before we can give the best recommendation for your SPECIFIC example. If the result is going to load into a table within the same database instance and reduce from 1 billion rows to 1 million rows, then the argument would be...
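The trade-off described above — aggregate in the database when source and target live in the same instance and the reduction is large — amounts to a pushed-down GROUP BY. A hedged sketch only: the client, connection string, and all table/column names are placeholders, not from the original thread:

```shell
# Let the database collapse ~1 billion detail rows to ~1 million
# summary rows so the full detail never crosses the wire.
# (sqlplus shown for an Oracle-style instance; use your own client.)
sqlplus -s scott/tiger@mydb <<'SQL'
INSERT INTO sales_summary (region, total_amt)
SELECT region, SUM(amt)
FROM   sales
GROUP  BY region;
COMMIT;
SQL
```

Doing the same reduction inside a job means extracting every detail row first, which is where the row-count and destination questions in the post come in.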