Search found 4992 matches
- Fri Mar 12, 2004 12:07 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Job control for reset all jobs
- Replies: 5
- Views: 1023
The dsjob executable is your gateway to everything you need to do. It is a command-line executable, so you can use its list switch to enumerate all jobs in a project. Then, you can cycle through the jobs and issue the reset command. All of this is available from the system command prompt in both Unix and Windoze...
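The loop described above can be sketched as a small wrapper; a sketch assuming the standard dsjob switches (`-ljobs` to list a project's jobs, `-run -mode RESET` to reset one) — project and job names here are placeholders, and paths/flags may differ on your install:

```python
import subprocess

def reset_cmd(project, job):
    """Build the dsjob invocation that resets a single job
    (-run -mode RESET is the reset form of the dsjob run command)."""
    return ["dsjob", "-run", "-mode", "RESET", project, job]

def reset_all_jobs(project):
    """List every job in the project with -ljobs, then reset each one."""
    listing = subprocess.run(["dsjob", "-ljobs", project],
                             capture_output=True, text=True, check=True)
    for job in listing.stdout.splitlines():
        if job.strip():
            subprocess.run(reset_cmd(project, job.strip()), check=True)
```

The same two calls can just as easily be scripted in a Unix shell or a Windows batch file; the point is that everything goes through the one executable.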
- Fri Mar 12, 2004 12:04 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Dynamically Selecting ODBC stage Table
- Replies: 5
- Views: 897
Re: Dynamically Selecting ODBC stage Table
I could do this with multiple ODBC stages and use a constraint in a transformer stage to decide the target. You supplied your own answer. An alternative is to derive the target table name as an output column and stream to a sequential file. Then, use an after-job cutter script to separate the singl...
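The after-job cutter can be a few lines of script; a sketch assuming `|`-delimited rows with the derived table name carried in the last column (the separator, layout, and file names are illustrative, not from the original post):

```python
from pathlib import Path

def split_by_table(src, out_dir, sep="|"):
    """Fan rows of one sequential file out into a file per target table,
    keyed on the table name in the LAST column of each row."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    handles = {}
    try:
        for line in Path(src).read_text().splitlines():
            if not line:
                continue
            # Split off the trailing table-name column; keep the rest as data.
            data, _, table = line.rpartition(sep)
            fh = handles.get(table)
            if fh is None:
                fh = (out / f"{table}.txt").open("w")
                handles[table] = fh
            fh.write(data + "\n")
    finally:
        for fh in handles.values():
            fh.close()
```

Each per-table file can then be bulk-loaded to its own target, so one job feeds any number of tables.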
- Thu Mar 11, 2004 11:59 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: viewing metadata of a hashed file
- Replies: 2
- Views: 542
I assume you mean a DS hash file, and not a Universe/Unidata hash file. There are two ways: 1. Go look at the job that creates and populates the hash file. As a courtesy, save the definition into the DS Manager library. 2. Import hash file metadata using DS Manager to get it into the library if you ...
- Thu Mar 11, 2004 3:44 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Keeping old job log message
- Replies: 2
- Views: 2578
No. If you find that you need historical log information you should extract it and move it into a separate DB. You can find techniques if you search this forum. As for the reasons why the log is cleared, it's actually removed. Anytime you import a job, all of its existing support tables are dropped ...
- Thu Mar 11, 2004 2:10 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Question regarding DS KeyManagement?
- Replies: 5
- Views: 823
Are you using this function and then applying a constraint to throw away rows in a subsequent transformer stage? All I see is that, if this function is called for the first time with a new argument value, it starts numbering from 1000. Otherwise, it always returns the current value in the has...
- Thu Mar 11, 2004 1:36 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Question regarding DS KeyManagement?
- Replies: 5
- Views: 823
- Thu Mar 11, 2004 1:06 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Link Partitioning/Collecting or Coded Round Robin Partitions
- Replies: 7
- Views: 2314
Optimize your spool of data out of DB2. Write a job that is just DB2 --> xfm --> seq and see how fast that runs. Then, pick a numeric column, something like the primary key, and instantiate that job, passing in parameters to divide the data in the SQL. You'll find that the degree of instantiatio...
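Dividing the data in the SQL by a numeric key comes down to one MOD predicate per job instance; a minimal sketch of generating those statements (the table and column names are placeholders):

```python
def partition_sql(base_sql, key_col, degree):
    """One SELECT per job instance: instance i reads only the rows whose
    numeric key is congruent to i modulo the instance count, so the
    instances cover the table with no overlap and no gaps."""
    return [f"{base_sql} WHERE MOD({key_col}, {degree}) = {i}"
            for i in range(degree)]
```

Each instance of the job receives the instance number and degree as parameters and plugs them into its SELECT, so adding instances needs no job changes.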
- Wed Mar 10, 2004 9:00 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Exploding one record into multiple records
- Replies: 1
- Views: 581
Unless you have obscene amounts of source rows, why not use a relational database to do the work for you? Load the primarykey, startdate, and enddate into a work table. You hopefully have a table of dates somewhere (TIME dimension in Kimball speak). You could select your work table joining to the TIM...
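The join described above works in any SQL engine; a sketch in SQLite with toy table and column names (the TIME dimension here holds one row per calendar day):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE work(pk INTEGER, startdate TEXT, enddate TEXT);
    CREATE TABLE time_dim(day TEXT);  -- the TIME dimension, one row per day
    INSERT INTO work VALUES (1, '2004-03-01', '2004-03-03');
    INSERT INTO time_dim VALUES ('2004-02-29'), ('2004-03-01'),
                                ('2004-03-02'), ('2004-03-03'), ('2004-03-04');
""")
# One output row per day between startdate and enddate, inclusive:
# the database explodes each range for you, no job loop required.
rows = con.execute("""
    SELECT w.pk, t.day
    FROM work w
    JOIN time_dim t ON t.day BETWEEN w.startdate AND w.enddate
    ORDER BY t.day
""").fetchall()
```

The three-day range expands to three rows; the same SELECT then feeds straight back into the job as its source.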
- Tue Mar 09, 2004 9:14 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Which is more efficient Hash Tables or OCI lookups
- Replies: 17
- Views: 4908
I haven't done a CDR (Call detail record?) warehouse, but the principles still apply. If you have a VLD (Very Large Dimension), then you typically also have a high volume of fact records to process daily, so this only makes sense. In a high-volume fact processing solution, you MUST employ job instantiation ...
- Tue Mar 09, 2004 9:52 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Fixing corrupted log files on reboot
- Replies: 15
- Views: 5319
A hard crash like the one you are describing is tricky to recover from programmatically. Who watches the watcher? If the main controlling job itself crashes, corrupting its log, status, and config files, then how does that get automatically rectified? I think in the event of a catastrophic failure, such as a reb...
- Mon Mar 08, 2004 3:51 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Hash File Problem...
- Replies: 11
- Views: 2339
- Mon Mar 08, 2004 3:33 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Hash File Problem...
- Replies: 11
- Views: 2339
If you have a hash file and two jobs have the same exact definition, then viewing data in both jobs should show the EXACT SAME ROWS. It will read the hash file the same way/order, so the first row listed in both jobs should be the same row. Is it? If not, you're looking at different files. If it is,...
- Mon Mar 08, 2004 3:13 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Hash File Problem...
- Replies: 11
- Views: 2339
- Mon Mar 08, 2004 3:11 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Windows X UNIX
- Replies: 4
- Views: 1475
This might be a little interesting. However, to directly answer your question, the differences between the two from a hardware perspective make it a difficult comparison. Intel CPUs are really fast, whereas Unix boxes tend to be slower. So, it depends on how you design your jobs. If your jobs are...
- Mon Mar 08, 2004 2:19 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Null command
- Replies: 6
- Views: 1407