Search found 4992 matches
- Wed Feb 25, 2004 10:41 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Combine source rows into single target row?
- Replies: 14
- Views: 5639
You would have a stage variable for each data type, each variable checks the data type field and saves the value if it is the right type. Each stage variable is then written to an output field. A constraint on the transformer only outputs a row when the final data type is detected. Make sure the so...
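The stage-variable pivot described above can be sketched outside DataStage as well. The following Python stand-in (not DataStage syntax) holds one variable per data type and emits a combined row only when the final type arrives; the type codes "A"/"B"/"C" and "C" being the final type are assumptions for illustration.

```python
def pivot_rows(rows, final_type="C"):
    """Combine (key, data_type, value) rows into one row per key."""
    held = {}   # plays the role of the stage variables
    out = []
    for key, dtype, value in rows:
        held[dtype] = value          # save the value for its type
        if dtype == final_type:      # the transformer constraint
            out.append((key, held.get("A"), held.get("B"), held.get("C")))
            held = {}                # reset for the next source group
    return out

rows = [(1, "A", 10), (1, "B", 20), (1, "C", 30),
        (2, "A", 40), (2, "C", 50)]
```

Note that, as the post warns, this only works if the source rows arrive grouped by key with the final type last.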
- Wed Feb 25, 2004 10:35 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Read only Environments
- Replies: 14
- Views: 3224
Re: Read only Environments
This environment is a DataStage project - which we term Production. We also have another environment termed Development. Hmmm. Environment sounds synonymous with project in the poster's world. Access to servers is handled via userids and passwords; not a problem to lock down a server that way as som...
- Wed Feb 25, 2004 8:58 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Read only Environments
- Replies: 14
- Views: 3224
Wow. Your production and development exist on the same server? Didn't anyone find this to be a problem? That's like saying my production database and development database co-exist on the same server. Except in DataStage's case it's worse because only a single version can be installed and execut...
- Wed Feb 25, 2004 8:52 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Combine source rows into single target row?
- Replies: 14
- Views: 5639
- Wed Feb 25, 2004 9:32 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Launching DS Jobs With API and Visual Basic
- Replies: 3
- Views: 1263
Since dsjob.exe is the command line interface, your best option is to simply use this and "wrapper" it with your front end. Here's a shell script I posted to do the wrapper so that an enterprise scheduler could interface and run jobs: I suggest you just execute the NT/DOS program dsjob in the DS...
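The wrapper idea above can be sketched in a few lines. This Python sketch shells out to `dsjob -run -jobstatus` (the documented run-and-wait invocation) and then interprets the exit code; the code-to-outcome mapping used here (1 = finished OK, 2 = finished with warnings, anything else = failure) is an assumption that should be checked against your DataStage release.

```python
import subprocess

def run_job(project, job, dsjob="dsjob"):
    """Run a DataStage job and wait for completion via dsjob.
    Returns the dsjob process exit code."""
    cmd = [dsjob, "-run", "-jobstatus", project, job]
    return subprocess.call(cmd)

def interpret(status):
    """Map a dsjob exit code to an outcome for the calling scheduler.
    The mapping below is assumed, not taken from IBM documentation."""
    if status == 1:
        return "ok"
    if status == 2:
        return "warnings"
    return "failed"
```

An enterprise scheduler would call `run_job(...)` and branch on `interpret(...)`, exactly the wrapper pattern the post describes.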
- Wed Feb 25, 2004 9:29 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Combine source rows into single target row?
- Replies: 14
- Views: 5639
Unless you have obscene volumes or an obscene number of columns, you could probably just use your source file and put it into hash lookups. It sounds like each row has a type. Simply make the primary key on the hash file the natural key of the data plus the column type. Load the whole source file in...
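The composite-key hash lookup above translates directly to a dictionary keyed on (natural key, column type): load the source once, then do one lookup per type to assemble the combined target row. The type codes "A"/"B"/"C" below are illustrative assumptions, not anything from the original post.

```python
# Load the whole source file into the "hash file" once.
source = [(1, "A", 10), (1, "B", 20), (1, "C", 30), (2, "B", 99)]
hash_file = {(key, col_type): value for key, col_type, value in source}

def combined_row(key, types=("A", "B", "C")):
    """One lookup per column type builds the single target row."""
    return tuple(hash_file.get((key, t)) for t in types)
```

Missing types simply come back as `None`, which mirrors a failed hash lookup in the job.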
- Wed Feb 25, 2004 7:23 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: How to delete a hash file using DS or DSCommands ?
- Replies: 3
- Views: 1847
If you used an explicit path for the hash files (something like /var/opt/ds/hashfiles/) then simply using an "rm -r /your directory" is sufficient to blow it all away. If you used the default of letting your hash files fall into your project, then you must use the Universe TCL command DELETE.FILE to...
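For the explicit-path case, the cleanup really is just a recursive delete of the hash file's directory. A Python equivalent of the `rm -r` approach (using a throwaway temp directory as a stand-in for a real pathed hash file) looks like this; remember the post's caveat that project-resident hash files need `DELETE.FILE` instead.

```python
import os
import shutil
import tempfile

# Stand-in for an explicitly pathed hash file directory,
# e.g. /var/opt/ds/hashfiles/myhash (path is illustrative).
base = tempfile.mkdtemp()
hash_dir = os.path.join(base, "myhash")
os.makedirs(hash_dir)

# Equivalent of: rm -r /var/opt/ds/hashfiles/myhash
shutil.rmtree(hash_dir)
```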
- Tue Feb 24, 2004 12:37 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Has anyone tried executing Unix cmd from within transformer?
- Replies: 2
- Views: 796
- Tue Feb 24, 2004 10:26 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: populating parent and child tables in one job - Possible??
- Replies: 5
- Views: 3623
Ahhhhhhhhhh........splat (sound of me jumping out the top window of the highest building in town) Sucks to be you on this project. Your observations for your performance concerns are valid. Parents are processed first, children are processed second with surrogate foreign key substitutions of the par...
- Tue Feb 24, 2004 9:35 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: populating parent and child tables in one job - Possible??
- Replies: 5
- Views: 3623
In a word, SANDBOX. You are missing the concept of using a sandbox to land, stage, cleanse, and do all surrogate assignment work. Once you are happy with results, extract your inserts and updates from the sandbox and blast it into the target. You'll of course argue about performance, extra work invo...
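The sandbox flow from the two posts above (parents first, then children with surrogate foreign key substitution) can be sketched as a plain function. Everything here is illustrative Python, not a DataStage construct: parents get surrogate keys assigned in the sandbox, and child rows swap their natural foreign key for the parent's surrogate before anything is blasted into the target.

```python
def assign_surrogates(parents, children, next_key=1000):
    """Parents are processed first and receive surrogate keys;
    children then substitute the parent surrogate for their
    natural foreign key."""
    key_map = {}
    loaded_parents = []
    for natural_key, attrs in parents:
        key_map[natural_key] = next_key
        loaded_parents.append((next_key, attrs))
        next_key += 1
    loaded_children = [(key_map[fk], attrs) for fk, attrs in children]
    return loaded_parents, loaded_children

parents = [("P1", "alpha"), ("P2", "beta")]
children = [("P1", "c1"), ("P2", "c2"), ("P1", "c3")]
```

Only once the sandbox results look right do the inserts and updates move to the target, which is the whole point of staging the work.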
- Mon Feb 23, 2004 9:34 pm
- Forum: IBM® DataStage TX
- Topic: Error : Job is being accessed by another
- Replies: 3
- Views: 2064
- Mon Feb 23, 2004 3:51 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: How to submit a Datastage Job from UNIX command line
- Replies: 8
- Views: 4042
and 2ndly Ken, does this script cover all there is for the PX edition and PX jobs? As far as I know, dsjob works for all job types. Since this script just executes dsjob and uses it to query for job status, I hope there are no problems. All of the other stuff is used to simply demonstrate gathering ...
- Mon Feb 23, 2004 3:47 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Difference between Decimal and Numeric data types?
- Replies: 3
- Views: 5700
DataStage has to map multiple disparate data types into its own generic data types. This holds true for both inbound and outbound data. That's why it's so important to use the source and target imported metadata rather than make it up or attempt to use source metadata for a target. For the purposes ...
- Mon Feb 23, 2004 10:37 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: How to submit a Datastage Job from UNIX command line
- Replies: 8
- Views: 4042
This is covered fully on this site. Search for anything on dsjob, the unix command line interface.
Here is a script that I posted if you're looking for examples of a fully fleshed interface:
http://www.dsxchange.com/viewtopic.php?t=85578
- Mon Feb 23, 2004 12:41 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Creation buckets On the basis of Value
- Replies: 8
- Views: 1928
Okay, well then here's a last parting attempt to help before going to sleep (it's 1:30 in the morning here). If you're processing 300 million rows a day, I would hope that you have opportunities for incremental update rather than full re-aggregation. Otherwise, you must keep in mind that your unit...
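The incremental-update idea above is worth making concrete: rather than re-aggregating all history each day, keep the stored aggregate and apply only the day's delta rows to it. The bucket names and amounts in this Python sketch are made up for illustration.

```python
def apply_deltas(aggregate, deltas):
    """Fold the day's delta rows into the stored aggregate
    instead of re-aggregating the full history."""
    for bucket, amount in deltas:
        aggregate[bucket] = aggregate.get(bucket, 0) + amount
    return aggregate

# Yesterday's stored aggregate plus today's incremental rows.
agg = {"low": 100, "high": 250}
agg = apply_deltas(agg, [("low", 5), ("mid", 7)])
```

At 300 million rows a day, the difference between touching only the deltas and rescanning everything is exactly the unit-of-work concern the post raises.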