Also, when Importing, make sure you Filter the result to just the Owner Name (SCOTT) that you are interested in. Otherwise you will get everything in the schema, including system tables like you noted. I'd bet your SCOTT tables are in there, just buried along with everything else. After the fact, yo...
I got more of an impression that the 'flow' they were interested in wasn't of any job in particular or a series of jobs. As noted, the interest seemed to be in 'how it was controlled and run under UNIX'.
We shall see. Ajayone, pop back in if you have more questions on this topic!
It's a UNIX script that uses 'dsjob' to run any kind of DataStage job. You can use it from the command line or schedule it into an Enterprise Scheduler like Ctrl-M, or even something like cron. Not sure what you mean by the 'between jobs' comment. Visio charts? Flow? The script is well documented, f...
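For anyone wiring this into cron, a minimal crontab entry would look something like the sketch below. The install path, project name (dstage1), and job name (DailyLoad) are placeholders I've made up for illustration; dsenv, $DSHOME, and the dsjob -run / -jobstatus options are the standard server-side pieces.

```
# Hypothetical crontab entry: run the job at 02:00 every day.
# Source dsenv first so dsjob and its libraries are in the environment.
0 2 * * * . /opt/IBM/InformationServer/Server/DSEngine/dsenv && $DSHOME/bin/dsjob -run -jobstatus dstage1 DailyLoad >> /tmp/DailyLoad.log 2>&1
```

With -jobstatus, dsjob waits for the job to finish and reflects the job's status in its exit code, which is what makes it usable from an enterprise scheduler.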
Ok. You are correct that you should be cautious about going in and tweaking those parameters; however, proper tuning of them certainly is supported and recommended. As Ray notes, you'd need to have an idea of your maximum number of open hashed files at any given time and use that to tune the T30FILE entr...
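As a hedged illustration (the value 300 below is a made-up placeholder, not a recommendation; size it to your own measured peak of open hashed files), the tuning amounts to editing uvconfig in the engine directory and regenerating:

```
# In $DSHOME/uvconfig (stop the engine before changing it):
T30FILE 300    # max concurrently open dynamic (type 30) hashed files

# Then, from $DSHOME, regenerate the engine configuration and restart:
#   bin/uvregen
```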
You need to read the DB2/UDB API Stage Guide pdf that is in your Docs directory. There is a section there that explains how to handle column names with $ or # in them.
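From memory (so verify the exact convention against that PDF), the workaround the guide describes is to represent the special character by its ASCII code wrapped in double underscores. The column names below are made up for illustration:

```
# Illustration only -- confirm the exact convention in the guide:
ACCT#  ->  ACCT__035__    (# is ASCII 35)
BAL$   ->  BAL__036__     ($ is ASCII 36)
```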
PS: This same issue (and solution) exists for the Oracle stages.
It works the same way that it does in TOAD. Typically, you point it to an ORACLE_HOME and it uses the tnsnames.ora file there. Obviously, there can be more to it than that, but that's normally how it works. The fact that you got a 'table or view does not exist' error shows you are connected to an insta...
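For reference, a minimal tnsnames.ora entry looks something like this; the alias, host, port, and service name below are placeholders, not values from this thread:

```
# $ORACLE_HOME/network/admin/tnsnames.ora (placeholder values)
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = mydb.example.com))
  )
```

Whatever alias you use on the left (MYDB here) is what you'd supply as the remote server / data source name in the stage.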
Xpert wrote: Generally which is the latest version in DS
Generally, the one with the highest number.
As Ray notes, it changes frequently and depends on platform as well. Best to hunt down where the eServices website ended up and check the matrix there.
No, not 1024, and not in parallel. He meant he divides 1024 by the row size and uses that number. So, for a 50-byte row size, the Array Size would be set to 20, which means 20 rows would be sent across the network at a time and then processed serially on the other side.
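The arithmetic above can be sketched in a couple of lines of shell (the 1024-byte target and 50-byte row size are just the figures from this example):

```shell
#!/bin/sh
# Divide a 1024-byte target buffer by the average row size
# to get the Array Size described above.
ROW_SIZE=50                       # 50-byte rows, as in the example
ARRAY_SIZE=$((1024 / ROW_SIZE))   # integer division
echo "Array Size = $ARRAY_SIZE"   # prints "Array Size = 20"
```

For wider rows the number drops accordingly (e.g. 200-byte rows would give an Array Size of 5).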