Did the stage generate that SQL or did you? I would think your parameter marker would look more like :1 or ? rather than being bound using a column name.
What Ken is talking about can be done but isn't simple. To get an idea of what would be involved, first create a 'skeleton' job that would be your base job with no metadata in it and export it to a .dsx. Then add in the metadata for one table and export it to another .dsx. Look at the differences be...
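To make the export-and-compare step concrete, here is a minimal sketch. The file names and the tiny fake exports are illustrative only; in practice you would export the real jobs from the Designer client and diff those:

```shell
# Stand-ins for two real exports (a genuine .dsx is far larger).
# skeleton.dsx = base job with no metadata; table_a.dsx = same job
# after adding the metadata for one table.
printf 'BEGIN DSJOB\n   Identifier "Base"\nEND DSJOB\n' > skeleton.dsx
printf 'BEGIN DSJOB\n   Identifier "Base"\n   BEGIN DSRECORD\n   END DSRECORD\nEND DSJOB\n' > table_a.dsx

# The lines present only in the second export are the per-table
# metadata you would have to generate and splice in programmatically.
diff skeleton.dsx table_a.dsx || true
```

The `|| true` is there because `diff` exits non-zero when the files differ, which is exactly the case you care about.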
Why not just do something with a Lookup? Build a table with the substitution values and use that. When values change or are added, no change to the job itself would be needed.
OP is looking for deletion of category and not job. How do you know this? The OP didn't explicitly state that. My first thought was that they wanted to delete a category - and all jobs in it - from the command line and that's where my money is. Regardless, please be careful about handing out advice...
You don't need a DataStage job to 'combine' or merge two flat files, if the two files have the same metadata and merge means concatenate the two together. Take our Guru friend's cat command and add the missing redirection: cat SRCFILEA SRCFILEB > SRCFILEC A script can do this. Or the Merge stage if ...
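A quick self-contained demonstration of the concatenation (the sample file names and contents are made up here):

```shell
# Two flat files with identical layout (same metadata)
printf 'a,1\nb,2\n' > SRCFILEA
printf 'c,3\nd,4\n' > SRCFILEB

# Concatenate: SRCFILEC now holds every row of A followed by every row of B
cat SRCFILEA SRCFILEB > SRCFILEC

wc -l < SRCFILEC   # 4 lines
```

Note this only works because the layouts match; if the column definitions differ, you are back to needing a job (or the Merge stage) to reconcile them.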
You could create an 'empty' hashed file that anything like this could leverage. To clear the source hashed file would be as simple as copying the DATA.30 and OVER.30 from the empty hashed file into the current hashed file's directory. You could also clear it in the GUI. Another link to an Aggregator...
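A sketch of the copy, with hypothetical directory names. For illustration the "empty" files below are zero-byte stand-ins; a real DATA.30/OVER.30 pair carries a binary header, so in practice you would copy from a hashed file that was genuinely created empty and never written to:

```shell
# EMPTY.HF  = a hashed file created once and kept pristine
# CURRENT.HF = the hashed file you want to clear
mkdir -p EMPTY.HF CURRENT.HF
: > EMPTY.HF/DATA.30            # zero-byte stand-ins for this sketch
: > EMPTY.HF/OVER.30
printf 'stale rows' > CURRENT.HF/DATA.30
printf 'stale overflow' > CURRENT.HF/OVER.30

# "Clearing" is just overwriting the data and overflow portions
# with the empty ones; the directory itself stays in place
cp EMPTY.HF/DATA.30 EMPTY.HF/OVER.30 CURRENT.HF/
```

Do this only while no job has the hashed file open, or you risk corruption.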
Writing to a DB with an Update action of Update/Insert or Insert/Update is always going to be slow; in fact, very very slow. Always split your Updates and Inserts: load them through two separate links and it will be very much faster. I totally agree. Not just slow but typically the slowest way you...