Does your ID have read/write permissions on that particular directory? Also, provide the fully qualified path to the files. If you're in the directory where Text resides, provide the fully qualified path of the Sales folder.
That's not a plausible approach. Redesign it so that the contents of the hashed file are loaded into a dataset, then use a PX job to read the dataset. If you can be clearer about what needs to be done, someone here can point you in the right direction.
In the filter command of the Sequential File stage, put wc -l and specify one column in the metadata. This will get you the row count of the file you specify.
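Roughly what that filter does from a plain shell, as a sketch (the file name and path here are made up): the stage pipes the file contents into the command, and since wc reads stdin it prints only the number, with no filename attached, which is exactly what you want landing in that single output column.

```shell
# Hypothetical sample file standing in for your sequential file:
printf 'row1\nrow2\nrow3\n' > /tmp/sample.txt

# Reading via stdin redirection makes wc omit the filename,
# so the output is just the row count:
wc -l < /tmp/sample.txt
```

Note that `wc -l /tmp/sample.txt` (no redirection) would print the filename too, which would garble your one-column metadata.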
Did you make sure all your Numeric/Integer/BigInt/SmallInt columns in the sequential file contain only numbers and no non-numeric characters?
Stick in a Transformer and specify a reject link, and set the array size to 1. The culprit row will be rejected for you to analyze.
At the UV prompt, from the UNIX command line (telnet, PuTTY, etc.) or from DataStage Administrator.
Also, were the jobs locked by your ID? If not, you won't be able to 'free' them. Search for DS.TOOLS in the forum; it's another way of unlocking the jobs.
Now this is a good informational post, not something you see every day. Thanks for the great info, guys. This is definitely going in my favorites bucket :D
The Convert() function will work to retain only alphanumeric characters, but for non-printable characters I think you will have to go the routine route.
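If you just need to scrub a file before it hits the job (rather than inside a routine), tr from the shell is a rough analogue: tr -cd deletes everything *not* in the listed classes, which takes out non-printables and punctuation in one pass. A minimal sketch with made-up sample data:

```shell
# Input has a tab (non-printable-ish control char) and a '#':
# tr -cd '[:alnum:]' keeps only letters and digits, deleting the rest.
printf 'ab\tc#1\n' | tr -cd '[:alnum:]'
# leaves "abc1"
```

In practice you would likely keep '[:space:]' in the set as well so field and row delimiters survive.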
So is your scheduler controlling the process by reading return codes, or is your job sequence? First of all, creating flat files at the end of each extract is redundant, IMHO, as the scheduler can be set up to kick off the 11th job (your main sequence) only after the first 10 finish successfully.
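The return-code idea above can be sketched in shell. Everything here is a stand-in: run_extract and run_main are stub functions where you would call your real jobs (e.g. via the dsjob command), and the job names are invented; the point is only that exit statuses alone gate the main sequence, with no flat-file flags anywhere.

```shell
# Stubs standing in for the real extract jobs and the main sequence:
run_extract() { echo "running $1"; return 0; }    # pretend each extract succeeds
run_main()    { echo "main sequence started"; }   # stand-in for the 11th job

ok=1
for job in extract_01 extract_02 extract_03; do   # would be all 10 extracts
    run_extract "$job" || { ok=0; break; }        # stop at the first failure
done
# Kick off the main sequence only if every extract returned 0:
[ "$ok" -eq 1 ] && run_main
```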