Craig \ Ray,
I have tried both of the options you suggested - deleting the files/directories and changing those options as well. But neither seems to have made any difference to the actual problem I'm getting.
However, here is a bit more information that might help me understand this issue better (I don't have much experience with server jobs, so some of my questions may sound moronic - please bear with me).
In my previous post I showed the area where the phantom rows are coming through. The main job is far bigger than the part I showed, and the other hashed file lookups are not getting this phantom-row issue (there are a total of 5 in the job, so 4 others plus the one I'm having the issue with).
After changing the type from 30 to 2 (for all 5), I noticed that running the job still creates a file D_Tmp_Batch_Check_Prev_Batch and a directory Tmp_Batch_Check_Prev_Batch containing the DATA.30 and OVER.30 (type 30) files. What is interesting is that none of the stages in the job is called that. My question is: where is this directory coming from? I'm fairly sure it is the one causing the problem, because when I killed the job from the shell and recompiled it, I noticed in the job statistics that the (lookup) link was still processing rows.
And when I tried to delete that directory, it wouldn't let me.
Also, does the directory name simply match the hashed file name? And are the D_ files delete files?
E.g. if the file name specified in the hashed file stage is Last_Batch, will we end up with Last_Batch (file), D_Last_Batch (file) and Last_Batch (directory)?
Finally, is there any other stage in server jobs that can create this kind of file structure?
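For what it's worth, here is a rough sketch I used to see what is actually on disk for a given name. It assumes what I *think* the layout is (a D_<name> dictionary file, plus either a <name> directory with DATA.30/OVER.30 for a type 30 dynamic file, or a single <name> file for a static type like 2) - please correct me if that assumption is wrong:

```python
import os
import sys

def inspect_hashed_file(project_dir, name):
    """Report what exists on disk for a hashed file called <name>.

    Assumed layout (my understanding, not confirmed):
      D_<name>          - dictionary file
      <name>/DATA.30    - data portion of a type 30 (dynamic) file
      <name>/OVER.30    - overflow portion of a type 30 (dynamic) file
      <name>            - a single file if the type is static (e.g. type 2)
    """
    dict_path = os.path.join(project_dir, "D_" + name)
    data_path = os.path.join(project_dir, name)

    present = "present" if os.path.isfile(dict_path) else "missing"
    print("dictionary D_%s: %s" % (name, present))

    if os.path.isdir(data_path):
        contents = sorted(os.listdir(data_path))
        print("%s is a DIRECTORY (looks dynamic / type 30): %s" % (name, contents))
    elif os.path.isfile(data_path):
        print("%s is a plain FILE (looks like a static hashed file)" % name)
    else:
        print("%s: nothing on disk" % name)

if __name__ == "__main__":
    # e.g. python inspect_hashed.py /path/to/project Tmp_Batch_Check_Prev_Batch
    inspect_hashed_file(sys.argv[1], sys.argv[2])
```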