Fellows
I have a number of jobs that have been suffering from corruptions in their internal status files, i.e. a job has aborted but still shows as running in Director, and also issues with the Wave resource.
These occur on a random basis and are rectified by clearing the status file and re-compiling.
Could this be caused by the &PH& directory filling up?
Also, is it good practice to manually clear out &PH& periodically using "CLEAR.FILE &PH&" (via the Administrator Command window)?
Any feedback would be great.
Zeddicus "Zed" Zorander
Clearing &PH&
Zed,
That could be one cause. And yes, it is generally a good idea to manage the Phantom directory so that the number of entries in it doesn't get out of hand.
Instead of manually clearing it (which you certainly can do) you might also want to consider an automated process. If you search the site, I'm pretty sure you'll find a post by T.J. on this subject along with a script he runs out of cron each night. From what I recall it does a 'find' of any files in there over a certain number of days old and removes them. It would be easy enough to roll your own as well.
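The nightly cron cleanup Craig describes might look something like the sketch below. The project path and the seven-day retention are assumptions for illustration, not T.J.'s actual script — point PH_DIR at your own project's &PH& directory.

```shell
#!/bin/sh
# Nightly cleanup of old phantom output files in &PH&.
# PH_DIR is a hypothetical project path -- adjust for your install.
PH_DIR="${PH_DIR:-/u1/dsadm/Ascential/DataStage/Projects/MyProject/&PH&}"

# Remove files untouched for more than 7 days (the retention period is
# a guess; pick whatever suits your site). Guarded so a bad path is a no-op.
if [ -d "$PH_DIR" ]; then
    find "$PH_DIR" -type f -mtime +7 -exec rm -f {} \;
fi
```

Dropped into cron as e.g. `0 2 * * * /path/to/clean_ph.sh`, it keeps &PH& from growing unbounded without any hand-clearing.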
You might also want to search on UVCONFIG in this forum. You may need to adjust some of the 'tunables' in the underlying engine, specifically T30FILES as a starting point.
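If T30FILES does turn out to need raising, the usual shape of the procedure is sketched below; it assumes $DSHOME is your engine directory, the engine can be stopped, and the value shown is purely illustrative.

```shell
# Hedged sketch: raise the T30FILES tunable in $DSHOME/uvconfig.
# The engine must be stopped first; uvregen rebuilds the binary
# configuration from the edited uvconfig file.
cd "$DSHOME"
bin/uv -admin -stop        # stop the DataStage engine
vi uvconfig                # e.g. raise the T30FILES value
bin/uvregen                # regenerate the engine configuration
bin/uv -admin -start       # restart the engine
```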
-craig
"You can never have too many knives" -- Logan Nine Fingers
You have at least two issues which can cause this. The easiest fix is, at TCL:

Code:
RESIZE &PH& 19

The problem is that a type 1 file is a short-name directory, which is the default for &PH&. Any item in a type 1 file whose id is longer than 14 characters gets split out into a subdirectory. That subdirectory is owned by the first user who runs the job, so if your umask is set wrong or the next user is in a different group, that user cannot write to it. The group id needs to be set on &PH&. At UNIX:

Code:
chmod -R 2770 \&PH\&

Do a search on umask. It needs to be set in dsenv in the DataStage engine directory.

The RESIZE is the easiest solution. Try that first.
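A minimal sketch of the umask setting Kim mentions, assuming it goes in $DSHOME/dsenv; 002 is a common group-writable choice here, not a prescription for every site.

```shell
# In $DSHOME/dsenv -- allow group write on files the engine creates,
# so &PH& subdirectories stay writable when different users run the job.
umask 002
```

Combined with the setgid bit from `chmod -R 2770`, new entries under &PH& then inherit the group and remain group-writable.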
Mamu Kim