Are you inserting 5 rows per minute into the DataStage job log files? (This is a very small amount and could not, by itself, cause I/O issues on the UNIX machine.) Or do you have 5 multi-instance jobs per minute writing to the same log file? If so, how many records does your log file have?
What kind of entries are being written to your log files? If you know that a log file will receive a lot of entries, you can set the dynamic file's minimum modulus to a larger value, which at least keeps the file from dynamically increasing and decreasing its modulus too often.
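A sketch of that approach, assuming the log is a standard UniVerse dynamic (type 30) hashed file and the command is run from the Administrator command interface; the file name RT_LOG123 and the modulus value 2000 are placeholders you would replace with your job's log file and a value sized to your record count:

```
ANALYZE.FILE RT_LOG123
RESIZE RT_LOG123 * * * MINIMUM.MODULUS 2000
```

ANALYZE.FILE reports the file's current and minimum modulus, which helps you pick a sensible value before resizing; the asterisks in RESIZE keep the existing file type, modulus, and separation unchanged while raising only the minimum modulus.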
IO system issue
Moderators: chulett, rschirm, roy
The amount is higher for 2 or 3 jobs, which could be considered "critical".
For example, one of the most critical log files belongs to a job that parses COBOL files; each run processes one file, and the average is around 5 files per minute.
Per run, 10 lines are written (2 of them for the auto-purge, since I set auto-purge to every 10 runs).
Approximately 100 to 150 lines are written to the log file per minute.
The entries in the log file are "standard": job start, environment variable settings, NLS, Finished, etc.
Setting the dynamic file's minimum modulus seems like an interesting option.
I have never touched this kind of file (RT_LOGnn).
Should the file be deleted and then recreated, and does that have any impact on job execution?