Temporary lookuptable.* files unexpectedly accumulating
Posted: Thu Feb 07, 2013 3:30 am
According to IBM documentation, lookups can create temporary datasets on the resource disk defined in the configuration file.
These lookups seem to be cleaned up after a successful execution, but remain if the job aborts.
One must then clean the files manually or with a script.
http://www-01.ibm.com/support/docview.w ... wg21441823
We have a number of jobs that are scheduled to run on a regular basis, in the same project, on the same server.
These jobs are all based on the same template and use the same settings (apparently...).
However, one and only one of these jobs generates 'lookuptable' dataset files, which are never deleted, even though that job never aborts.
16 files are created at each run.
The job has 8 lookup stages, normal or sparse, including a total of 21 reference links (16 reference links to normal lookups).
The project is configured to run on 1 node only, and each stage is explicitly set to 'sequential' mode.
Even though we are setting up a script to clean up that directory, we would rather prevent these files from being created in the first place, or at least ensure they are deleted.
Does anyone know what is happening?
1. Is it possible to prevent the creation, or ensure the deletion, of these files?
2. How can we read the content of these files? (They are not plain text, and the Data Set Manager returns 'unknown error reading record schema'.)
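In the meantime, the cleanup script mentioned above could be sketched as follows. This is only a hypothetical example: the function name and the resource disk path are assumptions, and you would substitute the directory defined in your own APT configuration file. It keeps files younger than a day so it does not touch lookups belonging to currently running jobs.

```shell
#!/bin/sh
# Hypothetical cleanup sketch for orphaned lookuptable.* files.
# cleanup_lookuptables DIR: delete lookuptable.* files in DIR that are
# older than one day (recent files may belong to jobs still running).
cleanup_lookuptables() {
    find "$1" -maxdepth 1 -type f -name 'lookuptable.*' -mtime +1 -exec rm -f {} \;
}

# Example call -- the path is an assumption, use the resource disk
# directory from your configuration file:
# cleanup_lookuptables /datastage/resource
```

Scheduling something like this from cron outside the job run windows would at least keep the directory from filling up, even if it does not explain why only this one job leaves the files behind.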