cleanup resources

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

Post Reply
scottr
Participant
Posts: 51
Joined: Thu Dec 02, 2004 11:20 am

cleanup resources

Post by scottr »

If I select Cleanup Resources for a particular job from Director, will it affect any other jobs that are currently running (which are using hashed files)?

Today I did this and another job stopped with the following error.

Program "JOB.180808767.DT.1358363097.TRANS1": Line 172, Internal data error.
File 'xxx/Lookup1996to2002/DATA.30':
Computed blink of 0x834 does not match expected blink of 0x0!
Detected within group starting at address 0x80024000!

thanks
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Cleanup resources only affects the currently selected job.

However, your error message indicates that this is not the answer to your problem. Your problem is in the hashed file Lookup1996to2002, which has become corrupted, almost certainly (given the hex address of the group in which the problem occurred) because the hashed file has hit the 2GB size limit.

You almost certainly will have lost some data, and will not readily be able to determine what data you have lost. The safest thing to do is to clear the hashed file, RESIZE it to use 64-bit addressing, and reload it.

From the Administrator client command window, or in a dssh session on the server, execute:

Code:

CLEAR.FILE hashedfilename
RESIZE hashedfilename * * * 64BIT
You may need to use SETFILE to establish a VOC pointer to the hashed file first.
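As a sketch of that step: SETFILE takes the path to the hashed file's directory and the VOC name to create. The path below is assumed from the error message (the file reported as 'xxx/Lookup1996to2002/DATA.30'); substitute the actual location on your server.

Code:

SETFILE xxx/Lookup1996to2002 Lookup1996to2002 OVERWRITING

Once the VOC pointer exists, CLEAR.FILE and RESIZE can refer to the file by the name Lookup1996to2002.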

You might also contemplate whether you really need every row and every column that you are loading into the hashed file. For example, if you only need current rows, don't load any non-current ones. Any column that is not used in downstream processing should never be loaded into the hashed file. If you can be savage enough with these cuts, you may be able to fit within the 2GB limit.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.