Type 30 Descriptor Table Full - Windows

Posted: Thu Jun 08, 2006 8:10 am
by ecwolf
Good Day!!

We are using DataStage 7.5 Server Edition on a Windows platform and we have a number of sequences which are running in parallel.

Recently we have been running into the famous "Unable to allocate Type 30 descriptor, table is full" error. I have been researching the forum and from what I understand, there is a T30FILE setting that needs to be adjusted. I have two questions:

1. From what I've read, this setting can be modified in a UNIX configuration file. Is there a similar configuration file in Windows? If so where is it? I have not been able to find a definitive answer. If I've missed a post somewhere please let me know.

2. Is there any way of cleaning out the T30File table? We have been doing a lot of development and I fear that this table is filled with old and obsolete entries. Again any advice would be appreciated.

Thanks and Cheers!!

Eric

Posted: Thu Jun 08, 2006 8:22 am
by kcbland
uvconfig is in the DSEngine directory on Windoze as well. Check it out.

T30FILE is not a true table, but a shared memory structure. There are no old and obsolete entries. A reboot once a month never hurts. Your issue is too many simultaneous dynamic hashed files being used. Just up the number and you'll be fine. Follow the same directions: get users out, log off all clients, stop DataStage, go to the server command line, modify uvconfig, run uvregen, then reboot the server (a good idea) or else restart DataStage.
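For reference, the change itself is a single line in uvconfig followed by a regen. The value below is an example, not a recommendation, and the exact DSEngine path varies by install:

```
# uvconfig (in the DSEngine directory)
# Raise the limit on simultaneously open dynamic hashed files.
# 1000 is an illustrative value -- size it for your workload.
T30FILE 1000
```

With all clients logged off and DataStage stopped, run uvregen from the DSEngine bin directory so the new value is compiled into the shared configuration, then restart (or reboot).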

Posted: Thu Jun 08, 2006 4:06 pm
by ray.wurlod
If you don't believe that you are using that many hashed files, remember that most of the Repository tables are also hashed files. Every job you create engenders some more, such as RT_CONFIGnn, RT_STATUSnn and RT_LOGnn. T30FILE sets an upper limit on the number that can be open simultaneously: the default is 200, raise it to 500 or even 1000. Each entry in the table on Windows is only 112 bytes in size, so even 1000 will take only a relatively small amount of memory.
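To put numbers on "relatively small": a quick sketch of the shared-memory cost at a few table sizes, using the 112-bytes-per-entry figure quoted above for Windows:

```python
# Shared-memory cost of the T30FILE table at various sizes,
# using the 112-bytes-per-entry figure quoted for Windows.
ENTRY_BYTES = 112

for t30file in (200, 500, 1000):
    total = t30file * ENTRY_BYTES
    print(f"T30FILE={t30file:5d} -> {total:7d} bytes ({total / 1024:.1f} KB)")
```

Even at 1000 entries that is about 110 KB, so memory is rarely a reason to stay at the default.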
Entries are removed from the T30FILE memory table when a dynamic hashed file is closed and no-one else has it open. The T30FILE table stores the sizing information needed (immediately) by all processes to determine whether to trigger a split or a merge when updating the hashed file, and the current modulus value to be used in calculating the group address using the hashing algorithm. Keeping this information in shared memory ensures its immediacy.
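The split/merge decision described above can be sketched in miniature. This is a minimal linear-hashing-style illustration of how a current modulus maps a key to a group and how a rising load factor triggers a split; the class name and threshold are assumptions for the sketch, not DataStage's actual on-disk algorithm (a merge would be the mirror of `_split` when the load factor falls):

```python
# Illustrative sketch of a dynamic hashed file's group addressing.
# NOT DataStage's real algorithm -- just the linear-hashing idea:
# the current modulus determines the group, and the load factor
# decides when to split (a merge is the reverse when load drops).

class DynamicHash:
    def __init__(self, minimum_modulus=2, split_load=0.8):
        self.modulus = minimum_modulus          # current number of groups
        self.split_load = split_load            # load factor that triggers a split
        self.groups = [[] for _ in range(minimum_modulus)]
        self.records = 0

    def group_for(self, key):
        # Group address from the hash value and the CURRENT modulus --
        # this is the value every process needs immediately, which is
        # why it lives in shared memory.
        return hash(key) % self.modulus

    def insert(self, key, value):
        self.groups[self.group_for(key)].append((key, value))
        self.records += 1
        if self.records / self.modulus > self.split_load:
            self._split()

    def _split(self):
        # Add one group and move any record whose address changes
        # under the new modulus.
        self.modulus += 1
        self.groups.append([])
        for g in range(self.modulus - 1):
            keep = []
            for key, value in self.groups[g]:
                if hash(key) % self.modulus == g:
                    keep.append((key, value))
                else:
                    self.groups[hash(key) % self.modulus].append((key, value))
            self.groups[g] = keep

    def get(self, key):
        for k, v in self.groups[self.group_for(key)]:
            if k == key:
                return v
        return None
```

The point of the sketch: every reader and writer must agree on the current modulus, or lookups land in the wrong group, which is why that bookkeeping sits in the shared T30FILE table rather than per-process memory.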

Posted: Tue Jul 24, 2007 3:19 pm
by datastage
ray.wurlod wrote:If you don't believe that you are using that many hashed files, remember that most of the Repository tables are also hashed files. Every job you create engenders some more, such as RT_CONFIGnn, RT_STATUSnn and RT_LOGnn. T30FILE sets an upper limit on the number that can be open simultaneously: the default is 200, raise it to 500 or even 1000.
In the past I only considered which jobs were running at the point in time the error occurred, but this makes me think: open DS Director sessions could also have an influence, right? Wouldn't they basically be opening the RT_LOGnn files to display in the client window?

Posted: Tue Jul 24, 2007 3:33 pm
by ArndW
Yes, the Director has a number of files open, but it won't have several log files open concurrently. Also remember that these file units are shared across all users, so if several Director sessions and running jobs have the same hashed files open, they share units.

Posted: Tue Jul 24, 2007 11:40 pm
by ray.wurlod
Director also has RT_CONFIGnnn and RT_STATUSnnn open for the currently selected job, and DS_JOBS and its CATEGORY index.