
Program Unable to allocate Type 30 descriptor, table is full

Posted: Thu Oct 27, 2005 7:56 pm
by Christina Lim
Hello all,

Would appreciate your opinion on an error that we are facing quite frequently now.

We have a job sequence that invokes an Oracle stage to delete records, and a subsequent job that inserts the new records.

However, our job sequence keeps aborting at different jobs that delete records through the Oracle Enterprise stage. Yet the job itself is runnable, contrary to what the log states.

I did a quick search in the forum for the error "Program "DSR_EXECJOB": Line 282, Unable to allocate Type 30 descriptor, table is full."
It appears to be related to a hashed file limitation, which can be solved by increasing the T30FILE variable in the config file.

However, our job is just a simple job with no hashed file creation.
The flow is: text file to Modify stage to Oracle stage (deletion). And most of all, we are on DSEE, not DS Server. Please advise.

The job sequence log:

04:31:06: S_ATMSPF_VENdel (JOB VEN_S_ATMSPFdel) started
04:31:07: Exception raised: @S_ATMSPF_VENdel, Error calling DSAttachJob(VEN_S_ATMSPFdel)
(DSGetJobInfo) Cannot open executable job file RT_CONFIG7010 (DSOpenJob) Cannot open job VEN_S_ATMSPFdel. - not a runnable job
04:31:07: Exception handler started
04:31:07: ntfEmail (ROUTINE DSSendMail) started
04:32:08: ntfEmail finished, reply=0


The job sequence log after reset:

From previous run
DataStage Job 6999 Phantom 29026
[4247596] Done : DSD.RUN VEN_S_ITEMPFdel. 0/0 pSID=MIPRD pUID=stgmi pPwd=LL:@9:V@>9:L0F7I4JJ<BKI7F<>IM1 pEnv=dwh DSJobController="VEN_Load_All_txt"
[999614] Done : DSD.RUN VEN_S_DESCPFdel. 0/0 pSID=MIPRD pUID=stgmi pPwd=LL:@9:V@>9:L0F7I4JJ<BKI7F<>IM1 pEnv=dwh DSJobController="VEN_Load_All_txt"
[3682372] Done : DSD.RUN VEN_S_REGPPFdel. 0/0 pSID=MIPRD pUID=stgmi pPwd=LL:@9:V@>9:L0F7I4JJ<BKI7F<>IM1 pEnv=dwh DSJobController="VEN_Load_All_txt"
[4411392] Done : DSD.RUN VEN_S_RESNPFdel. 0/0 pSID=MIPRD pUID=stgmi pPwd=LL:@9:V@>9:L0F7I4JJ<BKI7F<>IM1 pEnv=dwh DSJobController="VEN_Load_All_txt"
[1355894] Done : DSD.RUN VEN_S_HPADPFdel. 0/0 pSID=MIPRD pUID=stgmi pPwd=LL:@9:V@>9:L0F7I4JJ<BKI7F<>IM1 pEnv=dwh DSJobController="VEN_Load_All_txt"
[1212490] Done : DSD.RUN VEN_S_FLUPPFdel. 0/0 pSID=MIPRD pUID=stgmi pPwd=LL:@9:V@>9:L0F7I4JJ<BKI7F<>IM1 pEnv=dwh DSJobController="VEN_Load_All_txt"
[3506230] Done : DSD.RUN VEN_S_ZAOLPFdel. 0/0 pSID=MIPRD pUID=stgmi pPwd=LL:@9:V@>9:L0F7I4JJ<BKI7F<>IM1 pEnv=dwh DSJobController="VEN_Load_All_txt"
[4243500] Done : DSD.RUN VEN_S_ZRADPFdel. 0/0 pSID=MIPRD pUID=stgmi pPwd=LL:@9:V@>9:L0F7I4JJ<BKI7F<>IM1 pEnv=dwh DSJobController="VEN_Load_All_txt"
[917516] Done : DSD.RUN VEN_S_SURHPFdel. 0/0 pSID=MIPRD pUID=stgmi pPwd=LL:@9:V@>9:L0F7I4JJ<BKI7F<>IM1 pEnv=dwh DSJobController="VEN_Load_All_txt"
Program "DSR_EXECJOB": Line 282, Unable to allocate Type 30 descriptor, table is full.
Job Aborted after Fatal Error logged.
Program "DSD.WriteLog": Line 239, Abort.
[4247604] Done : DSD.RUN VEN_S_PCDDPFdel. 0/0 pSID=MIPRD pUID=stgmi pPwd=LL:@9:V@>9:L0F7I4JJ<BKI7F<>IM1 pEnv=dwh DSJobController="VEN_Load_All_txt"
[1953864] Done : DSD.RUN VEN_S_RTRNPFdel. 0/0 pSID=MIPRD pUID=stgmi pPwd=LL:@9:V@>9:L0F7I4JJ<BKI7F<>IM1 pEnv=dwh DSJobController="VEN_Load_All_txt"
[3711002] Done : DSD.RUN VEN_S_CLNTPFdel. 0/0 pSID=MIPRD pUID=stgmi pPwd=LL:@9:V@>9:L0F7I4JJ<BKI7F<>IM1 pEnv=dwh DSJobController="VEN_Load_All_txt"
[2932778] Done : DSD.RUN VEN_S_COVRPFdel. 0/0 pSID=MIPRD pUID=stgmi pPwd=LL:@9:V@>9:L0F7I4JJ<BKI7F<>IM1 pEnv=dwh DSJobController="VEN_Load_All_txt"
[1478858] Done : DSD.RUN VEN_S_PTRNPFdel. 0/0 pSID=MIPRD pUID=stgmi pPwd=LL:@9:V@>9:L0F7I4JJ<BKI7F<>IM1 pEnv=dwh DSJobController="VEN_Load_All_txt"
Attempting to Cleanup after ABORT raised in stage VEN_Load_All_txt..JobControl

DataStage Phantom Aborting with @ABORT.CODE = 1


Thank you so much for your time.

Posted: Thu Oct 27, 2005 7:59 pm
by kcbland
All jobs use dynamic hashed files. The log, status, config, and temp hashed files that go with every job design are dynamic. During a job's execution, it logs messages to the log file and updates the config and status files as it runs.

Is this error happening when you have a lot of jobs executing simultaneously? If so, then you already know the answer: increase your T30FILE setting and do the uvregen.
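The usual procedure looks roughly like the sketch below. This is an outline only, not official steps: the exact DSHOME path and the new T30FILE value are assumptions, and the engine must be quiesced before uvregen is run, so check your site's documentation first.

```shell
#!/bin/sh
# Sketch: raise T30FILE in uvconfig and regenerate the engine's shared config.
# Assumes $DSHOME points at the DataStage engine directory; adjust as needed.
cd "$DSHOME" || exit 1

# 1. Stop the DS Engine (no jobs or clients should be connected).
bin/uv -admin -stop

# 2. Back up uvconfig, then raise T30FILE (here: 200 -> 1000, an example value).
cp uvconfig uvconfig.bak
# Edit the "T30FILE 200" line in uvconfig, e.g. with sed:
sed 's/^T30FILE[ 	]*[0-9]*/T30FILE 1000/' uvconfig.bak > uvconfig

# 3. Regenerate the engine configuration from the edited uvconfig.
bin/uvregen

# 4. Restart the DS Engine.
bin/uv -admin -start
```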

Posted: Thu Oct 27, 2005 9:07 pm
by Christina Lim
Hello Kenneth,

Thank you for your response.

While I search the forum for the detailed steps to increase T30FILE, can you please explain your remark that all jobs use dynamic hashed files?
Does that mean that every job's run-time information is stored in dynamic hashed files, regardless of whether it is an EE or a Server job?

Thank you.

Regards,
Swee Ting

Posted: Thu Oct 27, 2005 9:13 pm
by kcbland
That would be correct. The DS Engine uses dynamic hashed files as its storage mechanism for job designs, metadata, routines, and all run-time support files.

Posted: Thu Oct 27, 2005 9:13 pm
by ray.wurlod
All repository design time objects are stored in dynamic hashed files with names beginning DS_, for example DS_JOBS, DS_ROUTINES and so on.

All repository run time objects are stored in dynamic hashed files, and there is one set of these per job. Most have names beginning RT_, for example RT_CONFIGnnn, RT_STATUSnnn, RT_LOGnnn.

Thus, for every running job there are three dynamic hashed files (the three mentioned above) open in addition to any that appear in the job design itself. For every connected client there will be some of the DS_ dynamic hashed files open.

Some of the utility routines use a dynamic hashed file.

On larger sites these numbers can add up fairly fast. The default for T30FILE (200) is too small for medium to large sites; increase it to 1000.
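To see how quickly the default of 200 is exhausted, here is a rough back-of-envelope estimate. The three-files-per-running-job figure follows the counts given above; the per-client count and the workload numbers are hypothetical examples, not measured values.

```python
# Rough estimate of concurrently open dynamic hashed files (Type 30
# descriptors) on a DataStage engine. Per-job count follows the thread
# above; per-client count and workload figures are hypothetical.

PER_RUNNING_JOB = 3   # RT_CONFIGnnn, RT_STATUSnnn, RT_LOGnnn per running job
PER_CLIENT = 5        # assumed handful of DS_ repository files per client


def estimated_open_t30_files(running_jobs: int,
                             connected_clients: int,
                             design_hashed_files: int = 0) -> int:
    """Estimate how many dynamic hashed files are open at once."""
    return (running_jobs * PER_RUNNING_JOB
            + connected_clients * PER_CLIENT
            + design_hashed_files)


if __name__ == "__main__":
    # 50 simultaneous jobs plus 10 client connections already reach the
    # default T30FILE limit of 200, before counting any hashed files
    # that appear in the job designs themselves.
    print(estimated_open_t30_files(50, 10))  # -> 200
```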

You can monitor how many dynamic hashed files are open system-wide by periodically running:

Code: Select all

$DSHOME/bin/analyze.shm -d