Limiting Number of Simultaneously Executing Jobs (Cleanly).

gsherry1
Charter Member
Posts: 173
Joined: Fri Jun 17, 2005 8:31 am
Location: Canada

Post by gsherry1 »

Hello Forum,

When receiving the following error, the solution is typically to increase the uvconfig entries that govern UV resource allocation (T30FILES, MFILES):

Unable to allocate Type 30 descriptor, table is full.

My work environment has been getting this error, along with other RT CONFIG errors, which we attribute to running too many DS jobs at once. In addition to tuning those uvconfig parameters, we would like to limit the number of active jobs running in DS and have the excess blocked until running jobs complete. We consider blocking the excess processes a cleaner solution than aborting with errors.
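For reference, a minimal sketch of how the relevant entries appear in $DSHOME/uvconfig (the dynamic-file parameter is usually spelled T30FILE in the file itself; the values below are illustrative, not sizing advice):

    # excerpt from $DSHOME/uvconfig -- illustrative values only
    MFILES 150        # size of the rotating pool of open file descriptors
    T30FILE 512       # max concurrently open dynamic (Type 30) files

After editing, the engine configuration has to be regenerated (uvregen) and DataStage restarted before the new values take effect.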

Are there any options within DataStage that would allow for such a setup?

I am also interested in hearing whether anybody has solved this problem outside of DataStage, using some tool such as Workload Manager or a Windows setting.

Thanks in advance.

Greg
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL

Post by kcbland »

We've written our own job control that takes a table of jobs and their dependencies and manages the execution. Part of that includes throttling, but when you're talking about PX you've lost control over resources. Node pools are about the only way to deal with resource allocation, but even that's tricky. A job can have "dead spots" where processing single-threads or binds, then other spots where it's a total system killer.
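For context, node pools are declared in the parallel configuration file that $APT_CONFIG_FILE points to; a minimal sketch, with hypothetical hostnames and paths:

    {
        node "node1"
        {
            fastname "etl_host"
            pools "" "heavy"
            resource disk "/ds/data" {pools ""}
            resource scratchdisk "/ds/scratch" {pools ""}
        }
    }

A job (or an individual stage) can then be constrained to the "heavy" pool, which is about as close as PX gets to partitioning resources between workloads.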

The best you can accomplish is to manage the number of simultaneously executing jobs and that usually means writing your own job control.
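As an illustration only (not the job control described above), here is a minimal throttling sketch built on the dsjob command line; the project and job names are hypothetical, and the flags should be verified against your DataStage release:

    # throttle.py -- cap the number of simultaneously executing DS jobs.
    # Assumes the dsjob CLI is on PATH.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    MAX_CONCURRENT = 4            # site-specific cap (assumption)
    PROJECT = "MyProject"         # hypothetical project name

    def run_job(job_name):
        # -jobstatus makes dsjob wait for the job and reflect its
        # finishing status in the exit code.
        return subprocess.run(
            ["dsjob", "-run", "-jobstatus", PROJECT, job_name]).returncode

    jobs = ["LoadCustomers", "LoadOrders", "LoadInventory"]  # hypothetical
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as pool:
        for name, rc in zip(jobs, pool.map(run_job, jobs)):
            print("%s finished with exit code %d" % (name, rc))

The bounded worker pool is the point: jobs beyond the cap wait in the queue instead of hitting the engine and failing on resource limits.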
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
WDWolf
Charter Member
Posts: 14
Joined: Mon Dec 05, 2005 12:06 pm

Post by WDWolf »

The problem with the workload manager is the identification of PIDs. For that to be successful we would need some level of separation for each new invocation, and that does not exist. I have used the job control path many times in the past.

An alternative we have gone to at our current site was to let the external scheduler (Zeke, Control-M, AutoSys, etc.) be the master scheduler and have it run the various sequencers. For this to be effective you have to limit the number of jobs in a sequencer to one (with limited exceptions). Our current implementation really only uses the sequencers to facilitate the passing of parms and the notifications of failed/warning jobs; all of the actual scheduling of jobs is now done via Zeke. This allows us to externally manipulate the workload on the server with simple schedule changes... it does create a lot of sequencers! A sketch of such a scheduler-driven wrapper follows below.

The DS instance in this case is a shared environment used by many teams; the only common point is that they all use the same external scheduler. A side benefit for us is that the same scheduler also controls all mainframe and other server activity, so tying it all together is really nice.
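As a hypothetical sketch of the kind of one-job wrapper such a scheduler could invoke (dsjob's -run, -jobstatus, and -param options are standard, but verify the exact syntax against your release; all names here are made up):

    #!/usr/bin/env python
    # run_one.py -- one DataStage job per scheduler invocation (Zeke,
    # Control-M, AutoSys); parms arrive as trailing name=value arguments.
    import subprocess
    import sys

    project, job = sys.argv[1], sys.argv[2]
    # Expand each trailing name=value argument into a dsjob -param option.
    params = [opt for p in sys.argv[3:] for opt in ("-param", p)]

    rc = subprocess.run(
        ["dsjob", "-run", "-jobstatus", *params, project, job]).returncode
    # A nonzero exit propagates failure/warning status back to the
    # scheduler, which can then alert or hold downstream work.
    sys.exit(rc)

Invocation from the scheduler might look like: run_one.py MyProject LoadOrders RunDate=2006-01-15.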
William Wolf
Wolf Consulting
612-719-9066