G'day all,
I need to run a copy (a new instance) of a batch job from an executing instance, and then make sure that a failure of the first job will not kill the second.
E.g.:
a.1 spawns a.2
Normally if a.1 fails, a.2 will also fail. This is not a desirable occurrence.
If I use a routine to do a DSRunJob() and then detach from the new job, will this prevent it?
I can call a Unix script to do a dsjob as a solution, but I was hoping to avoid this.
Any suggestions?
Start a job and dis-associate from it
Andrew
Think outside the Datastage you work in.
There is no True Way, but there are true ways.
Andrew,
normally jobs, even instances of the same job, are run completely independently of each other. I don't know how you could easily program concurrent runs of job.a and job.b to trigger an abort of job.a when job.b aborts. How did you effect this? Could one job aborting set the database status so that the other can't access the DB and thus aborts?
ArndW wrote:Andrew,
I don't know how you could easily program concurrent runs of job.a and job.b to trigger an abort of job.a when job.b aborts. How did you effect this? Could one job aborting set the database status so that the other can't access the DB and thus aborts?
Arnd,
If the jobs are initiated from a sequence they are tied together, so job.a crashing (or being stopped) cascades the stop down the tree.
I think if I kick off a job using DSRunJob() in a sequence I should be OK, especially if I detach and don't DSWaitForJob().
I'll let people know and post my routine code for my "uber scheduler" (well, it could do with a lot of work) unless people come up with an alternate solution.
A bit more on the story:
The scheduler runs a suite of jobs: some run every hour, some each day at a particular hour, some each week and some each month. I have created a database table to control this, check each job step against the table and the current processing period, and run it if required. If the system is 10 hours behind it will catch up pretty quickly, as the jobs are set up to be able to run concurrently (controlled by instances) and job.b will spawn job.c etc.
The jobs will wait until their correct run time if they are not behind, and run nearly immediately if they are started late.
Andrew
Think outside the Datastage you work in.
There is no True Way, but there are true ways.
Andrew,
that makes more sense: in a sequence you are running the jobs sequentially, not concurrently. If you program this in job control or elsewhere, issue a DSRunJob() and don't call DSWaitForJob(), and you will have achieved your intended result. The DSDetachJob() call isn't necessary, as long as you don't call DSStopJob(). You can fire off multiple concurrent instances with no interaction between the jobs (unless something inside the job doesn't allow concurrent access, such as writing to the same sequential output file).
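That fire-and-forget pattern can be sketched in DataStage BASIC job control roughly as below. This is a minimal sketch, not working code from the thread: the job name "SchedJob", the instance id ".2" and the parameter "RunDate" are hypothetical, while DSAttachJob, DSSetParam, DSRunJob and DSDetachJob are the standard DataStage BASIC API calls being discussed.

```
* Attach instance 2 of a hypothetical multi-instance job
hJob = DSAttachJob("SchedJob.2", DSJ.ERRFATAL)
ErrCode = DSSetParam(hJob, "RunDate", "2006-01-01")  ;* hypothetical parameter
ErrCode = DSRunJob(hJob, DSJ.RUNNORMAL)
* Deliberately no DSWaitForJob(): the parent's later failure then
* cannot propagate to the child. DSDetachJob() merely releases the
* handle; the key point is never calling DSWaitForJob()/DSStopJob().
ErrCode = DSDetachJob(hJob)
```

The child job keeps running under its own process even if the parent subsequently aborts, which is the decoupling Andrew was after.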
aartlett wrote:If the jobs are initiated from a sequence they are tied together so job.a crashing (or being stopped) cascades the stop down the tree.
I'm in the same boat as Arnd (no pun intended) - I don't quite get this, as it is only true if you've set the Sequence job up to make that happen: usually via the 'automatically handle' functionality with only 'Ok' triggers, or an Exception Handler, or that 'Stop Everything' stage (I forget the exact name).
It certainly doesn't have to work that way, unless I'm really missing the point here.
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers
ArndW wrote:If you program this in job control or elsewhere, issue a DSRunJob() and don't call the DSWaitForJob() then you have achieved your intended result.
Thanks Arnd and all. It's what I suspected, but it doesn't hurt to check.
Andrew
Think outside the Datastage you work in.
There is no True Way, but there are true ways.