Fork() failed. Not enough space.

Post questions here relating to DataStage Enterprise/PX Edition, for such areas as parallel job design, parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

dpsahoo
Participant
Posts: 11
Joined: Fri Jan 07, 2005 12:11 pm
Location: Sydney, NSW

Fork() failed. Not enough space.

Post by dpsahoo »

I ran a parallel job with 2 nodes. The job failed with "Fork() failed. Not enough space". I checked the scratch disk for space issues, but there was enough space.

However, when I ran the same job with 1 node, it completed successfully, although it took longer.

I am not sure how the change to a single-node config file resulted in success.

I ran the job again with 2 nodes and, lo and behold, it failed again with fork() failed. Changing to the single-node file again resulted in success.

Can anybody shed light on why a config file with fewer nodes works better than one with more nodes in certain circumstances?
This post is by Durga Prasad
balajisr
Charter Member
Posts: 785
Joined: Thu Jul 28, 2005 8:58 am

Post by balajisr »

Check your OS process limit configuration (the maximum number of processes allowed per user).
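
For example, here is a minimal illustrative sketch (not DataStage code) of how to inspect that limit programmatically. It assumes a Linux/BSD-style system that defines RLIMIT_NPROC; on other UNIX flavors the limit is a kernel tunable (e.g. maxuprc) instead.

/* Minimal sketch: print the per-user process limit that fork() is
 * subject to. Assumes RLIMIT_NPROC is defined on this platform. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* When the user's process count hits this limit,
     * fork() fails with EAGAIN. */
    if (getrlimit(RLIMIT_NPROC, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    printf("max user processes: soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);
    return 0;
}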
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

With Fork(), 'space' means 'memory'. You ran out.
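
To illustrate where that wording comes from (a minimal sketch, not DataStage code; the exact error text varies by platform): fork() sets errno, and the message in the log is simply the system's error string for that value. On some UNIX flavors (Solaris, for example) strerror(ENOMEM) reads literally "Not enough space", while EAGAIN indicates the per-user process limit was hit.

/* Minimal sketch: report the system error text when fork() fails. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == -1) {
        /* ENOMEM: no memory/swap left for another process;
         * EAGAIN: per-user process limit reached. */
        fprintf(stderr, "fork() failed: %s (errno=%d)\n",
                strerror(errno), errno);
        return 1;
    }

    if (pid == 0)
        _exit(0);      /* child exits immediately */

    wait(NULL);        /* parent reaps the child */
    return 0;
}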
-craig

"You can never have too many knives" -- Logan Nine Fingers
John Smith
Charter Member
Posts: 193
Joined: Tue Sep 05, 2006 8:01 pm
Location: Australia

Post by John Smith »

Single node - your job spawns fewer processes, hence uses less memory. Configuring more nodes is not automatically better unless you have the resources in your box.
DS consultant.