Failed to distribute the shared library

Post questions here relating to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

my_stm
Premium Member
Posts: 58
Joined: Mon Mar 19, 2007 9:49 pm
Location: MY

Failed to distribute the shared library

Post by my_stm »

I developed a simple job a few days ago. It has been running successfully, without any aborts or warnings, for the past few days, but when I ran it today it aborted, which came as a shock. :?:

My job design looks like this:

DB2--> Transformer -> Lookup File set

Below is the abort error that I get:

rcp: error in writing to /home/dsadm/Ascential/DataStage/Projects/WORKING1/RT_BP18019.O/V16S0_CopyOfEXL05PDimBasePd01_trns_DataCleaning.o.tmp : No space left on device

trns_DataCleaning: Failed to distribute the shared library "/home/dsadm/Ascential/DataStage/Projects/WORKING1/RT_BP18019.O/V16S0_CopyOfEXL05PDimBasePd01_trns_DataCleaning.o" to node "entisdev001".

main_program: Could not check all operators because of previous error(s)

main_program: Creation of a step finished with status = FAILED.

Can anyone give me some clues as to why this error happened? :oops:

Thanx
jhmckeever
Premium Member
Posts: 301
Joined: Thu Jul 14, 2005 10:27 am
Location: Melbourne, Australia

Post by jhmckeever »

It looks as though you've run out of disk space; the first message is "No space left on device". Speak to your system administrator.
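
If it helps, a quick sanity check (the exact df flags are an assumption, since I don't know your platform) is to look at free space on every node named in your configuration file, not just the machine you're logged into:

    df -k                    # free space on the local (conductor) node
    rsh entisdev001 df -k    # repeat for each remote node

Note the rcp error occurred while distributing the library to node "entisdev001", so that node's filesystems matter too.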
John McKeever
Data Migrators
MettleCI (https://www.mettleci.com) - DevOps for DataStage
my_stm
Premium Member
Posts: 58
Joined: Mon Mar 19, 2007 9:49 pm
Location: MY

Post by my_stm »

I don't think it's a space issue, as you can see from the filesystem usage below:

Filesystem     GB blocks  Free   %Used  Iused   %Iused  Mounted on
/dev/hd4       0.25       0.22   12%    2371    5%      /
/dev/hd2       5.00       2.97   41%    44385   6%      /usr
/dev/hd9var    0.50       0.42   16%    557     1%      /var
/dev/hd3       5.00       4.47   11%    748     1%      /tmp
/dev/hd1       0.50       0.23   54%    187     1%      /home
/proc          -          -      -      -       -       /proc
/dev/hd10opt   5.00       3.79   25%    5125    1%      /opt
/dev/fslv00    38.50      13.48  65%    267395  8%      /home/dsadm/Ascential
/dev/fslv01    5.00       3.72   26%    167     1%      /install_tmp
/dev/fslv02    55.00      38.50  30%    25435   1%      /entis

As can be seen above, our project and staging environments reside only on those filesystems, and the percentage of space used is nowhere near the threshold. :(

Unless I'm missing something else I should be checking.
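
One thing I could still check is which filesystems our resource and scratch disks actually map to. Something like this (a sketch, assuming APT_CONFIG_FILE points at our parallel configuration file):

    grep -i disk $APT_CONFIG_FILE    # lists both "resource disk" and "resource scratchdisk" entries

If any of those paths sit on a small filesystem, that could be the one filling up.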
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

It's free space while the job is running, not before or afterwards, that is at issue here.

One of your file systems has filled during the running of the job. Probably the scratch disk, but there's not enough information to say for sure.
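
A crude way to catch the culprit (a sketch only; adjust the interval and the df flags for your platform) is to sample free space in a loop while the job runs:

    while true; do
        date
        df -k              # df -g on AIX
        sleep 5
    done > /tmp/space.log 2>&1

Then look for the filesystem whose free space dips towards zero around the time the job fails.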
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
my_stm
Premium Member
Posts: 58
Joined: Mon Mar 19, 2007 9:49 pm
Location: MY

Post by my_stm »

It seems pretty unlikely, because I've only run this simple job:

Source file -> transformer -> Target file.

Total records processed: 1

And I still get the same error. I've made sure no other jobs were running on the system, and I also monitored the disk space while the job was running.

Still, the error is ambiguous, and I can't seem to find any postings with similar errors here.
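
One more thing I plan to verify (assuming I have rsh access to the node named in the error) is space and inode usage on entisdev001 itself, since that is where the rcp write failed:

    rsh entisdev001 df -g    # watch the %Iused column too; "No space left on device" can also mean inode exhaustion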

Might need to consult IBM...