parallel job reports failure (code 139)

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

jerome_rajan
Premium Member
Posts: 376
Joined: Sat Jan 07, 2012 12:25 pm
Location: Piscataway

parallel job reports failure (code 139)

Post by jerome_rajan »

Hi,
I've searched extensively but none of the discussions seem to match my case. I keep getting "parallel job reports failure (code 139)" erratically. When I view the performance stats within the designer, the job shows all green though it actually aborted with the error. Here's a snapshot of the job
[screenshot of the job design]

Any help appreciated
Jerome
Data Integration Consultant at AWS
Connect With Me On LinkedIn

Life is really simple, but we insist on making it complicated.
jerome_rajan
Premium Member
Posts: 376
Joined: Sat Jan 07, 2012 12:25 pm
Location: Piscataway

Post by jerome_rajan »

Anyone? :roll:
Jerome
Data Integration Consultant at AWS
Connect With Me On LinkedIn

Life is really simple, but we insist on making it complicated.
qt_ky
Premium Member
Posts: 2895
Joined: Wed Aug 03, 2011 6:16 am
Location: USA

Post by qt_ky »

Try searching on "parallel job reports failure (code 139)" and go through the results.
Choose a job you love, and you will never have to work a day in your life. - Confucius
SURA
Premium Member
Posts: 1229
Joined: Sat Jul 14, 2007 5:16 am
Location: Sydney

Post by SURA »

jerome_rajan wrote:Anyone? :roll:
When I Googled it, I found the same error reported for several different scenarios.

You need to find a way to trace it.

For example, if you are writing the data into a table, take that DB stage out and use a Peek stage or a file instead.

Set the disable-combination environment variable ($APT_DISABLE_COMBINATION) and check the log; one way to do that for a single run is sketched below.

Try one change after the other. Don't make more than one change at a time.
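A minimal sketch of such a one-off run, assuming $APT_DISABLE_COMBINATION has been added to the job as an environment-variable parameter (the project and job names are placeholders):

Code:
# Run once with operator combination disabled; the saved job
# design is left untouched. MyProject/MyParallelJob are placeholders.
dsjob -run -param '$APT_DISABLE_COMBINATION=True' -jobstatus MyProject MyParallelJob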
Thanks
Ram
----------------------------------
Revealing your ignorance is fine, because you get a chance to learn.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

jerome_rajan wrote:Anyone? :roll:
Learn patience! This is an all-volunteer site where people post as and when they can. Yesterday, for example, I had a breakfast meeting with management, a busy day at work, and a training session at IBM after work. Not much time for DSXchange.

The issue many times is that the job monitor is interfering with job initialization. You can test whether this is your issue by setting APT_NO_JOBMON=1 to disable job monitoring. Alternatively, you can adjust only the time-based monitoring by setting APT_MONITOR_TIME=5, or set APT_DISABLE_FASTALLOC=1, which has also been reported to resolve this error.
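As a minimal sketch, those variables could be exported for every job from the engine's dsenv file (the path comment assumes the usual default install; the same variables can also be set per project in the Administrator client or added to individual jobs as parameters):

Code:
# $DSHOME/dsenv - sourced by the DataStage engine at startup
APT_NO_JOBMON=1; export APT_NO_JOBMON                   # disable job monitoring entirely
#APT_MONITOR_TIME=5; export APT_MONITOR_TIME            # or adjust only the monitor interval (seconds)
#APT_DISABLE_FASTALLOC=1; export APT_DISABLE_FASTALLOC  # or disable the fast allocator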

There are other possibilities - jobs containing Netezza stages, for example - but those are less likely in your case.

Search is your friend.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
jerome_rajan
Premium Member
Posts: 376
Joined: Sat Jan 07, 2012 12:25 pm
Location: Piscataway

Post by jerome_rajan »

Hi Ray,
Not being impatient; I waited for a day and found that the post was getting lost in the archives. My apologies if I came across as 'pushy'. :)

I started breaking the job into parts to determine which stage is causing the issue. The problem is that the error is so erratic, occurring only once every 3-4 days on average, that the iterations may take a while. I will definitely find the solution and add it to the already existing ocean of solutions for this error!

Thank you
Jerome
Data Integration Consultant at AWS
Connect With Me On LinkedIn

Life is really simple, but we insist on making it complicated.
oracledba
Premium Member
Posts: 49
Joined: Mon Aug 06, 2012 9:21 am

Post by oracledba »

Two possibilities:

It could be related to a problem with your ODBC.ini file setup. If you are using an ODBC connection, test it to ensure it is good; a quick check is sketched below.

It could also be related to the length of the file name. Reduce the file name length and try again.
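As a quick sanity check of the DSN outside of DataStage, a driver-manager test tool can be used. This sketch assumes unixODBC's isql is installed; the DSN name and credentials are placeholders:

Code:
# Attempt a connection using the DSN defined in ODBC.ini;
# "Connected!" confirms the entry and credentials are good.
isql -v MY_DSN db_user db_password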
priyadarshikunal
Premium Member
Posts: 1735
Joined: Thu Mar 01, 2007 5:44 am
Location: Troy, MI

Post by priyadarshikunal »

Do you get anything in the phantom logs? When you reset the job, anything captured should show in the log as "From previous run". Where to look is sketched below.
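A minimal sketch of where to look, assuming a default Unix install path (the project location is a placeholder); phantom output is written to the project's &PH& directory:

Code:
# List the newest phantom files; after a job reset, Director
# surfaces their content in the log as "From previous run".
cd '/opt/IBM/InformationServer/Server/Projects/MyProject/&PH&'
ls -lt | head -20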
Priyadarshi Kunal

Genius may have its limitations, but stupidity is not thus handicapped. :wink:
atul9806
Participant
Posts: 96
Joined: Tue Mar 06, 2012 6:12 am
Location: Pune
Contact:

Post by atul9806 »

This error basically comes up when DataStage is unable to write the job log to its file. Try taking a copy of the current job and running that instead. Hope this works!
~Atul Singh
DataGenX (http://www.datagenx.net) | LinkedIn (https://www.linkedin.com/in/atulsinghds)