All,
I have a custom operator in my job, and I'm getting the error "Contents of phantom output file=> RT_SC524/OshExecuter.sh[16]: 65112 Segmentation fault". Soon after this it core dumps. Does anyone have any idea what it is? Could it be related to the custom operator I'm using?
Any ideas are appreciated.
-vj
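A segmentation fault inside OshExecuter.sh means native code crashed, and a custom operator is a prime suspect. If you want to analyze the resulting core file, first make sure core dumps are actually enabled. A minimal sketch (these commands affect only the current shell session):

```shell
# Show the current core-file size limit; 0 means core dumps are disabled
ulimit -c
# Allow unlimited core files for this session, then confirm
ulimit -c unlimited
ulimit -c
```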
Segmentation Fault with a core dump
Moderators: chulett, rschirm, roy
- Participant
- Posts: 75
- Joined: Tue May 13, 2003 4:14 am
- Location: California
Re: Segmentation Fault with a core dump
This sounds like it is connected with the custom operator. Perhaps there is a row of data that is not formed correctly and the operator does not know how to handle it?
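One quick way to test that theory, assuming the operator reads a delimited flat file, is to scan the input for rows whose field count differs from the rest. This is only a sketch; `input.txt` and the `|` delimiter are assumptions, so adjust them for your data:

```shell
# Sketch: flag rows whose field count differs from the first row's.
# 'input.txt' and the '|' delimiter are placeholders for your actual file.
awk -F'|' 'NR == 1 { n = NF; next }
           NF != n { print "line " NR ": " NF " fields, expected " n }' input.txt
```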
Contents of phantom output file
I have the same problem: a job that ran successfully yesterday is failing today.
The job has:
Sequential file - linkto - Transformer stage - linkto - TDMLoadPXStage
and fails with the following messages:
Contents of phantom output file =>
RT_SC33/OshExecuter.sh[16]: 29445 Memory fault
Contents of phantom output file =>
DataStage Job 33 Phantom 29446
Parallel job reports failure (code 139)
Any idea ?
tks
Gilles
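For what it's worth, "failure (code 139)" is the usual shell convention for death by signal: 139 = 128 + 11, and signal 11 is SIGSEGV, the same segmentation/memory fault reported in the phantom output. You can confirm the mapping from any shell:

```shell
# Exit codes above 128 mean "killed by signal (code - 128)"
sig=$(( 139 - 128 ))   # 11
kill -l $sig           # prints the name of signal 11 (SEGV)
```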
- Participant
- Posts: 25
- Joined: Thu Oct 02, 2003 8:57 am
Hi
How long did this job run for?
Did it fall over immediately, or did it process a number of rows (and if so, how many)? I had a similar problem (though I forget the exact message, it did involve a segmentation violation) when using the TDMLoad stage. In our case (PX 7.01, HP-UX) it was caused by an apparent memory leak. Try running "top" on your server and watch the memory used by your job. We could run until the memory used by the job was around 800 MB, then bang!
Our solution was to use a server shared container with the TDMLoad stage. The performance is pretty good and the job doesn't fail.
Hope this helps
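To watch a job's memory as suggested, `top` works interactively; for logging over time, `ps` can report the resident set size (RSS) of one process. A minimal sketch: the PID below is this shell's own just to make it runnable; substitute the PID of the DataStage osh process you want to watch:

```shell
# Print the resident set size (in KB) of a process once.
# $$ (this shell) is a stand-in; use the osh process's PID in practice.
ps -o rss= -p $$
# To log it every 10 seconds until the process exits:
# while kill -0 <pid> 2>/dev/null; do ps -o rss= -p <pid>; sleep 10; done
```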
How long did this job run for?
Did it fall over immediately, or did it process a number of rows (and if so, how many). I had a similar problem (though i forget the exact message, it did involve segmentation violation) when using the TDMLoad stage. In our case (PX 7.01, HP-UX) it was caused by an apparent memory leak. Try running "top" on your server and watch the memory used by your job. We could run until the memory used by the job was around 800Mb, then bang!
Our solution was to use a server shared container with the TDMLoad stage. The performance is pretty good and the job doesnt fail
Hope this helps