Write to ORA_BULK stage
Hi, when I write more than 1 million records to an Oracle Bulk stage it falls over with the following error codes. It doesn't get to the point where it kicks off sqlldr. Has anybody come across this?
From previous run
DataStage Job 47 Phantom 6582
jobnotify: Unknown error
DataStage Phantom Finished.
[6593] DSD.StageRun CpCustMeasuresTransforms. CpCustMeasuresTransforms.CTransformerStage3 1 0/0 - core dumped.
From previous run
DataStage Job 47 Phantom 6593
Abnormal termination of DataStage.
Fault type is 10. Layer type is BASIC run machine.
Fault occurred in BASIC program ORABULK.RUN at address a9
ArndW wrote: The dump is coming from the Transform stage, not the bulk load stage. If you put a constraint "1=2" in the transform so that no rows are passed to the ORA passive stage, does the error still occur?
No, it doesn't. It only occurs with large numbers of records; I can pass smaller result sets with no problems.
I can write 4 million records to a sequential file from the same transform with no problems. I have workarounds, but I'd rather know what is causing the problem.
So you had DS create more than 1 data file and an explicit control file and it still failed? I am surprised that it failed and that you managed to reproduce it so quickly. It looked like a memory issue caused by getting too much data in one file.
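For anyone trying the multiple-data-file route by hand, an explicit SQL*Loader control file that spreads the load across two files might look like the sketch below. The table, column, and file names here are made up for illustration (they are not from the job above); substitute your own schema.

```shell
# Sketch only: hypothetical table/column/file names, assumed for illustration.
# An explicit control file listing two INFILE clauses, so no single data file
# has to hold the whole 1M+ row set.
cat > cust_measures.ctl <<'EOF'
LOAD DATA
INFILE 'cust_measures_1.dat'
INFILE 'cust_measures_2.dat'
APPEND
INTO TABLE cust_measures
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(cust_id, measure_name, measure_value)
EOF

# Then point sqlldr at the control file (commented out: needs a real DB login).
# sqlldr userid=user/password control=cust_measures.ctl log=cust_measures.log
```

Multiple INFILE clauses are standard SQL*Loader syntax; each listed data file is loaded in turn into the same table.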
You can debug your core file to get a detailed cause, but that is more work. You need dbx or a similar debugger installed; use it with the core file (dumped into your project directory) and the dssh executable (possibly a different executable; the core file's header will tell you which program it came from).
Yet another option is to use the VLIST utility within UniVerse/DataStage to look at the instruction at hex address A9 in the ORABULK.RUN program.
I haven't used it to create two data files. I can try this.
I have plenty of workarounds. The reason that stage is used is a historical one that I don't know. It is used a lot elsewhere in the warehouse as well, even though it is a 9i database that gets written to. This problem has only started happening recently. My next approach is to upgrade the stage to the newer version for bulk loading an Oracle 9 database and re-test.
I'll let you know how I get on.
Thanks for your help.