Thanks for the response. I have just completed the run for loading data from a sequential file to an Oracle table as a plain insert; it took 1 hour 13 minutes to complete.
This job was a plain load: read from a Sequential stage and write to an Oracle stage.
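A plain load of this shape (read a delimited sequential file, insert every row into one table) can be sketched minimally in Python; this is only an illustrative stand-in, using sqlite3 in place of the Oracle target, and the table and column names are hypothetical, not from the original job:

```python
import csv
import io
import sqlite3

# Stand-in for the Oracle target; table/columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER, name TEXT)")

# Stand-in for the sequential (flat) file.
seq_file = io.StringIO("1,alpha\n2,beta\n")

# Plain insert: read all rows, batch-insert them; no updates, no rejects.
rows = list(csv.reader(seq_file))
conn.executemany("INSERT INTO target (id, name) VALUES (?, ?)", rows)
conn.commit()
```

With a real Oracle target the same pattern (a batched `executemany` rather than one round trip per row) is usually what separates an acceptable load time from a multi-hour one.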
Search found 60 matches
- Mon Sep 07, 2009 4:01 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Perf Issue when reading .csv file using sequential Stage
- Replies: 16
- Views: 6961
- Mon Sep 07, 2009 2:40 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Perf Issue when reading .csv file using sequential Stage
- Replies: 16
- Views: 6961
Perf Issue when reading .csv file using sequential Stage
Hi All, I have got a performance issue while reading a .csv file which has 10254768 rows of data in it. The job flow is as below: Seq file > Transformer > Sort > Transformer > Oracle stage (2 Oracle stages: one for capturing reject data and the other for good data). Our process runs on two n...
- Fri May 01, 2009 11:14 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Job aborts with Error in Write Block
- Replies: 7
- Views: 7304
Thanks all for your response. I was finally able to fix the issue. This job was changed a couple of weeks back and the code had been stable, till we encountered this issue. The last modification to our code was to read a Nullable column and set it to a default when we have no value: If IsNull(Columnn...
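The null-to-default derivation is truncated in the excerpt above, so the exact expression is unknown. As a hedged sketch only, the general shape of that logic (with a hypothetical column name) looks like this in Python:

```python
def default_if_null(value, default=""):
    """Mirror of an IsNull()-to-default derivation: return the default
    when the incoming value is NULL (None here), otherwise pass the
    value through unchanged."""
    return default if value is None else value

# Hypothetical usage on a nullable column:
row = {"SOME_COL": None}
row["SOME_COL"] = default_if_null(row["SOME_COL"], default="N/A")
```

The pitfall the poster hit is common with this pattern: substituting a default changes the column's effective nullability, which can break downstream metadata if the stage still declares the column as nullable.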
- Thu Apr 30, 2009 8:17 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Job aborts with Error in Write Block
- Replies: 7
- Views: 7304
- Wed Apr 29, 2009 1:26 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Job aborts with Error in Write Block
- Replies: 7
- Views: 7304
Job aborts with Error in Write Block
Hi, I have a job which was working fine till yesterday. The job now fails with the below fatal errors: buffer(2),1: Error in writeBlock - could not write 130080 buffer(3),1: Fatal Error: APT_BufferOperator::writeAllData() write failed. This is probably due to a downstream operator failure. I have trie...
- Mon Oct 13, 2008 11:41 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Oracle write hangs infinitely
- Replies: 20
- Views: 9460
- Mon Oct 13, 2008 11:35 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Migration Issue
- Replies: 8
- Views: 2141
Was the testing in dev and pre-prod done with near real-time data? That is the main reason my jobs fail in prod even though they are successful in the testing phase. Sadly, we cannot get near-PROD data, so we are prepared for these fixes; anything can differ: the end delimiter, the length of a column, dat...
- Mon Sep 15, 2008 1:56 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: date conversion
- Replies: 10
- Views: 2696
- Mon Sep 15, 2008 12:27 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Oracle Deadlock Situation.
- Replies: 7
- Views: 2271
Sorting the data and then introducing the hash partition would be a good option; or remove duplicates based on your key values and then use the result for the Upsert. Either of them should help you fix the issue. One reason I got a deadlock was that the table was accessed continuously (through DataStage onl...
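The dedupe-before-upsert advice above can be sketched in plain Python; this is a generic illustration (field names are hypothetical), not the poster's actual job logic. The point is that a single upsert batch should never touch the same key twice, since two statements contending for the same row is exactly what provokes the deadlock:

```python
def dedupe_on_keys(rows, key_fields):
    """Keep only the last occurrence of each key combination, so one
    upsert batch never updates the same target row twice."""
    seen = {}
    for row in rows:
        seen[tuple(row[k] for k in key_fields)] = row
    return list(seen.values())

rows = [
    {"id": 1, "amt": 10},
    {"id": 2, "amt": 20},
    {"id": 1, "amt": 30},  # duplicate key: this later version wins
]
deduped = dedupe_on_keys(rows, ["id"])
```

Hash-partitioning on the same key values has the same effect across parallel nodes: all rows for one key land on one node, so two nodes never upsert the same row concurrently.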
- Mon Sep 15, 2008 12:19 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Continuous Surrogate key generation for multiple runs
- Replies: 7
- Views: 4356
Yes, using the database sequences definitely slowed our jobs significantly. The more data we tried processing, the worse the performance got. We have to go for a re-design. This is what we are doing (trying to): we have created a UNIX script which returns the maximum of the...
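The re-design described above (fetch the table's current maximum key once, then generate keys in memory instead of calling a database sequence per row) can be sketched as follows; the function name is hypothetical and this is only a sketch of the idea, assuming a single job owns key generation for the run:

```python
def next_keys(current_max, count):
    """Generate a contiguous block of surrogate keys starting after the
    current maximum, avoiding a per-row database sequence call."""
    return list(range(current_max + 1, current_max + 1 + count))

# If the script reports a current max of 1000 and the run has 3 rows:
keys = next_keys(1000, 3)  # [1001, 1002, 1003]
```

The trade-off is that this only stays continuous across runs if exactly one process generates keys at a time; concurrent loaders would need to reserve non-overlapping blocks.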
- Wed Sep 10, 2008 12:56 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Unable to login
- Replies: 9
- Views: 4674
- Mon Sep 08, 2008 3:48 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Unable to login
- Replies: 9
- Views: 4674
Unable to login
Hi, we have got a strange issue on one of our DataStage environments. We have a team at offshore which cannot access DataStage from their boxes. They can access the UNIX box on which DataStage is installed, the database, everything on that box except for DataStage. I am working in a differe...
- Mon Apr 28, 2008 8:33 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Job not working in Production
- Replies: 6
- Views: 1366
- Mon Apr 28, 2008 8:16 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Job not working in Production
- Replies: 6
- Views: 1366
Hi Guys, thanks; seeing your answers keeps my hopes alive. I am using a transformer with lots of stage variables and nearly 15 constraints. I tried running the job with a Peek stage after removing the DB stages; it's still the same. All the parameters are proper. Any more suggestions? Ready t...
- Mon Apr 28, 2008 7:47 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Job not working in Production
- Replies: 6
- Views: 1366
Nothing; it is a clean run. The job design is like this: I read data from a sequential file and load it into ten tables based on the first two attributes; if it is 01 then Table 1, if 02 then Table 2, etc. The data is getting re-mapped and flowing to the wrong tables in Production. When we ran the same ...