My job is working, though it has a peculiar property: it seems to produce more rows with each execution.
It never produces too many rows, but on each run it reads a few more until it eventually consumes the whole file.
I have a Hierarchical stage that reads XML. At first it produced about 150 rows; after several executions it got up to 213. I think there should be about 270.
Is this a memory issue? It feels like a configuration issue somewhere. I tried increasing the Java heap size, but it doesn't seem to have any effect.
Sorry for the bizarre question; I haven't found an answer yet. The XML file is about 250 KB.
Datastage -- Produces More Rows after each run
We have an outstanding ticket with IBM about the Hierarchical stage dropping a record here and there at random. We can run the same data three times and get {one record missed, all the data, a different record missed} as the result.
Whether you are seeing something similar or not, I can't say from your problem statement. Why do you think 270 is correct and 213 is not? You need to run some simple tests with a few records (arguably 300 or so is few), but you need to know exactly what you expect in the output and compare that to what you actually get.
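To establish the expected row count before comparing against the job output, you can count the repeating elements in the source file outside of DataStage. Here is a minimal Python sketch that tallies element occurrences per tag; the file name `input.xml` is a placeholder, and which tag corresponds to one output row depends on the repetition element your Hierarchical stage is configured to iterate over.

```python
# Count occurrences of each element tag in an XML file, so you know how
# many "record" elements the job should emit as rows. "input.xml" is a
# placeholder path -- substitute your actual source file.
import xml.etree.ElementTree as ET
from collections import Counter

def count_elements(path):
    """Return a Counter mapping local tag name -> occurrence count."""
    counts = Counter()
    for _, elem in ET.iterparse(path, events=("end",)):
        # Strip any namespace prefix so counts group by local tag name.
        tag = elem.tag.split("}")[-1]
        counts[tag] += 1
        elem.clear()  # free the element; keeps memory flat on large files
    return counts

# Example usage (path is a placeholder):
# for tag, n in count_elements("input.xml").most_common(10):
#     print(f"{tag}: {n}")
```

If the tag your stage iterates over really occurs 270 times, the drop is on the DataStage side; if it only occurs 213 times, the expectation is wrong.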