main_program: This step has no datasets.
It has 1 operator:
op0[1p] {(sequential APT_CombinedOperatorController:
(APT_LicenseCountOp in APT_LicenseOperator)
(APT_LicenseCheckOp in APT_LicenseOperator)
) on nodes (
node1[op0,p0]
)}
It runs 1 process on 1 node.
That's the score for the licensing step only, and it doesn't help. Is there a second score event logged?
If so, can you please post that score here?
If not, you may have an issue with licensing, or access to DataStage software on other processing nodes.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Yes, there was a message handler. I disabled it and ran the same job with $APT_DUMP_SCORE=True; still the same, it is not producing any extra dump.
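For anyone following along, here is a minimal sketch of enabling the score dump at the shell level before launching the job. Assumptions: a bash/ksh shell on the engine tier; APT_DUMP_SCORE is the standard parallel-engine variable that logs the job score (operators, partitioners, node assignments) at startup. It can equally be set as a job or project parameter in the Administrator.

```shell
# Sketch: enable the parallel-engine score dump for this shell session.
# APT_DUMP_SCORE makes the engine log the job "score" (operators,
# partitioners, combined-operator groupings, node assignments).
export APT_DUMP_SCORE=True
```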
I found one more interesting thing:
On the development box the same job works fine. I then made a copy of the job with my initials and changed the output file so that my job wouldn't overwrite the development files. When I submitted that copy, it produced the same error and aborted.
So the dev version works fine, my copy aborts, and the Production copy aborts...
Does anyone know if this topic was resolved? It is not marked as such, but I am hoping that since it was from July that maybe the OP got it working.
We are getting the same error with "hash" instead of "same". It is on a new process, so we don't have it working in one environment and not another - so I don't know if it is environmental or not.
The job I was working on had a bunch of extra hash partitioners on it. I removed everything that was not part of a join prep and the problem went away.
I am not sure if getting rid of hashes was the required fix or if I simply eliminated a problem hash by coincidence. So I guess my question remains - what does this error mean and why am I getting it?
The message seems to imply that somewhere in the job there is an input link on which there is no partitioner/collector defined, or that Same has been selected as the partitioning/collecting algorithm but there is non-partitioned data arriving. For example, SeqFile (sequential) ----> AnyStage (parallel) and forcing Same as the partitioning algorithm on the downstream stage might be able to cause this symptom.
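To illustrate the point above, here is a small sketch in plain Python (not DataStage/Orchestrate APIs — the function names and data are invented for illustration). "Same" simply passes the incoming partitions through; it cannot create partitions. So if a sequential stage delivers one partition to a stage running on four nodes, Same leaves a one-partition/four-node mismatch, whereas hash redistributes the rows across all four.

```python
# Illustrative only: "Same" preserves incoming partitioning unchanged,
# while "hash" actively redistributes rows across the target nodes.

def same(partitions):
    # Same: pass the incoming partitions through untouched.
    return partitions

def hash_partition(partitions, nodes, key):
    # Hash: redistribute every row into `nodes` partitions by key value.
    out = [[] for _ in range(nodes)]
    for part in partitions:
        for row in part:
            out[hash(row[key]) % nodes].append(row)
    return out

# A sequential stage (e.g. a Sequential File stage) emits ONE partition.
upstream = [[{"id": i} for i in range(8)]]

nodes = 4
assert len(hash_partition(upstream, nodes, "id")) == nodes  # repartitioned
assert len(same(upstream)) == 1  # Same still has 1 partition for 4 nodes
```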
Well, I took a copy of the original job (the one that was failing) and started removing the hashing one by one. I identified two stages that caused the problem, but I still don't understand what went wrong. There were two Modify stages that had hash partitioning on their inputs. The field they reference for partitioning is valid - it exists in the input schema (same name, datatype, nullability, etc.). If the hash is added, the job fails. If I remove it, the job works fine.