Server Jobs - record processing

Post questions here related to DataStage Server Edition, covering areas such as Server job design, DS BASIC, Routines, Job Sequences, etc.


Manikandan


Post by Manikandan »

I have records with 2 columns each (10 values in all) in my input sequential file, like:

A1 B1
A2 B2
A3 B3
A4 B1
A5 B5

I want to generate a surrogate key for each distinct column value (the data above should result in 9 surrogate keys) using a hash file lookup. My understanding is that when we use a dynamic hash file, records are cached in memory as each row is processed, but that is not what happens: when the same input file has a duplicate column-2 value for some records (B1 above), the lookup fails for the repeated row, and I could capture this with a flag.
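The behaviour the poster expects can be sketched as follows. This is a hypothetical Python illustration, not DataStage BASIC or an actual hash-file API: each value is looked up in a cache, a new surrogate key is assigned only on a miss, and duplicates reuse the existing key.

```python
# Hypothetical sketch of surrogate-key assignment with an in-memory cache
# (illustrative only; the dict stands in for a cached hash-file lookup).

def assign_surrogate_keys(rows):
    cache = {}       # value -> surrogate key
    next_key = 1
    keyed = []
    for col1, col2 in rows:
        out = []
        for value in (col1, col2):
            if value not in cache:      # lookup miss: assign a new key
                cache[value] = next_key
                next_key += 1
            out.append(cache[value])    # duplicates reuse the existing key
        keyed.append(tuple(out))
    return keyed, cache

rows = [("A1", "B1"), ("A2", "B2"), ("A3", "B3"), ("A4", "B1"), ("A5", "B5")]
keyed, cache = assign_surrogate_keys(rows)
print(len(cache))   # 9 distinct values -> 9 surrogate keys
print(keyed[3])     # row 4 reuses B1's key from row 1
```

With the five sample rows this yields 9 keys, matching the count the poster expects, because the duplicate B1 hits the cache instead of being keyed twice.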

I wish to know: even in Server jobs, are records processed not row by row but in bunches?

Thanks,
Manikandan
ray.wurlod

Post by ray.wurlod »

"Wish to know, even in Server Jobs records are not processed row by row but in bunch."

If row buffering is disabled, row by row. If row buffering is enabled and you have active-to-active stage links, then rows are buffered ("bunches"?).
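The failure mode the original poster hit can be illustrated with a hypothetical Python sketch (not DataStage code): if a bunch of rows is looked up before any of their newly assigned keys are written back, a duplicate value *within* the same bunch misses the lookup, while a duplicate arriving in a later bunch hits.

```python
# Hypothetical illustration of why buffered ("bunched") processing can make
# a lookup miss for a duplicate value inside the same bunch.

def lookup_then_write(bunch, table):
    """Look up the whole bunch first, then write new keys back (buffered style)."""
    misses = [v for v in bunch if v not in table]    # all lookups happen first
    for v in misses:
        table.setdefault(v, len(table) + 1)          # writes only land afterwards
    return misses

table = {}
# Two bunches of column-2 values; "B1" repeats inside the first bunch.
first_misses = lookup_then_write(["B1", "B2", "B1"], table)
second_misses = lookup_then_write(["B1", "B5"], table)
print(first_misses)   # duplicate "B1" misses twice within one bunch
print(second_misses)  # across bunches the write is visible, so "B1" hits
```

With strict row-by-row processing the write for the first "B1" would be visible before the duplicate is looked up, so the second lookup would hit; buffering reorders the two phases and breaks that assumption.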
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Manikandan

Post by Manikandan »

ray.wurlod wrote:"Even in Server jobs, are records processed not row by row but in bunches?"

If row buffering is disabled, row by row. If row buffering is enabled and you have active-to-active stage links, then rows are buffered ("bunches"?).


Thanks Ray, but if row buffering is disabled we may run into performance issues instead. For now I have handled the case by writing to an intermediate hash file and avoiding duplicates in the target.