Consumed more than 100,000 bytes looking for record delimiter

Post questions here relating to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

rajeevm
Participant
Posts: 135
Joined: Sun Jan 22, 2006 10:44 am

Consumed more than 100,000 bytes looking for record delimiter

Post by rajeevm »

Hi

I am trying to read data from a .csv file through a Sequential File stage and load it into a sequential file. I am getting the fatal error below:

Consumed more than 100,000 bytes looking for record delimiter; aborting

Final delimiter - end
Field delimiter - |
Quote - none
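
For reference, those format options correspond to a record schema roughly like the sketch below. The column names and types are only placeholders, since the actual layout isn't shown here; note that no record_delim property is set:

record
{final_delim=end, delim='|', quote=none}
(
    first_col: string[max=255];
    second_col: string[max=255];
)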

I cannot even view the data. What's causing the problem? My previous job used the same options and loaded fine.

I really appreciate your responses.

Thanks
rajeev
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

It has read 100,000 bytes without finding the first "|" character, assumed that your data don't match your metadata, and given up.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
rajeevm
Participant
Posts: 135
Joined: Sun Jan 22, 2006 10:44 am

Post by rajeevm »

Thanks Ray for your reply

But the problem was with the Record delimiter property. When I added that property and defaulted it to the UNIX newline, the job ran fine.
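
In schema terms, the working format now looks something like this (again a sketch with placeholder column names; the only change is the added record_delim):

record
{final_delim=end, record_delim='\n', delim='|', quote=none}
(
    first_col: string[max=255];
    second_col: string[max=255];
)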

Thanks
rajeev
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

A closer reading of the error message would have told me that. :oops:
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.