Error reading on import from Sequential file

Axa
Participant
Posts: 2
Joined: Wed Nov 08, 2006 12:55 am


Post by Axa »

Hi All,

When I am reading a delimited extract file using a Sequential File stage, I get the following error and eventually the job aborts:
"Consumed more than 100,000 bytes looking for record delimiter; aborting"

In this file there is a column of data type long, which can hold up to 10,000,000 bytes. In the worst case, the field delimiter will not appear until after 10,000,000 bytes. I believe the file should still be readable.

Is there a limit on how many bytes the Sequential File stage will scan looking for a field delimiter? How can I overcome a situation like this?
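Before tuning the stage, it can help to confirm whether any record in the extract really is that long. A minimal standalone script (not part of DataStage; the file path and the newline record delimiter are assumptions to adjust for your file) can report the longest delimited record in bytes:

```python
# Hypothetical diagnostic, run outside DataStage: find the length in bytes of
# the longest record, so you can compare it against the importer's scan limit.

def longest_record(path, delimiter=b"\n", chunk_size=1 << 20):
    """Return the byte length of the longest delimited record in the file."""
    longest = 0
    current = 0  # bytes of the record carried over from the previous chunk
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            parts = chunk.split(delimiter)
            # the first part continues the record left open by the last chunk
            current += len(parts[0])
            longest = max(longest, current)
            for part in parts[1:]:
                current = len(part)
                longest = max(longest, current)
    return longest
```

If the reported length exceeds the limit in the error message, the problem is the record size itself rather than a wrong delimiter setting.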

DataStage edition: 7.5.2 Enterprise Edition, Parallel job

Many Thanks,

Randima
ray.wurlod
Participant
Posts: 54595
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Look on the Format tab. The delimiter you've specified is not the one actually used in the file, and DataStage is still scanning for the one you've specified.

Alternatively, you've hit a hard-coded limit on how many bytes the import operator is allowed to scan without finding a delimiter. If you genuinely need fields that large, ask your support provider whether there is any undocumented environment variable, or other method, to override this limit.
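To check the first possibility, you can inspect which delimiter the file actually contains before touching the Format tab. A quick standalone sketch (the file name and the candidate delimiter set are assumptions; match them to your job) counts candidate delimiter bytes in a sample of the file:

```python
# Hypothetical check, run outside DataStage: count occurrences of likely
# delimiter characters in the first chunk of the file, most frequent first.
from collections import Counter

def delimiter_counts(path, candidates=b",|\t;\r\n", sample_bytes=1 << 16):
    """Map each candidate delimiter character to its count in a file sample."""
    with open(path, "rb") as f:
        sample = f.read(sample_bytes)
    counts = Counter(b for b in sample if b in candidates)
    return {chr(b): n for b, n in counts.most_common()}
```

If the most frequent candidate is not the delimiter set on the Format tab, that mismatch explains the endless scan.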
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.