Error reading on import from Sequential file

Posted: Wed Feb 13, 2008 9:32 pm
by Axa
Hi All,

When I read a delimited extract file with a Sequential File stage, I get the following error and the job eventually aborts:
"Consumed more than 100,000 bytes looking for record delimiter; aborting"

The file contains a column of data type long, which can hold a maximum of 10,000,000 bytes. In the worst case, the field delimiter will therefore not appear until after 10,000,000 bytes. I believe the file should still be readable.

Is there a limit on how far the Sequential File stage scans for a field delimiter? How can I overcome a situation like this?

DataStage edition: 7.5.2 Enterprise Edition, Parallel Job

Many Thanks,

Randima

Posted: Wed Feb 13, 2008 10:09 pm
by ray.wurlod
Look on the Format tab. The delimiter you've specified is probably not the one actually used in the file. DataStage is still looking for the one you've specified.
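One quick way to check this outside DataStage is to scan the start of the file and see where each candidate delimiter first appears. A minimal sketch in Python (the sample file name and the candidate delimiters are placeholders, not anything from the job):

```python
# Sketch: scan the first chunk of a file and report the byte offset at which
# each candidate delimiter first appears. -1 means "not found in the window",
# which is exactly the symptom behind the "Consumed more than 100,000 bytes"
# error. File name and candidates below are illustrative only.
def first_offsets(path, candidates=(b"\n", b"|", b","), limit=200_000):
    with open(path, "rb") as f:
        chunk = f.read(limit)
    return {c: chunk.find(c) for c in candidates}

# Write a tiny pipe-delimited sample so the sketch runs as-is.
with open("sample_extract.dat", "wb") as f:
    f.write(b"field1|field2\nrow2a|row2b\n")

print(first_offsets("sample_extract.dat"))
```

If the delimiter you configured on the Format tab comes back as -1 while some other character appears early and regularly, the stage definition and the file disagree.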

Alternatively, you've hit the hard-coded limit on the number of bytes the import operator is allowed to scan without finding a delimiter (100,000, per the error message). If you genuinely need fields that large, ask your support provider whether there is an undocumented environment variable, or some other method, to override this limit.
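For what it's worth, later IBM documentation describes environment variables that govern this scan in parallel jobs; whether they apply to 7.5.2 is something your support provider should confirm. A sketch of how they would be set in the job or project environment (the values shown are illustrative):

```shell
# Assumed behaviour, to be confirmed for your release:
# APT_DELIMITED_READ_SIZE     - initial block size the import operator reads
#                               while looking for a record delimiter.
# APT_MAX_DELIMITED_READ_SIZE - upper bound on bytes scanned before the
#                               "Consumed more than N bytes" abort fires.
export APT_DELIMITED_READ_SIZE=50000
export APT_MAX_DELIMITED_READ_SIZE=10000000   # sized for the worst-case long field
```

If these variables are honoured in your release, raising the maximum above the largest possible field length should let the record be read rather than aborted.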