Error reading on import from Sequential file
Posted: Wed Feb 13, 2008 9:32 pm
Hi All,
When I read a delimited extract file using a Sequential File stage, I get the following error and the job eventually aborts:
"Consumed more than 100,000 bytes looking for record delimiter; aborting"
This file contains a column of data type long, which can hold up to 10,000,000 bytes. In the worst case, the field delimiter appears only after those 10,000,000 bytes. I believe the record should still be readable.
Is there a limit on how many bytes the Sequential File stage scans when looking for a delimiter? How can I overcome a situation like this?
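To illustrate what the error message suggests is happening, here is a minimal Python sketch (hypothetical, not DataStage code): a reader that scans at most a fixed number of bytes for the record delimiter and aborts when none is found. The `MAX_SCAN` constant and `read_record` function are assumptions made for illustration, matching the 100,000-byte figure in the error message.

```python
# Hypothetical sketch of a bounded delimiter scan (not DataStage internals).
MAX_SCAN = 100_000  # bytes the reader will scan for a record delimiter

def read_record(buf: bytes, delimiter: bytes = b"\n") -> bytes:
    """Return one record from buf, aborting (like the error message
    describes) when no delimiter appears within MAX_SCAN bytes."""
    pos = buf.find(delimiter, 0, MAX_SCAN + len(delimiter))
    if pos == -1:
        raise RuntimeError(
            f"Consumed more than {MAX_SCAN:,} bytes looking for "
            "record delimiter; aborting"
        )
    return buf[:pos]

# A short record parses fine:
record = read_record(b"short field|more data\n")

# A record whose long field pushes the delimiter past the limit fails,
# even though the delimiter does exist further along in the buffer:
try:
    read_record(b"x" * 200_000 + b"\n")
except RuntimeError as e:
    print(e)
```

Under this model, a 10,000,000-byte field would always exceed the scan limit regardless of whether the file itself is well-formed, which is consistent with the behaviour described above.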
DataStage edition: 7.5.2 Enterprise Edition, parallel job
Many Thanks,
Randima