Complex Flat File - Multiple Record Types


srds2
Premium Member
Posts: 66
Joined: Tue Nov 29, 2011 6:56 pm

Complex Flat File - Multiple Record Types

Post by srds2 »

Hi there, I have a problem reading multiple record types using the Complex Flat File stage. My input file comes from the mainframe in EBCDIC (binary) format and contains three record types: Header, Detail and Trailer. I have defined three record definitions in the CFF stage along with the Record ID specifications (for each record type, the first column holds the record type). I have configured the CFF stage as below.

File Options: Record Type - Fixed Block
Record Options: Byte Order - Big-endian; Data Format - Binary; Character Set - EBCDIC
Record ID: Header_code='HDR'
           Detail_Code='DET'
           Trailer_Code='TRL'
With the above settings in the CFF stage I am not able to read the file. When I try to read just the Header or just the Detail records using a CFF stage in another job, I get the expected output, but the job aborts when I try to read the whole file (Header, Detail and Trailer) using one CFF stage with three output links. Can anyone help me resolve this issue?
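For reference, the Record ID logic above should behave like this rough Python sketch, which splits a fixed-length EBCDIC file on the 3-byte type code in column one (the file name, record length and code page are assumptions here, not actual job settings):

# split_by_type.py - debug sketch: count records per type code.
# RECLEN and the file name are placeholders for the real values.
RECLEN = 85  # assumed fixed record length

counts = {}
with open("input.ebcdic", "rb") as f:
    while True:
        rec = f.read(RECLEN)
        if not rec:
            break
        if len(rec) < RECLEN:
            print("short tail of", len(rec), "bytes - not fixed-length after all")
            break
        rec_type = rec[:3].decode("cp037")  # cp037 = EBCDIC (US) code page
        counts[rec_type] = counts.get(rec_type, 0) + 1

print(counts)  # anything other than HDR/DET/TRL means the layout is off

Anything other than the three expected codes in that output would mean the records are not aligned the way the stage expects.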

Thanks in advance!
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Re: Complex Flat file- Multiple Record Types

Post by chulett »

srds2 wrote:With the above settings in the CFF stage I am not able to read the file.
Seems to me it would help if you explained what this means, what happens.
-craig

"You can never have too many knives" -- Logan Nine Fingers
Aruna Gutti
Premium Member
Posts: 145
Joined: Fri Sep 21, 2007 9:35 am
Location: Boston

Post by Aruna Gutti »

What is the error message you are getting when trying to run the job with multiple record types?

Also, I am not sure whether it makes a difference, but I usually enclose my Record ID values in double quotes instead of single quotes.

Aruna.
srds2
Premium Member
Posts: 66
Joined: Tue Nov 29, 2011 6:56 pm

Post by srds2 »

Hi, Thanks for your response!

I have tried double quotes for the Record ID values as well, but the job still aborts with the warnings/errors below.

Warnings:
Complex_Flat_File_0,0: Field "complex_flat_file_0_record_type.DETAIL.ACT_NO" has import error and no default value; data: {b < 00 00 00 00 00 0c @ @ 11 10 a \ 00 00 00 0c}, at offset: 34
Complex_Flat_File_0,0: Import warning at record 1.
Import Unsuccessful at record 1
Errors:
Import error at record 2
Complex_Flat_File_0,0: The runLocally() of the operator failed.
APT_CombinedOperatorController,0: The runLocally() of the operator failed.
Sequential_File_18,0: Failure during execution of operator logic.
Sequential_File_18,0: Fatal Error: waitForWriteSignal(): Premature EOF on node XXXXX Interrupted system call

ACT_NO is a Display Numeric field of length 18 (ACT-NO PIC 9(18)).

My input is an EBCDIC (binary) file and my output links are connected to three sequential files, one for each record type.
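From what I understand, a Display Numeric (zoned decimal) field in EBCDIC should contain only the bytes 0xF0-0xF9, with the sign allowed in the last byte's zone nibble, so the 0x00 bytes in the dump above can never import as PIC 9(18); the trailing 0x0C even looks like it could be a packed-decimal (COMP-3) sign nibble rather than DISPLAY data. A minimal Python sketch of that validation (offset 34 and length 18 come from the error message; the file name and record length are placeholders):

# zoned_check.py - sketch: validate an EBCDIC zoned-decimal (DISPLAY) field.

def decode_zoned(raw: bytes) -> int:
    # Zoned decimal: each byte is zone nibble 0xF plus digit 0-9; the last
    # byte's zone may carry the sign (0xC/0xF positive, 0xD negative).
    digits = []
    for i, b in enumerate(raw):
        zone, digit = b >> 4, b & 0x0F
        is_last = i == len(raw) - 1
        if digit > 9 or (zone != 0xF and not (is_last and zone in (0xC, 0xD))):
            raise ValueError("byte 0x%02X at position %d is not zoned decimal" % (b, i))
        digits.append(str(digit))
    value = int("".join(digits))
    return -value if raw[-1] >> 4 == 0xD else value

with open("input.ebcdic", "rb") as f:       # placeholder file name
    record = f.read(85)                     # assumed record length
print(decode_zoned(record[34:34 + 18]))     # ACT_NO: PIC 9(18) at offset 34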

Thanks a lot for your help in resolving my issue.
Last edited by srds2 on Wed Dec 14, 2011 10:30 am, edited 1 time in total.
Aruna Gutti
Premium Member
Posts: 145
Joined: Fri Sep 21, 2007 9:35 am
Location: Boston

Post by Aruna Gutti »

Hi,

What is your job design? I usually have one input CFF stage with multiple output links, each going to a Transformer stage and then to an output file.

Aruna.
Aruna Gutti
Premium Member
Posts: 145
Joined: Fri Sep 21, 2007 9:35 am
Location: Boston

Post by Aruna Gutti »

Also, could you please make sure the record definition is correct for DETAIL?
The error looks like a file definition issue to me. As you are dealing with a fixed-length CFF file, make sure your COBOL file definition shows the correct record length for each type of record.
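One quick sanity check, assuming the file really is fixed-length: the total file size must be an exact multiple of the record length. A small sketch (the file name and candidate lengths are placeholders; try the lengths from your copybook):

# lrecl_check.py - sketch: a fixed-length file's size must divide evenly
# by its record length.
import os

size = os.path.getsize("input.ebcdic")   # placeholder file name
for lrecl in (80, 85, 100):              # placeholder candidate record lengths
    print(lrecl, "fits" if size % lrecl == 0 else "does not fit", "-", size, "bytes total")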
srds2
Premium Member
Posts: 66
Joined: Tue Nov 29, 2011 6:56 pm

Post by srds2 »

Hi, Thanks for your response.

In the current job I am not doing any transformations with a Transformer; I am just trying to extract the data (which is in EBCDIC binary) using the Complex Flat File stage and load it (as ASCII data) into sequential files.

Yes, I am giving the correct length as per the file seen on the mainframe, but I am not sure why I am unable to extract the data. Could you please let me know if there is any other setting I need to change?

Thanks for your help!
Aruna Gutti
Premium Member
Posts: 145
Joined: Fri Sep 21, 2007 9:35 am
Location: Boston

Post by Aruna Gutti »

These are my settings for Multiple record CFF stage:
record
{record_format={type=implicit}, delim=none, quote=none, binary, ebcdic, native_endian, charset='ISO-8859-1', round=round_inf, nofix_zero}

I have never tried a CFF extract job without a Transformer stage, as data coming from the mainframe usually needs to be transformed.

For example, a Display Numeric input field (0006) comes out as a decimal with a space prefix and a decimal-point suffix ( 0006.).

Also, your error message shows the job failed mapping the DETAIL definition on the very first record. Usually the first record should map to the Header definition.

You have to check your input data against your record definitions, or create a debug job and play with it just to see whether you can output all three record types before even trying to map the other fields.
FranklinE
Premium Member
Posts: 739
Joined: Tue Nov 25, 2008 2:19 pm
Location: Malvern, PA

Post by FranklinE »

Aruna points the way. I see two differences for you to explore: the record type should be implicit, and you need to use transformers to create output you can read.

We have a new FAQ that you may find helpful. Disclosure: I wrote it. :wink:
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson

Using mainframe data FAQ: viewtopic.php?t=143596 Using CFF FAQ: viewtopic.php?t=157872
srds2
Premium Member
Posts: 66
Joined: Tue Nov 29, 2011 6:56 pm

Post by srds2 »

Thanks a lot for directing me with the valuable source of information.

I am not able to see a Record Type = "implicit" option in the Complex Flat File stage properties under the File Options tab (DataStage version 8.5). I can only select the following record types:
Fixed, Fixed Block, Variable, Variable Block, Variable Spanned, Variable Block Spanned, or VR. Under Rounding I can only see the following options: Up, Down, Nearest Value, Truncate Towards Zero.

Can you please let me know where I can set the record type as Implicit in the Complex Flat file stage?

Thanks for your help!
FranklinE
Premium Member
Posts: 739
Joined: Tue Nov 25, 2008 2:19 pm
Location: Malvern, PA

Post by FranklinE »

Aruna, we need you! :)

In 7.5, CFF looks the way you describe it. The only suggestion I can think of is to use "Fixed" instead of "Fixed Block" and see if that helps.

As a follow-up, I recommend getting the data control block settings from the mainframe file catalog. If you don't know what that is, get one of your mainframers to help. That information might help me make further suggestions.
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson

Using mainframe data FAQ: viewtopic.php?t=143596 Using CFF FAQ: viewtopic.php?t=157872
srds2
Premium Member
Posts: 66
Joined: Tue Nov 29, 2011 6:56 pm

Post by srds2 »

Thanks for your response, Franklin. I tried the "Fixed" record type, but unfortunately it didn't work either; the job aborted with the same errors as the Fixed Block job.
I am waiting on the Data Control Block information from the mainframe team and will post it here as soon as I get it.

Thanks a lot for your suggestions.
Aruna Gutti
Premium Member
Posts: 145
Joined: Fri Sep 21, 2007 9:35 am
Location: Boston

Post by Aruna Gutti »

The record format type "implicit" is generated automatically when you define multiple records on the Records tab.

Could you create the jobs below to debug your record definitions? Each CFF stage should have a single record definition. Some of my jobs in production have this design, and surprisingly it runs faster than a CFF with multiple record definitions.

For Header
CFF with Header definition ----> Transform (constraint for Header) ---> peek or seq file stage

For Detail
CFF with Detail definition ----> Transform (constraint for Detail) ---> peek or seq file stage

For Trailer
CFF with Trailer definition ----> Transform (constraint for Trailer) ---> peek or seq file stage

Aruna.
srds2
Premium Member
Posts: 66
Joined: Tue Nov 29, 2011 6:56 pm

Post by srds2 »

Hi Franklin, can you please confirm my understanding below regarding the Data Control Block?

The Data Control Block is a 2- or 4-byte field which shows the length of each record in a variable block file. So, to read a variable block file, we need to have this Data Control Block in our input file?

When I received the input file, I was informed that it is a variable block file on the mainframe server but that it becomes a fixed block file when it is FTPed. I created a filler in the copybook for the Header and Trailer records to make them the same length as the Detail records (85 bytes) and tried to extract the data using Fixed Block as the record type. But yesterday I was told that the file I am getting is a variable block file and that the three record types have the lengths below.

The header only has 78 characters
The data records have 81 characters
The trailer record has 81 characters

When I tried to extract data from the input file with the record type set to "Variable Block" and the record lengths mentioned above, the job aborted and could not read any of the rows.

As the data is in EBCDIC, I am not able to see whether the file I received has that length information in it or not. Can you give me some information about variable block files so that I can check whether the file I am getting really is a variable block file?
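From what I have read, each record in a variable (RECFM=V/VB) file carries a 4-byte Record Descriptor Word: a 2-byte big-endian length (which includes the 4 RDW bytes themselves) followed by two zero bytes. A rough Python sketch to check whether those length prefixes survived the FTP (the file name is a placeholder):

# rdw_walk.py - sketch: walk the 4-byte RDWs; if every hop lands cleanly
# at the next record, the length prefixes are present.
with open("input.ebcdic", "rb") as f:    # placeholder file name
    n = 0
    while True:
        rdw = f.read(4)
        if not rdw:
            print(n, "records walked cleanly - RDWs look present")
            break
        length = int.from_bytes(rdw[:2], "big")  # includes the RDW itself
        if len(rdw) < 4 or rdw[2:4] != b"\x00\x00" or length <= 4:
            print("no valid RDW at record", n, "- bytes:", rdw.hex())
            break
        payload = f.read(length - 4)
        if len(payload) != length - 4:
            print("truncated record at", n)
            break
        n += 1

If the walk fails on record 0, the FTP step most likely stripped the length fields, and the file is no longer variable block on this side.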

Thanks a lot for your help Franklin.
srds2
Premium Member
Posts: 66
Joined: Tue Nov 29, 2011 6:56 pm

Post by srds2 »

Hi Aruna,

Yes, I have created separate jobs (for the Header and Detail records) to extract just the Header alone and the Detail alone, each with a single record definition in the CFF stage. Those single-record-definition jobs ran fine, but the problem comes when I define multiple record types in one CFF stage to extract the Header, Detail and Trailer records together from a single input file. I am not able to see where the problem is. Can you suggest anything from your past experience with the CFF stage?

Thanks for your help!
Last edited by srds2 on Thu Dec 15, 2011 8:58 pm, edited 1 time in total.