FTP a file to Mainframe

raghav_ds
Premium Member
Posts: 40
Joined: Wed May 04, 2011 2:21 am

Post by raghav_ds »

I am trying to FTP a trigger file to a Mainframe system using an FTP Enterprise stage. The file has only one column in DataStage, and the column is 10 characters long.

I have tried different format combinations when FTPing the file:

1) Record length set to "fixed", with no record delimiters and no field delimiters.
2) Record length set to "fixed", with the UNIX newline character as the record delimiter and no field delimiters.
3) Record type set to implicit.

But in all cases, the file written to the Mainframe system is created as a variable-block record with a record length of 255 characters. Because of this, the record is not read correctly on the mainframe.

Are there any options in the FTP Enterprise stage to set the record length explicitly, so that it does not create a record length of 255 characters?
FranklinE
Premium Member
Posts: 739
Joined: Tue Nov 25, 2008 2:19 pm
Location: Malvern, PA

Post by FranklinE »

In my experience with FTP Enterprise, you have no local (DataStage) control over the destination file's attributes. You need a process on the mainframe that catalogs your destination with the correct record length.

Then, with overwrite=yes, you can write your column with record type = implicit. Actually, on the Format tab, a good default approach (after clearing all settings) is to right-click on Record level and choose Mainframe (COBOL) under the Format as sub-menu.

In general, mainframe files are physically stored as one long string of bytes. The catalog provides formatting when the file is read or written.
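The mainframe-side catalog step Franklin describes could look something like the JCL sketch below, which allocates an empty fixed-block dataset matching the 10-character record from the original question. The job card, dataset name, and space/unit parameters are hypothetical placeholders, not values from this thread:

```jcl
//ALLOCTRG JOB (ACCT),'ALLOC TRIGGER',CLASS=A,MSGCLASS=X
//* IEFBR14 does nothing; the DD statement performs the allocation.
//* DSN, SPACE and UNIT are hypothetical -- adjust to your shop.
//STEP1    EXEC PGM=IEFBR14
//TRIGGER  DD DSN=PROD.TRIGGER.FILE,
//            DISP=(NEW,CATLG,DELETE),
//            DCB=(RECFM=FB,LRECL=10,BLKSIZE=0),
//            SPACE=(TRK,(1,1)),UNIT=SYSDA
```

BLKSIZE=0 lets the system pick an optimal block size; once this job has run, the DataStage FTP stage can overwrite the dataset without disturbing its cataloged attributes.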
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson

Using mainframe data FAQ: viewtopic.php?t=143596 Using CFF FAQ: viewtopic.php?t=157872
raghav_ds
Premium Member
Posts: 40
Joined: Wed May 04, 2011 2:21 am

Post by raghav_ds »

We are planning to FTP a file that has a date parameter in its name. This will create a fresh file every day on the Mainframe. Does that mean the cataloging on the Mainframe side should run every day before we transfer the file from DataStage?
FranklinE
Premium Member
Posts: 739
Joined: Tue Nov 25, 2008 2:19 pm
Location: Malvern, PA

Post by FranklinE »

The short answer is yes, if you want to avoid unreadable formatting when the FTP process creates the file. Since the mainframe file will have a different name every day, a mainframe job would have to initialize it as a catalog entry, where it remains empty until your DataStage job writes to it. The challenge is making sure your cataloged file name matches the URI file name in your FTP stage.

An alternative would be to write to the same mainframe file each day, then have a mainframe job copy it to another file with the date in the name. Cataloging that mainframe file would be a one-time task, because your stage will overwrite it each day.
raghav_ds
Premium Member
Posts: 40
Joined: Wed May 04, 2011 2:21 am

Post by raghav_ds »

Thank You Franklin.

I was searching for options in the FTP Enterprise stage, and since it is not possible with the FTP stage, I am considering the following options:

1) Write to a common file, which the Mainframe consumes and rewrites in whatever format they wish. (This has to be agreed with the Mainframe team.)
2) The Mainframe team pulls the file directly from the DataStage server.
3) Write a UNIX script that sets the following FTP site parameters:
LRECL=80 (logical record length = 80 bytes)
RECFM=FB (record format = fixed block)

I am analyzing the third option. Please let me know if you see any problems with this approach, and also any other options you may have.
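As a sketch of the third option (the host name, local path, and dataset name are hypothetical placeholders), the UNIX script could stage the FTP subcommands in a file, using `quote site` to set the record attributes before the `put`:

```shell
# Build the FTP command file; the site command sets the target
# dataset's attributes (fixed-block, 80-byte records) before transfer.
cat > ftp_cmds.txt <<'EOF'
quote site lrecl=80 recfm=fb
put /data/trigger.dat 'PROD.TRIGGER.FILE'
quit
EOF

# Run non-interactively against the mainframe host (hypothetical name);
# credentials would normally come from a .netrc entry rather than the script.
# ftp -n mvs.example.com < ftp_cmds.txt
```

Because the site command is sent before the put, the dataset is allocated with the requested attributes when it does not already exist, which avoids the 255-character variable-block default.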
FranklinE
Premium Member
Posts: 739
Joined: Tue Nov 25, 2008 2:19 pm
Location: Malvern, PA

Post by FranklinE »

Those are all good options. Your choice depends on your local concerns and requirements, but from your last post you know that already. :wink:

Running the FTP session from the mainframe should mean that the "get" will let you determine the file attributes. The problem with the FTP stage is that it doesn't offer us that level of control over the ftp session.

The same is true with using a native Unix script instead of the stage. You can write the command line to make sure the mainframe file has the correct attributes.
raghav_ds
Premium Member
Posts: 40
Joined: Wed May 04, 2011 2:21 am

Post by raghav_ds »

Thank you for your inputs Franklin.