DataStage job or container to read multiple file formats

Post questions here related to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

shin0066
Premium Member
Posts: 69
Joined: Tue Jun 12, 2007 8:42 am

DataStage job or container to read multiple file formats

Post by shin0066 »

Hi,

We have a requirement where we receive source files in multiple formats, such as delimited (CSV, pipe), fixed-length, and XML. We are looking for a common process that can read the different kinds of source files and map them to a specific target schema; the target formats can also vary.

Is this possible in DataStage? If so, what would be the major components needed to build it?

For example, a trigger file provides the source file location, source file format, target file location, and target file format. Based on that, we need to read the source file, transform it to the target format, and write it to the target location.

Thanks,
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

You might investigate the use of schema files.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
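For anyone looking for a concrete starting point: a schema file is a plain-text file that describes the record layout, which a Sequential File stage can read at runtime (via its Schema File property, with runtime column propagation enabled on the job) instead of a hard-coded table definition. Below is a minimal sketch for a comma-delimited source; the field names and types are hypothetical, not from the original post.

```
// Hypothetical schema file for a comma-delimited, quoted source
record
  {final_delim=end, delim=',', quote=double}
(
  CustomerID: int32;
  CustomerName: string[max=50];
  OrderDate: date;
  Amount: decimal[10,2];
)

// A fixed-length layout would instead omit delimiters, e.g.:
// record {record_length=fixed, delim=none}
// ( CustomerID: string[10]; CustomerName: string[50]; )
```

Passing the schema file path in as a job parameter lets the same generic job handle different source layouts by swapping the schema file, which fits the trigger-file-driven design described above.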
Terala
Premium Member
Posts: 73
Joined: Wed Apr 06, 2005 3:04 pm

Post by Terala »

Hi Ray,

I looked at schema files in the PX Developer's Guide, but it doesn't provide a sample job.

Does anyone have a sample job?

Thanks,