Insert/Update IMS DB

andru
Participant
Posts: 21
Joined: Tue Mar 02, 2004 12:25 am
Location: Chennai

Insert/Update IMS DB

Post by andru »

Hi,

I'm aware that we can access data from an IMS database using the IMS stage in the MVS edition. Is there a way to insert/update data in an IMS database from DataStage? This is an urgent requirement; any help is highly appreciated. Can the same be done with DS390 on a UNIX platform?

Thanks.
Mike
Premium Member
Posts: 1021
Joined: Sun Mar 03, 2002 6:01 pm
Location: Tampa, FL

Post by Mike »

IMS isn't a typical target for an ETL tool, so there isn't a "native" solution. Take a look at the External Target stage. You can write your own COBOL subroutine to update IMS and link it into your DataStage job via the External Target stage.
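
For illustration, here is a rough sketch of what such a subroutine could look like. All of the names (UPDIMS, CUSTSEG, CUSTKEY, the row and segment layouts) are made up, and the PCB mask is abbreviated; the one real piece is CBLTDLI, the interface IMS supplies for DL/I calls from COBOL. The idea is a get-hold-update: GHU the segment, REPL it if found, ISRT it if not.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. UPDIMS.
      *----------------------------------------------------------
      * HYPOTHETICAL EXTERNAL TARGET SUBROUTINE. RECEIVES ONE ROW
      * FROM THE GENERATED COBOL PLUS A DB PCB, AND APPLIES IT TO
      * IMS VIA CBLTDLI (GET-HOLD-UPDATE, INSERT WHEN NOT FOUND).
      *----------------------------------------------------------
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  FUNC-GHU         PIC X(4)  VALUE 'GHU '.
       01  FUNC-REPL        PIC X(4)  VALUE 'REPL'.
       01  FUNC-ISRT        PIC X(4)  VALUE 'ISRT'.
      * QUALIFIED SSA: SEGMENT CUSTSEG, KEY FIELD CUSTKEY
       01  QUAL-SSA.
           05  FILLER       PIC X(9)  VALUE 'CUSTSEG ('.
           05  FILLER       PIC X(10) VALUE 'CUSTKEY  ='.
           05  SSA-KEY      PIC X(8).
           05  FILLER       PIC X     VALUE ')'.
      * UNQUALIFIED SSA FOR THE INSERT PATH
       01  UNQUAL-SSA       PIC X(9)  VALUE 'CUSTSEG'.
       01  IO-AREA          PIC X(80).
       LINKAGE SECTION.
      * ABBREVIATED DB PCB MASK (STATUS CODE IS WHAT WE TEST)
       01  DB-PCB.
           05  PCB-DBDNAME  PIC X(8).
           05  PCB-SEGLEVEL PIC XX.
           05  PCB-STATUS   PIC XX.
           05  FILLER       PIC X(28).
      * ONE ROW FROM THE TARGET LINK (MADE-UP LAYOUT)
       01  IN-ROW.
           05  IN-KEY       PIC X(8).
           05  IN-DATA      PIC X(72).
       PROCEDURE DIVISION USING DB-PCB, IN-ROW.
           MOVE IN-KEY TO SSA-KEY
           CALL 'CBLTDLI' USING FUNC-GHU, DB-PCB, IO-AREA,
               QUAL-SSA
           EVALUATE PCB-STATUS
               WHEN SPACES
      *            SEGMENT HELD; REPLACE IT WITH THE NEW DATA
                   MOVE IN-DATA TO IO-AREA
                   CALL 'CBLTDLI' USING FUNC-REPL, DB-PCB,
                       IO-AREA
               WHEN 'GE'
      *            NOT FOUND; BUILD AND INSERT A NEW SEGMENT
                   MOVE IN-ROW TO IO-AREA
                   CALL 'CBLTDLI' USING FUNC-ISRT, DB-PCB,
                       IO-AREA, UNQUAL-SSA
               WHEN OTHER
                   CONTINUE
           END-EVALUATE
           GOBACK.

Your real segment layouts, SSAs, and error handling would come from your own DBD/PSB, of course; this is just to show the shape of the thing.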

Mike
Mike
Premium Member
Posts: 1021
Joined: Sun Mar 03, 2002 6:01 pm
Location: Tampa, FL

Post by Mike »

Actually, you can write the subroutine in any language you like as long as it is callable from COBOL.
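
For example (names made up), from the COBOL side it is just a standard CALL, so the callee could as easily be C, PL/I, or assembler, provided it is link-edited in and honors the linkage convention:

      * HYPOTHETICAL: UPDATE ROUTINE CODED IN C OR ASSEMBLER,
      * LINK-EDITED WITH THE GENERATED COBOL. ONLY THE STANDARD
      * BY-REFERENCE LINKAGE MATTERS TO THE CALLER.
           CALL 'IMSUPDC' USING BY REFERENCE DB-PCB, IN-ROW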

DS390 is the old name for what they now call the Enterprise MVS edition.

Mainframe jobs will not run on UNIX.

Mike
andru
Participant
Posts: 21
Joined: Tue Mar 02, 2004 12:25 am
Location: Chennai

Post by andru »

If we have the Enterprise MVS edition on UNIX, can we not execute the COBOL program from the External Target stage? The IMS update is required for a feedback mechanism in an ODS system.

Should I have a batch file trigger a DS job that dumps the data into a file, and then trigger a COBOL program to update the IMS DB?
Mike
Premium Member
Posts: 1021
Joined: Sun Mar 03, 2002 6:01 pm
Location: Tampa, FL

Post by Mike »

Andru,

It sounds as if you're somewhat confused about the capabilities/features of the Enterprise MVS edition.

You likely have the capability to design two types of jobs: 1) Server, and 2) Mainframe.

You use the Designer client to develop both types of jobs. There are some stages that are common to both job types (but be aware that the functionality is not necessarily identical). And, of course, there are stage types that are unique to each job type. For both types of jobs the design-time metadata is maintained on the DataStage server.

It is the run-time environment where the differences are greatest:

Server: Job is compiled. The GUI design gets converted to DataStage BASIC code that is compiled into an executable. The DataStage Server engine runs a server job.

Mainframe: Code is generated. The GUI design gets converted to code. This is COBOL source code (with embedded calls to a run-time library) and JCL (a compile member and a run member). The generated code is transferred to a mainframe. At this point DataStage is out of the picture. You use your typical mainframe facilities to compile, execute, schedule, perform change management, etc.

Now to your specific problem:
Depending on where your source data is, you may opt for an all-Mainframe job solution or a hybrid Server-plus-Mainframe job solution.

In both cases you will need to develop a mainframe routine (External Target) in which you update your IMS target. You will incorporate this into a Mainframe job.

If your source data is on the Mainframe, then you can use a Mainframe job to extract the source data and transform it for loading to IMS.

If your source data is on a server (Windows or UNIX), then you can develop a Server job to extract the source data and transform it into a sequential file that you will ftp to the mainframe where a Mainframe job will pick it up and load to IMS.
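
To make the mainframe half of that hybrid concrete, here is a rough skeleton of a DL/I batch program (all names hypothetical, with a FEEDIN DD pointing at the ftp'd file) that reads the sequential file and hands each record to an update subroutine like the one sketched earlier in this thread:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. LOADIMS.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT FEED-FILE ASSIGN TO FEEDIN.
       DATA DIVISION.
       FILE SECTION.
       FD  FEED-FILE RECORDING MODE F.
       01  FEED-REC         PIC X(80).
       WORKING-STORAGE SECTION.
       77  EOF-FLAG         PIC X     VALUE 'N'.
       LINKAGE SECTION.
      * DB PCB PASSED BY THE IMS REGION CONTROLLER AT ENTRY
       01  DB-PCB           PIC X(40).
       PROCEDURE DIVISION USING DB-PCB.
           OPEN INPUT FEED-FILE
           PERFORM UNTIL EOF-FLAG = 'Y'
               READ FEED-FILE
                   AT END
                       MOVE 'Y' TO EOF-FLAG
                   NOT AT END
      *                HAND THE ROW AND PCB TO THE UPDATE ROUTINE
                       CALL 'UPDIMS' USING DB-PCB, FEED-REC
               END-READ
           END-PERFORM
           CLOSE FEED-FILE
           GOBACK.

You would execute it under the usual IMS batch region controller (DFSRRC00) through your run JCL, with scheduling and change management handled by your normal mainframe facilities as described above.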

Mike