RTI - SOAP over HTTP using passive stages

Dedicated to DataStage and DataStage TX editions featuring IBM® Service-Oriented Architectures.

Moderators: chulett, rschirm

Kirtikumar
Participant
Posts: 437
Joined: Fri Oct 15, 2004 6:13 am
Location: Pune, India

RTI - SOAP over HTTP using passive stages

Post by Kirtikumar »

Hello,
I am working on RTI with SOAP over HTTP.
The issue I am facing is this: I am creating a hash file from the XML input in an RTI job, and next I want to use the same hash file for some lookups, but it gives a compile-time error: "input and output links for passive stage not permitted in RTI".

The previous design was something like this (DB2 is the driving link):

Code: Select all


                                                      DB2
                                                       |
                                                       |
RTIin--->XMLIn--->Transformer---->HashFile........>Transformer---->XMLOut

So I split the design as below, which removed the error.
In this design the hash file name is obviously the same in both halves. But I am not sure how the split flows within a single job window/palette are executed, i.e. their order of execution.

Code: Select all


RTIin--->XMLIn--->Transformer---->HashFile

                              DB2
                               |
                               |
          HashFile........>Transformer------>XMLOut

I am using the parsed XML data as the lookup rather than DB2 because the XML input has only one row, and I have to get all the matching rows for that single input row from the DB2 database table.


Thank You......
Kirti
vmcburney
Participant
Posts: 3593
Joined: Thu Jan 23, 2003 5:25 pm
Location: Australia, Melbourne

Post by vmcburney »

Can you take the hash file out of the design? It makes the real-time design a bit messy. It doesn't seem to like having it as both a target and a source in the one RTI job.
Kirtikumar
Participant
Posts: 437
Joined: Fri Oct 15, 2004 6:13 am
Location: Pune, India

Post by Kirtikumar »

Hello,
We will be removing hash files from RTI-enabled jobs. The reason is that multiple instances of an RTI-enabled job may be active at the same time, so multiple instances of the same job may try to update the same file and cause collisions. Thus any passive stage such as a hash or sequential file may become a bottleneck affecting job performance, so sequential/hash files should never be used.

So now, to get multiple rows from the lookup, we are trying to use the ODBC stage, which allows multi-row lookups.
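
For illustration only, a minimal sketch of the kind of user-defined SELECT such a multi-row reference link might issue; the ORDERS table, its columns, and the CUSTOMER_ID key are hypothetical examples, not taken from this thread:

Code: Select all


-- Hypothetical multi-row reference lookup: return ALL orders
-- matching the single key value supplied by the XML input row.
SELECT ORDER_ID, ORDER_DATE, ORDER_AMOUNT
FROM   ORDERS
WHERE  CUSTOMER_ID = ?

Every row the database returns for the one driving key is then processed, which is the one-row-in, many-rows-out pattern described above.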

Thank you,
Kirti
vmcburney wrote:Can you take the hash file out of the design? It makes the real-time design a bit messy. It doesn't seem to like having it as both a target and a source in the one RTI job.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

If you would prefer to use hashed files (so that they can be used in other, non-RTI-enabled jobs), you can use the UV stage to access them. The UV stage also offers multi-row return.

However, you don't get the memory cache (but, then, you don't get that with ODBC stage either). Another advantage of UV over ODBC is that you don't have the overhead of driver software; connection from a UV stage to hashed files is via a direct mechanism (the BASIC SQL Client Interface, or BCI).
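
As a sketch only: assuming the hashed file has a VOC pointer and dictionary-level column definitions in the project account (the file and column names below are hypothetical), a UV stage reference link returning multiple rows could issue a query of roughly this shape:

Code: Select all


SELECT ORDER_ID, ORDER_AMOUNT
FROM   CUST_ORDERS_HF
WHERE  CUSTOMER_ID = ?

The ? is bound to the key value arriving on the reference link, and with multi-row return enabled every matching record is delivered rather than just the first.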
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Kirtikumar
Participant
Posts: 437
Joined: Fri Oct 15, 2004 6:13 am
Location: Pune, India

Why not Hash files in RTI jobs?

Post by Kirtikumar »

Hi,
I can use the UV stage to access the hash files. But the problem with hash files in my jobs is twofold:
1. First, I will have to create/update the hash files from the DB2 stage each time an instance is invoked from the front end. (Creating/updating the hash file on each invocation is mandatory, as I want up-to-date data from the database to be sent to the callers, because sometimes, depending on need, I am also updating the database data.)
2. Then I have to use the UV stage to access those hash files.

And since hash files are created with a fixed name, and according to RTI concepts multiple instances of a job may exist at one time, multiple instances may try to update the same hash file and cause collisions, affecting job performance.
I am not very familiar with the ODBC or UV stages. But when I put them on the palette/DS window, both stages need a Data Source Name (DSN), so a DSN needs to be created for either stage. What then is the difference between the two stages? And as of now the hash files are not needed elsewhere in batch jobs.

Regards,
Kirti
ray.wurlod wrote:If you would prefer to use hashed files (so that they can be used in other, non-RTI-enabled jobs), you can use the UV stage to access them. The UV stage also offers multi-row return.

However, you don't get the memory cache (but, then, you don't get that with ODBC stage either). Another advantage of UV over ODBC is that you don't have the overhead of driver software; connection from a UV stage to hashed files is via a direct mechanism (the BASIC SQL Client Interface, or BCI).
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

To address your specific points:

1. This will be true no matter which mechanism you choose. If you want to send current information to the caller, you must obtain current information from the database. However, you can use a shared hashed file (publicly cached) and keep it up to date at the same time you are keeping your database up to date; see the first sketch after this list.

2. UV stages access local hashed files via the pre-defined DSN localuv. If the hashed file is in a different account, you need to edit uvodbc.config to identify a different DSN whose database type is UNIVERSE; the second sketch after this list shows the shape of that file.
The difference between UV and ODBC is that UV accesses only UniVerse tables, and does so without the use of an intermediate driver.
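
As a rough illustration of point 1, in the same diagram style used earlier in this thread, here is a hypothetical write-through design (an assumption, not a confirmed design from this thread) in which one Transformer updates the database and the publicly cached hashed file on parallel output links so the two stay in step:

Code: Select all


input---->Transformer---->DB2 (update)
              |
              +--------->HashedFile (public cache)

And for point 2, the general shape of uvodbc.config: <localuv> is the pre-defined local entry, and <myuv> is a hypothetical additional DSN of database type UNIVERSE (the host value is an assumption; see the documentation for pointing it at a different account):

Code: Select all


[ODBC DATA SOURCES]
<localuv>
DBMSTYPE = UNIVERSE
network = TCP/IP
service = uvserver
host = 127.0.0.1
<myuv>
DBMSTYPE = UNIVERSE
network = TCP/IP
service = uvserver
host = 127.0.0.1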
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Kirtikumar
Participant
Posts: 437
Joined: Fri Oct 15, 2004 6:13 am
Location: Pune, India

Job Design Problem

Post by Kirtikumar »

Hi Ray,
I agree that Hash Files can be used and provide the performance benifits. But the problem about which I am worrying is As in RTI multiple job instances will be created, each instance will try to update the same hash file and may cause collision.

Another problem with hash files is how I will be creating them in a single job at one point and use it at another point in the same job. As in my jobs I want to compare one row from XMLIn with database and want to extract multiple rows from there. For this previously I created Hash File of this single row and used it as reference link (As mentioned in my first mail) and made DB2 as driving linkBut using passive stages with input and output link is not permitted in RTI jobs. Now if I created Hash Files from db2 and used them in UV stage,the problem is of course of job design.
The design which I can think of is as follows:

Code: Select all



XMLIn-------------->Transformer
                         |
                         | (driving link)
            (ref)        |
UV stage----------->Transformer------->XMLOut

The UV stage will access the hash file created/updated from the database. But where do I create/update this hash file? We can't create/update it in a separate job, and I can't think of any way of creating it in this job from the database.

Thanks & Regards,
Kirti