Page 1 of 2

Calling a service with multiple requests

Posted: Thu Feb 24, 2011 1:44 pm
by suryadev
I developed a web service with Information Services Director.

When the service is called, a request is sent and I get the response; currently I do this through the soapUI tool.

Is there any way to design a job that calls a service where the request is read from a file and the response is stored in a file?


Please suggest!

Posted: Thu Feb 24, 2011 6:28 pm
by lstsaur
What kind of info is in the file? Your multiple requests?

Posted: Fri Feb 25, 2011 11:17 am
by suryadev
The file contains 5 fields, which are addr1, addr2, city, state, zip, and this is the request for the service.


Thanks

Posted: Fri Feb 25, 2011 1:53 pm
by ray.wurlod
suryadev wrote:The file contains 5 fields, which are addr1, addr2, city, state, zip, and this is the request for the service.


Thanks
Have you published your job as a service using Information Services Director?

Posted: Fri Feb 25, 2011 3:32 pm
by lstsaur
That's a single request with multiple input parameters. You certainly can put these 5 fields in a sequential file; just make sure the XPath defined in the Description column matches correctly based on the WSDL. Then send this data to your WS_Transform stage, in which you code the namespace info and XPath based on the web service's WSDL.
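Outside of DataStage, the per-row pattern described here can be sketched in plain Python. Note that the namespace, operation, and element names below are hypothetical placeholders for illustration; the real ones must come from the service's WSDL.

```python
import csv
import io
from xml.sax.saxutils import escape

# Hypothetical SOAP envelope template; real element and namespace
# names must be taken from the service's WSDL, not this sketch.
ENVELOPE = """<soapenv:Envelope
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:ns1="http://example.com/addressService">
  <soapenv:Body>
    <ns1:correctAddress>
      <ns1:addr1>{addr1}</ns1:addr1>
      <ns1:addr2>{addr2}</ns1:addr2>
      <ns1:city>{city}</ns1:city>
      <ns1:state>{state}</ns1:state>
      <ns1:zip>{zip}</ns1:zip>
    </ns1:correctAddress>
  </soapenv:Body>
</soapenv:Envelope>"""

def build_request(row):
    """Build one SOAP request body from a dict of the 5 fields."""
    return ENVELOPE.format(**{k: escape(v) for k, v in row.items()})

# One request per line of the sequential file (comma-delimited,
# same 5 columns the post mentions).
sample = "addr1,addr2,city,state,zip\n123 Main St,Apt 4,Springfield,IL,62701\n"
reader = csv.DictReader(io.StringIO(sample))
requests = [build_request(row) for row in reader]
# Each envelope would then be POSTed to the service endpoint
# (e.g. with urllib.request) and the response written to the output file.
```

This is essentially what the sequential-file-to-WS_Transform design does for you: one row in, one request out, one response back per row.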

Posted: Mon Feb 28, 2011 9:52 am
by suryadev
Yes, I published my job as a service using information services director.

So basically

sequential file source-------->web service transformer---------->sequential file target

Is this the right way to do it?

Also, is XPath the field's description, i.e., something like the length and data type?


Thanks

Posted: Mon Feb 28, 2011 2:06 pm
by lstsaur
No, it's not something like field lengths and data types. Look at your WS_Transform stage's Input --> Columns --> Description; it holds an XPath expression, something like, e.g.,
/ns1:YourProject/Server[@xsi:type="xsd:string"]/text() for the text of the Server element.
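For readers unfamiliar with XPath, such an expression simply addresses one element's text inside the request or response XML. A minimal Python illustration of the idea (the document and namespace here are invented; this is not how ISD itself evaluates the Description column):

```python
import xml.etree.ElementTree as ET

# Hypothetical XML fragment; the Description column's XPath tells the
# stage which element's text maps to which flat column.
xml_doc = """<ns1:YourProject xmlns:ns1="http://example.com/proj">
  <ns1:Server>production-01</ns1:Server>
</ns1:YourProject>"""

root = ET.fromstring(xml_doc)
ns = {"ns1": "http://example.com/proj"}
# Rough equivalent of /ns1:YourProject/ns1:Server/text()
server = root.findtext("ns1:Server", namespaces=ns)
```

So the Description column carries a path into the XML document, not metadata like length or data type.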

Posted: Mon Feb 28, 2011 9:56 pm
by eostic
Tell us also what your goal is.

I'm still not clear --- is the service you are calling in the WSTransformer the service published via ISD, or some other external service?

....and is the Job "with" the WSTransformer one that you are publishing with ISD?

What is the overall objective? If you have an ISD service, who are the consumers for it? A portal somewhere? Java developers?

Ernie

Posted: Tue Mar 01, 2011 11:21 am
by suryadev
Actually,

I designed a web service published via ISD in Information Server.

This web service is for correcting addresses, so I used CASS in the service.

As of now the request for the service is an address and the response is the corrected address.


My goal is to send a file as the request to this web service. The input file has around 20,000 addresses, which have to be the request for the service, and the response, the corrected addresses, should be stored in another file.

Posted: Tue Mar 01, 2011 1:30 pm
by eostic
I hate to be so blunt, but why?

Save ISD for the single-request, single-transaction, millisecond-response requirements that you have for address validation from a BlackBerry, or a Java portal, or some other real-time application that has many users and is coded by someone who needs abstract access to a web service using a tool that understands SOAP.

If you have 20k records to validate, use a batch job and be done with it. Put the CASS detail in a Shared Container that can be used in your ISD Job and also in a normal batch job.

"if".......you want to merely "kick off" that batch Job via SOAP, then make the batch Job (flat file to CASS to flat file) an ISD Job, and use a Job Parameter to identify the name of the desired file. Then...when publishing the Job, the value of the Job Parameter will become part of your WSDL --- you will pass up the "name" of the file with 20k rows, not the 20k rows themselves.

Ernie

Posted: Tue Mar 01, 2011 1:32 pm
by eostic
Let me be more clear in the first sentence: save the ISD "always on" paradigm (ISD Input stage starting the Job) for your address validation Jobs, and instead, as outlined in the rest of the post, use a regular "batch" DataStage Job that is published as a service.

Ernie

Posted: Tue Mar 01, 2011 3:39 pm
by suryadev
Thanks for the information.

So I should do a regular job and publish the job as a service using ISD.
For this job the request is only one address and the response will also be one address.

Actually I need two conditions to be satisfied: one is a service with one input and one output, and the other condition has multiple requests coming from a file.

After publishing this job as a service, can I build another job which has the file I was talking about as source, a WISD transformer, and an output file with the corrected addresses as target?

I have no idea about shared containers. What is the CASS detail which you asked me to put in a shared container?

Please suggest!

Posted: Tue Mar 01, 2011 5:35 pm
by eostic
One of the key points in the discussion above is "why web services"?

If you are processing files, and you have control, just use files. Please outline the reason for adding web services to this architecture; then it may be easier to understand your goals and provide the best guidance.

Ernie

Posted: Tue Mar 01, 2011 7:43 pm
by suryadev
I have files, and one of the tasks is to change the addresses in the files to corrected addresses.

The other task is that users call the service and give one address as a request, so that they get the corrected address as the response.


Thanks

Posted: Wed Mar 02, 2011 12:07 am
by eostic
Then the right approach is Shared Containers. This is a re-usable bit of DataStage logic where you would basically put just your CASS details, and then use that Shared Container in another Job that has ISD Input and ISD Output, and deploy that Job for your users.

Then include the Shared Container in a batch Job that reads files and writes files.

You get the benefit of re-usable DataStage components, your services users get a real-time single-address service, and you also get a batch job for doing your addresses.

There are probably many threads on Shared Containers here; it is a general concept for re-using bits of DataStage, independent of the specific techniques we're discussing. Do some research on those and let us know how it goes.

Ernie