Search found 376 matches

by Nagaraj
Wed May 15, 2013 2:18 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: XML Data formatted
Replies: 2
Views: 2222

XML Data formatted

I am trying to write data from the web service to a target Sequential File stage as a .txt file; the content is the XML output from the web service. I then try to read the file through the XML Input stage. The problem I see is that the generated sequential file is not a complete XML file; the data is half tru...
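
Before pointing the XML Input stage at the landing file, it is worth confirming the file is actually complete. A minimal Python sketch, assuming the web service output lands at /tmp/ws_response.txt (a hypothetical path):

Code:
import xml.etree.ElementTree as ET

def is_complete_xml(path):
    # A truncated or half-written file raises ParseError here.
    try:
        ET.parse(path)
        return True
    except ET.ParseError as err:
        print(path, "is not well-formed XML:", err)
        return False

# /tmp/ws_response.txt is a hypothetical landing path for the WS output.
if not is_complete_xml("/tmp/ws_response.txt"):
    raise SystemExit(1)   # stop before the downstream XML Input job runs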
by Nagaraj
Thu May 09, 2013 12:36 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: XML Output
Replies: 4
Views: 3105

Thank you. I output the whole chunk to a sequential file, reprocessed it using the XML Input stage, and parsed the data. But it looks like the SOAP response I see in the UI tool has data which can go to multiple tables. How do I read this and load it into three different tables?
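
To see the idea outside DataStage: one parsed document can yield several row sets, one per repeating element, each feeding its own table. A minimal Python sketch, where Customer, Order and Item are hypothetical element names (real SOAP responses are usually namespaced); in the XML Input stage the equivalent is one output link per repetition element:

Code:
import xml.etree.ElementTree as ET

root = ET.parse("soap_response.xml").getroot()

# Hypothetical repeating elements; substitute the real (namespaced) paths.
customers = [dict(e.attrib) for e in root.iter("Customer")]
orders    = [dict(e.attrib) for e in root.iter("Order")]
items     = [dict(e.attrib) for e in root.iter("Item")]

# Each list would feed its own target table.
print(len(customers), len(orders), len(items))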
by Nagaraj
Wed May 08, 2013 2:46 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: XML Output
Replies: 4
Views: 3105

Sample output data

Col1: 1 2 3
Col2: 2012-09-122012-02-042012-06-05
(the values from several input rows appended into a single output row)
by Nagaraj
Wed May 08, 2013 2:35 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: XML Output
Replies: 4
Views: 3105

XML Output

The WS is outputting 5 rows with, say, 10 columns. The output I get is just one row, with each column getting the values from all 5 rows; this happens for each column, so the column data is getting appended. Job design: WS_Client ----> Copy ----> TFM ---> WS_TFM ---> TFM ---> DataSet. Here the Client, which accepts a login ID and ...
by Nagaraj
Tue Apr 16, 2013 2:45 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Why use orchadmin to delete datasets?
Replies: 10
Views: 10146

Okay, if you write to a file path like /a/b/datsetname.ds, that .ds file is only the descriptor; the data files usually exist under /opt/IBM/InformationServer/Server/Datasets/, with names like datsetname.ds.userid.hostname..0000.0002.0000.1334.d1ec1d4e.0002.ef79b53e. You will have to delete the files in that Datasets directory too to get rid of the datase...
by Nagaraj
Tue Apr 16, 2013 12:57 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Why use orchadmin to delete datasets?
Replies: 10
Views: 10146

Adding more to that.....

A dataset is actually multiple files (see the sketch below). They are:
a) Descriptor file
b) Data files
c) Control files
d) Header files
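
For completeness, the deletion is scriptable. A minimal Python sketch that shells out to orchadmin, assuming orchadmin is on PATH and that the default.apt path below matches your install (both are assumptions):

Code:
import os
import subprocess

# orchadmin reads the descriptor, removes the segment files on the resource
# disks, then removes the descriptor itself. APT_CONFIG_FILE should point at
# the configuration file the dataset was created with (path is an assumption).
env = dict(os.environ,
           APT_CONFIG_FILE="/opt/IBM/InformationServer/Server/Configurations/default.apt")

subprocess.run(["orchadmin", "delete", "/a/b/datsetname.ds"], env=env, check=True)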
by Nagaraj
Tue Apr 16, 2013 12:49 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Why use orchadmin to delete datasets?
Replies: 10
Views: 10146

The normal rm command will only remove the .ds descriptor file, and it will not remove the data files placed in other locations; the descriptor holds the metadata, while the data files hold the actual records. Ideally, if you delete both sets of files manually, then you don't need the orchadmin utility at all; it's all about convenience. If you use orchadmin ...
by Nagaraj
Mon Feb 04, 2013 12:44 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Reject Handling
Replies: 4
Views: 3023

One more way to do it is with two jobs used in a sequence job:
1. Process the file and write any rejects to a reject file.
2. Send it to the concerned person through the notification stage if there is even one reject.
3. Use a command stage to check whether the reject file is empty and proc...
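
For step 3, the command stage only needs an exit status the sequence can trigger on. A minimal Python sketch, where the reject-file path is hypothetical:

Code:
import os
import sys

# Exit 1 when the reject file exists and has content, 0 otherwise, so the
# sequence trigger can route to the notification activity only on rejects.
reject_path = "/data/rejects/input_file.rej"   # hypothetical path

has_rejects = os.path.exists(reject_path) and os.path.getsize(reject_path) > 0
sys.exit(1 if has_rejects else 0)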
by Nagaraj
Mon Feb 04, 2013 12:34 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Reject Handling
Replies: 4
Views: 3023

Okay: add a column X using a Column Generator (with value Y) and append it to the stream. Then add a Transformer to split the stream into two links: one for rows where the condition is not met, the other a straight pass-through. Do a fork-join between them to see if any row comes from the reject link, and try t...
by Nagaraj
Mon Feb 04, 2013 10:57 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Reject Handling
Replies: 4
Views: 3023

Use a Transformer to split the valid and invalid records into two separate files, or send one to a separate reject file and the other to a target DB or whatever target you have.

PS: Apply the business condition at your discretion.
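
Outside DataStage, the Transformer split amounts to routing each row by a condition. A minimal Python sketch with a hypothetical business rule:

Code:
# is_valid stands in for whatever business condition you apply.
def is_valid(row):
    return row.get("amount", 0) >= 0

rows = [{"id": 1, "amount": 10}, {"id": 2, "amount": -5}]

valid   = [r for r in rows if is_valid(r)]      # -> target DB / file
rejects = [r for r in rows if not is_valid(r)]  # -> separate reject file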
by Nagaraj
Fri Feb 01, 2013 8:47 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Parallel job reports failure (code 134)
Replies: 8
Views: 7847

Also, please check the column-to-column mapping and the data types; sometimes a completely wrong mapping throws this error abruptly.
by Nagaraj
Wed Jan 30, 2013 8:55 am
Forum: IBM QualityStage
Topic: Advice on joining on 'unexact' match
Replies: 8
Views: 6501

If I have understood it correctly: format the ledger-system column data into a standard format so that it joins exactly with the AR system, stripping the extraneous data into a separate column. Append it back after you are done with the join, matching the ledger-system record by carrying an identifier upstrea...
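
The standardize-then-join idea, sketched in Python; the cleanup rule (drop non-alphanumerics, uppercase) is a hypothetical standardization, and the original column is kept intact so it can be re-appended after the join:

Code:
import re

def standardize(key):
    # Hypothetical cleanup: strip extraneous characters, normalize case.
    return re.sub(r"[^A-Za-z0-9]", "", key).upper()

ledger = [{"ledger_id": 1, "acct": "ab-1023/x"}]   # unexact keys
ar     = {"AB1023X": {"balance": 250}}             # AR side keyed exactly

for row in ledger:
    match = ar.get(standardize(row["acct"]))       # exact join on cleaned key
    print(row["ledger_id"], row["acct"], match)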
by Nagaraj
Tue Jan 29, 2013 4:14 pm
Forum: General
Topic: Identify Jobs that load/reads from a particular database
Replies: 5
Views: 3087

Even this works only at the table level; I need to find whichever jobs on the server are either reading from or writing to the DB. This one takes the table name:

SELECT DS_JOBS.NAME AS JOB_NAME,
       DS_JOBS.CATEGORY,
       DS_JOBOBJECTS.NAME AS OBJECT_NAME,
       DS_JOBOBJECTS.OLETYPE,
       EVAL DS_JOBOBJECTS."if index(upcase(@RECORD),'TAB...
by Nagaraj
Mon Jan 28, 2013 9:47 am
Forum: General
Topic: Identify Jobs that load/reads from a particular database
Replies: 5
Views: 3087

Is there any way to find this in the Designer environment? From what I tried, the DB object is usually not stored in the Designer; it's only at the table level.
by Nagaraj
Fri Jan 25, 2013 3:03 pm
Forum: General
Topic: Identify Jobs that load/reads from a particular database
Replies: 5
Views: 3087

Identify Jobs that load/reads from a particular database

How do I identify the job names that are reading from/writing to a particular database?
There is maintenance going on for one particular database, and there are hundreds of jobs scheduled on the server; it's hard to find the jobs reading from/loading this particular database without manual intervention.
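
One approach that works at the connection level rather than the table level is to export the project to a .dsx file and search it for the data source name. A minimal Python sketch, where project_export.dsx and the DSN string PRODDB are assumptions, and the BEGIN DSJOB / Identifier layout is the common .dsx export format (it can vary by version):

Code:
# Scan a project .dsx export for jobs whose definition mentions a DSN.
jobs, current_job = set(), None

with open("project_export.dsx", encoding="latin-1") as dsx:
    for line in dsx:
        stripped = line.strip()
        if stripped.startswith("BEGIN DSJOB"):
            current_job = None            # next Identifier line names the job
        elif stripped.startswith("Identifier") and current_job is None:
            current_job = stripped.replace("Identifier", "", 1).strip().strip('"')
        if current_job and "PRODDB" in line:
            jobs.add(current_job)

print(sorted(jobs))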