Search found 39 matches

by rohit_mca2003
Mon Apr 22, 2019 11:53 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Need to publish message to Kafka Cluster
Replies: 3
Views: 4342

Need to publish message to Kafka Cluster

Hi, I am using DataStage v11.5. I have to publish messages to a Kafka cluster using the Kafka Connector (SSL authentication), but I am not sure what configuration and user access are required. Could you please help if you know how to set up the secure connection (SSL) and any other configuration requi...
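For what it's worth, the client-side SSL settings usually involved look like the sketch below. This uses the confluent-kafka Python client purely as a stand-in for the DataStage Kafka Connector; the broker address, topic name, and certificate/key paths are placeholders, not values from the post.

# Minimal SSL producer sketch, assuming the confluent-kafka Python client.
# All hosts, paths, and the topic name below are illustrative placeholders.
from confluent_kafka import Producer

conf = {
    "bootstrap.servers": "broker1.example.com:9093",          # SSL listener port
    "security.protocol": "SSL",
    "ssl.ca.location": "/path/to/ca-cert.pem",                # CA that signed the broker cert
    "ssl.certificate.location": "/path/to/client-cert.pem",   # client cert (mutual TLS)
    "ssl.key.location": "/path/to/client-key.pem",
    "ssl.key.password": "changeit",
}

producer = Producer(conf)
producer.produce("example.topic", value=b"test message")
producer.flush()   # block until the broker has acknowledged the message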
by rohit_mca2003
Thu Jan 31, 2019 7:58 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to retain ONE record after comparing 2 different columns
Replies: 5
Views: 4173

After some tests I got the solution. I compared 'Col1' and 'Col2' (whether number or string) and created a new KEY column: if Col1 <= Col2 then Key = Col1:Col2, else Key = Col2:Col1. This gives the same key for both records, and I then de-duplicate the records based on this 'Key' column. Than...
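For illustration, the same canonical-key idea in a minimal Python sketch; the column names Party1/Party2 and the sample rows are assumptions based on the thread's example, not actual data.

# Sketch of the canonical-key de-duplication described above.
# Field names and values are illustrative only.
rows = [
    {"Party1": "100", "Party2": "200"},
    {"Party1": "200", "Party2": "100"},  # mirror of the first row
]

seen = set()
deduped = []
for row in rows:
    a, b = row["Party1"], row["Party2"]
    # Order the pair so both mirrored records produce the same key.
    key = f"{a}:{b}" if a <= b else f"{b}:{a}"
    if key not in seen:
        seen.add(key)
        deduped.append(row)

print(deduped)  # only one of the two mirrored records survives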
by rohit_mca2003
Thu Jan 31, 2019 3:34 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to retain ONE record after comparing 2 different columns
Replies: 5
Views: 4173

Thanks for the reply. I thought about this approach, but it may not work: whatever join happens between Record1 (Party1) and Record2 (Party2) will also happen between Record2 (Party1) and Record1 (Party2),
so we will end up with 2 records again.

Thanks,
by rohit_mca2003
Wed Jan 30, 2019 7:55 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to retain ONE record after comparing 2 different columns
Replies: 5
Views: 4173

How to retain ONE record after comparing 2 different columns

Hi, I have a file where I need to retain only 1 of 2 records in which 'Col1/Party1 of record1' is equal to 'Col2/Party2 of the other record'. These records may not come in sequence. Sample:

Column      Party1  Party2
Record1 --> 100     200
Record2 --> 20...
by rohit_mca2003
Sun May 20, 2018 10:57 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Incomplete record schema when using OSH_PRINT_SCHEMAS
Replies: 1
Views: 1966

Incomplete record schema when using OSH_PRINT_SCHEMAS

Hi, I need to print the record schema for each operator in the job, mainly the record schema of the final target (sequential file). Job design: CFF Stage (Source) --> Transformer --> Seq File (Target). But the source file record has more than 10000 columns, and when I use OSH_PRINT_SCHEMAS in the job, I ...
by rohit_mca2003
Sun Mar 04, 2018 8:25 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Issue while reading sequential file
Replies: 5
Views: 4005

To answer the queries: 1. We actually have a mechanism to create the schema file from the file's metadata; since the metadata says this is a DOS-format file, the schema file automatically takes '\r\n' as the record delimiter string. Other files from the same source work fine. 2. I already checked the file in UNIX and it ha...
by rohit_mca2003
Fri Mar 02, 2018 3:24 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to prevent CHECKSUM stage to re-arrange column names
Replies: 7
Views: 6752

To answer everyone's query: we have resolved this issue, and it may be a good use case for the future. I am aware that DataStage puts '|' after each field passed to the 'checksum' operator and that it uses an MD5 hash. I tried 2 solutions and both worked: 1. In the Generic stage, I wrote small code for the 'transform' o...
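A rough Python illustration of that behaviour, hashing pipe-terminated field values in a caller-controlled column order; the column names and values are assumptions, and the exact byte layout DataStage hashes may differ from this sketch.

import hashlib

# Sketch: MD5 over pipe-terminated field values in a fixed, caller-controlled
# column order (i.e. source order, not alphabetical). Row and columns are illustrative.
row = {"Col1": "A", "Col2": "B", "Test_Val": "42"}
column_order = ["Col1", "Col2", "Test_Val"]

payload = "".join(str(row[c]) + "|" for c in column_order)   # '|' after each field
checksum = hashlib.md5(payload.encode("utf-8")).hexdigest()
print(checksum)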
by rohit_mca2003
Fri Mar 02, 2018 3:07 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to prevent CHECKSUM stage to re-arrange column names
Replies: 7
Views: 6752

Best to open a support case then; the documentation doesn't seem to show that as an option. Out of curiosity, what 'existing application' was used to generate the checksums, and what did it use to generate them? I've had 'issues' trying to match checksums from different systems, hence the question. E...
by rohit_mca2003
Fri Mar 02, 2018 3:01 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Issue while reading sequential file
Replies: 5
Views: 4005

Issue while reading sequential file

Hi, this is a very common error, but I did not find a suitable answer in other entries, so I am posting it as a new query. I am trying to read a CSV file whose record delimiter is '\r\n'; it is a Windows file, and I can see <CR><LF> in editors after each record. The schema file has been defined as below: record {...
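One quick way to confirm what the file really ends records with is to inspect the raw bytes, for example with the small Python sketch below; the file name is a placeholder.

# Count the record terminators actually present in a sample of the file.
# 'input.csv' is a placeholder name, not the file from the post.
with open("input.csv", "rb") as f:
    data = f.read(4096)

crlf = data.count(b"\r\n")
lf_only = data.count(b"\n") - crlf
print(f"\\r\\n terminators: {crlf}, bare \\n terminators: {lf_only}")
# If bare \n dominates, a schema that expects '\r\n' as the record delimiter
# string will not match the records.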
by rohit_mca2003
Mon Jan 15, 2018 10:10 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to prevent CHECKSUM stage to re-arrange column names
Replies: 7
Views: 6752

Thanks Craig. I need to generate the CHECKSUM based on the column order, as there is a requirement to have the keys in a specific order. An existing application is running in production where the hash was generated based on the order of the columns, and to match the existing hash values we need to follow the same order of keys/col...
by rohit_mca2003
Mon Jan 15, 2018 4:46 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: SORT:Restrict Memory Usage
Replies: 5
Views: 7896

Try setting the environment variable APT_OLD_BOUNDED_LENGTH to 'True'.
by rohit_mca2003
Mon Jan 15, 2018 4:37 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to prevent CHECKSUM stage to re-arrange column names
Replies: 7
Views: 6752

How to prevent CHECKSUM stage to re-arrange column names

Hi everyone, I have an issue while computing a hash value using the 'CHECKSUM' stage. It seems that the CHECKSUM stage re-arranges the columns by name while computing the hash value. Example: Source --> Col1, Col2, Test_Val. Case1: If I generate the checksum keeping the order as (Col1, Col2, Test_Va...
by rohit_mca2003
Thu Jan 11, 2018 10:31 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to stop default type conversion in MODIFY stage
Replies: 3
Views: 2662

This issue was resolved when I defined the conversion as below:

Output_Col:string[max=20]=string_trim[in_col]

If I assigned the type as string[20], it was treated as CHAR and spaces were added, so defining it as max=20 did the work; the column now behaves as a varchar and is not padded out to the maximum length.

Thanks.
by rohit_mca2003
Thu Jan 11, 2018 10:08 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to stop default type conversion in MODIFY stage
Replies: 3
Views: 2662

This option does not restrict the length of output_col. We have used this parameter earlier to restrict the usage of disk and scratch space.
by rohit_mca2003
Thu Jan 11, 2018 8:15 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to stop default type conversion in MODIFY stage
Replies: 3
Views: 2662

How to stop default type conversion in MODIFY stage

Hi, I have a generic job where I am handling NULLs and performing a TRIM on string-type columns. For the output columns of these, if I do not assign a data length for the output, then by default it increases the length of the column. Example: CASE1: Output_Col:string=handle_null(in_col,' ') Output_Col:st...