Search found 97 matches

by leomauer
Mon Jan 27, 2014 10:20 am
Forum: Information Analyzer (formerly ProfileStage)
Topic: Can IBM Information Analyzer read DataStage metadata?
Replies: 1
Views: 3901

Can IBM Information Analyzer read DataStage metadata?

I am trying to find out how to make the file/table definitions that have already been imported into DataStage visible in Information Analyzer. Is it even possible? I thought it was, but I can't find out how.
My main interest is making already-defined sequential file layouts available in Information Analyzer.
by leomauer
Sun Jul 14, 2013 6:38 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: APT_STRING_PADCHAR
Replies: 4
Views: 4279

And again, the fact is the fact: different "spellings" of the same value produce different results. It may not be about the value itself, but about how DataStage interprets it in its internal code. All I can see is that the results are different. I am still waiting. Could somebody run a test on their s...
by leomauer
Sun Jul 14, 2013 9:12 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: APT_STRING_PADCHAR
Replies: 4
Views: 4279

That is what everybody says, and yet according to my example in the post, it acts as though 0x020 is the right setting.
Any other opinion?
by leomauer
Wed Jul 10, 2013 2:03 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: APT_STRING_PADCHAR
Replies: 4
Views: 4279

APT_STRING_PADCHAR

I am creating a test dataset using the Row Generator and RCP. When I have APT_STRING_PADCHAR set to 0x20, the schema of my test dataset is: record ( field: string[10]; ) But when I set it to 0x020, the schema becomes what I expect: record ( field: string[10, padchar=' ' ]; ) And yet everybody i...
by leomauer
Thu Mar 21, 2013 9:21 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Processing date fields with RCP
Replies: 1
Views: 2039

Processing date fields with RCP

Our jobs are using Runtime Column Propagation. What I need is to dynamically analyze the incoming layout and, if a field's data type is Date or Timestamp, apply some logic to each of those fields. I am thinking of a Custom Stage, but I do not know the functions to access the incoming layout informat...
by leomauer
Thu Nov 29, 2012 3:25 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: CFF stage OCCURS DEPENDING ON Read issue
Replies: 1
Views: 2883

rhaddur
Did you find a solution using the CFF stage?
Does anybody know the answer to the question: how do you reference a field coming out of the CFF stage as part of a subrecord with the reference column, like in the example above?
by leomauer
Tue Jul 31, 2012 1:01 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: getting this error while reading data from SQL Server
Replies: 4
Views: 10480

Today we got the same error and learned that varchar(max) is treated as a CLOB (text) field and has to go at the bottom of the SELECT statement:
SELECT
<all non CLOB fields>
<all CLOB fields>
FROM
<table name>
....
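For illustration, a minimal sketch of such a reordered statement (table and column names are hypothetical):

SELECT
  ORDER_ID,      -- non-CLOB columns first
  ORDER_DATE,
  NOTES          -- varchar(max), treated as CLOB, goes last
FROM ORDERS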
by leomauer
Mon Mar 14, 2011 1:09 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Character to Number
Replies: 6
Views: 4759

I see you are using DataStage Server edition. Then write a BASIC routine looping through the string character by character and concatenating the results until you reach the end of the string.
As for trimming zeroes: in the string, the absence of a character is not converted to anything.
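A minimal sketch of such a routine, written as a DataStage BASIC transform function (the argument name Arg1 follows the routine-editor default; everything else is hypothetical):

* Loop through the input string, appending the numeric code of each character
Ans = ""
For I = 1 To Len(Arg1)
   Ans = Ans : Seq(Arg1[I,1])
Next I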
by leomauer
Mon Mar 14, 2011 12:54 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Character to Number
Replies: 6
Views: 4759

Use the Seq function for each character in the string:
Seq(<string>[1,1]) : Seq(<string>[2,1]) : Seq(<string>[3,1]) and so on.
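For example, on an ASCII system Seq("A") returns 65, so the string "ABC" would yield 656667.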
by leomauer
Mon Mar 14, 2011 12:24 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Dynamically handling file
Replies: 4
Views: 3675

Assumptions: 1. The file is delimited. 2. Only fields that exist in every file need to be processed, and the varying part of the record is not processed. 3. The required fields are always in the same position in the record. I would do it in a transformer: 1. Define the input file record as an unbound varc...
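A sketch of the transformer derivations this implies, assuming a comma delimiter and the whole record read into a single unbound varchar column (link and column names are hypothetical):

Field(InLink.RawRecord, ",", 1)   for the first required field
Field(InLink.RawRecord, ",", 3)   for the third required field

Field() in a transformer derivation returns the n-th delimited substring, so each required field can be picked out by its fixed position.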
by leomauer
Tue Feb 08, 2011 7:21 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: DB2 API stage
Replies: 4
Views: 3713

I doubt the DB2 API stage can execute in parallel. As far as I know, only the DB2 Enterprise stage can execute in parallel. Please check the documentation. At the very least, nothing prevents you from setting it to run in parallel. And according to the DataStage framework, a stage that can be set to run in par...
by leomauer
Mon Feb 07, 2011 3:15 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: DB2 API stage
Replies: 4
Views: 3713

DB2 API stage

Is there any downside to running a load using the DB2 API stage in parallel execution mode? The target DB2 is on a mainframe.
by leomauer
Wed Feb 17, 2010 6:48 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Copybook to DataStage schema file
Replies: 4
Views: 10412

Import the COBOL copybook into DataStage. Then open the table definition, go to the Layout tab, open the Parallel option and (right-click to) save the record schema in a file. Voila! ... I know it is that simple. Unfortunately I need to do it dynamically, before the job runs. I use this schema file ...
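For reference, the saved record schema for a simple copybook might look like this (field names and types are hypothetical):

record
(  EMP_NAME: string[20];
   EMP_SALARY: decimal[7,2];
)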
by leomauer
Wed Feb 17, 2010 8:21 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Copybook to DataStage schema file
Replies: 4
Views: 10412

Copybook to DataStage schema file

Does anybody know of a UNIX command-line utility that converts COBOL copybooks to DataStage schema files?
by leomauer
Fri Mar 20, 2009 12:57 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Split Data into different files
Replies: 4
Views: 3548

I do not think you can do it in an elegant way. Of course you can try to count bytes written, but even then DataStage must open the number of output files predefined by the design of the job. Unless, of course, you are ready to use Custom stages. But what you can do is define multiple output file n...