Search found 15603 matches

by ArndW
Thu Jul 29, 2010 3:01 am
Forum: General
Topic: Dynamically reading MF binary files
Replies: 3
Views: 1477

To add to Vince's post - once you have those COBOL copybook definitions loaded into DataStage you can write a job which decides at runtime which (previously loaded) schema file is used to read the input data. This job would need to use runtime column propagation to process data, but it is a possible...
by ArndW
Thu Jul 29, 2010 2:56 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: notnull conversion with non-nullable input
Replies: 9
Views: 7555

Turn on $APT_PRINT_SCHEMAS and look at each link's schema to confirm that what DataStage has defined matches what you think is defined.
by ArndW
Thu Jul 29, 2010 2:47 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Tgt_StgItemXLoc,0: Field "EFFECTIVE_START_DT" has
Replies: 5
Views: 1680

Dates are stored in internal format; your View Data default format is dd-mmm-yy, and this explains the different "look". I suggest you add a reject link to your output in order to see the actual data value that is causing the format error. My initial guess is that you have a date which...
by ArndW
Thu Jul 29, 2010 2:38 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: surrogate state file update!!
Replies: 5
Views: 1576

The default "file block size" is system generated and this is what is causing your gaps. Basically, if the block size is 50 then the system "gets" a block of 50 and when those are used it gets another block. If only 1 record is written, then the extra 49 are never used. If you ca...
by ArndW
Thu Jul 29, 2010 2:22 am
Forum: General
Topic: Error compiling a sequence
Replies: 1
Views: 772

Re: Error compiling a sequence

...vJob1 = """:vJob:"""... is syntactically incorrect; use vJob1 = '"':vJob:'"' instead. Actually, that statement should be deleted, since the job name should not have quotes around it and you don't use the vJob1 variable in your code. The error is that you need to c...
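The quoting fix above can be sketched in DataStage BASIC; the job name value here is an assumed example, not from the original post:

```basic
* In DataStage BASIC, ':' concatenates strings and '"' is a one-character
* string holding a literal double quote, so this wraps the value in quotes:
vJob = "MyJob"                 ;* assumed example value
vJob1 = '"' : vJob : '"'       ;* vJob1 now holds "MyJob", quotes included
* As noted above, the job name should be passed without quotes anyway,
* so the better fix is usually to drop vJob1 and use vJob directly.
```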
by ArndW
Thu Jul 29, 2010 2:20 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Segment files...
Replies: 1
Views: 1060

Data sets consist of a dataset descriptor file which contains no actual data, just references to the information about the data. Each dataset has one or more nodes, and each of these can contain one or more data files. These are the "segment" files that you are referring to. The "or...
by ArndW
Thu Jul 29, 2010 2:11 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Getting an error while using DB2 Bulk stage as target
Replies: 4
Views: 3736

If you check the DB2 documentation, the -551 means that the caller does not have privileges to perform the given action. I am not sure what GRANTs are required for a load vs. an insert, but my guess is that it has to do with dropping and re-creating indices. Talk to your DBA to get this resolved.
by ArndW
Wed Jul 28, 2010 10:49 am
Forum: IBM® InfoSphere DataStage Server Edition
Topic: Structuring a Server Job to Delete/Prune Input Records
Replies: 7
Views: 2335

You can have your transform stage output records to a flat file; then have an output link on that file which won't get executed until the last row has been written to it.
by ArndW
Wed Jul 28, 2010 10:42 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Insert then Update
Replies: 6
Views: 1500

Why not have two output stages in one job? Could you give a short example of what you want to do, to explain the problem?
by ArndW
Wed Jul 28, 2010 9:55 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Reading a Dataset with a DB stage
Replies: 3
Views: 786

No, DataSets are not "SQL" compatible and can only be read using the built-in Data Set stage or, for the brave, via the orchadmin dump command.
by ArndW
Wed Jul 28, 2010 8:10 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to create and execute routines in PX
Replies: 12
Views: 9493

"If we have custom routines written for server jobs and want to convert to px jobs, how can we do that? Do we need to rewrite in C++? Thanks."

Yes, you will need to recode these routines in C++ for PX jobs. While you can use BASIC Transformer stages in PX jobs and call your original routines from these...
by ArndW
Wed Jul 28, 2010 7:40 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to find memory utiliztion of a parallel job?
Replies: 6
Views: 2277

Setting $APT_PM_PLAYER_MEMORY, $APT_PM_SHOW_PIDS and $APT_STARTUP_STATUS will give you additional information. In your case the first variable will give you some of what you would like.
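If you want to experiment at the shell level rather than adding these as environment-variable job parameters in the Administrator (the more common route), the setup could look like the sketch below; the value True follows the usual convention, but check your version's documentation:

```shell
# Assumed shell-level setup; in practice these are usually defined as
# DataStage environment-variable job parameters instead.
export APT_PM_PLAYER_MEMORY=True   # report each player's memory usage at startup
export APT_PM_SHOW_PIDS=True       # log the process ID of every player
export APT_STARTUP_STATUS=True     # report job startup progress phase by phase
```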
by ArndW
Wed Jul 28, 2010 5:34 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Null Handling
Replies: 19
Views: 4830

You will need to identify the stage actually triggering the error message, and in order to do that you need to set $APT_DISABLE_COMBINATION to "true" for one run. Without that step we can't really progress in determining your problem.
by ArndW
Wed Jul 28, 2010 4:24 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Capturing duplicates
Replies: 8
Views: 2144

Depending upon what you want to do there are several stages and methods available to you. It is not quite clear from your description what you want to achieve. In your example, the "101" row is duplicated. Do you want both records to go down one link, or the first to go down one link and s...
by ArndW
Wed Jul 28, 2010 4:21 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: any information on Pre partitions techniques in datastage
Replies: 3
Views: 1760

I haven't heard about "pre-partitioning" and can't really think of what it could be - perhaps it just means that datasets are partitioned correctly and don't need to be re-partitioned...