We have the following job sequence:
Job A followed by Job B
In the Triggers tab of Job A:
Expression type = Custom - (Conditional)
Expression = @FALSE
What does this mean? Does it mean that Job A and Job B are triggered to run concurrently?
Thank you
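For intuition, the conditional-trigger semantics can be sketched in Python (an illustration only, not DataStage internals; the lambda jobs are placeholders):

```python
def run_sequence(job_a, job_b, trigger_expression):
    """Run job_a, then evaluate Job A's trigger to decide whether job_b runs.

    The jobs are sequential: job_b is only considered after job_a finishes,
    so a trigger never makes the two jobs run concurrently.
    """
    job_a()
    if trigger_expression():  # Custom - (Conditional) trigger on Job A
        job_b()
        return "both ran"
    return "only Job A ran"

# @FALSE is a constant expression that never evaluates to true,
# so the trigger never fires and Job B is never started.
result = run_sequence(lambda: None, lambda: None, lambda: False)
```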
Search found 75 matches
- Thu Nov 29, 2018 9:08 am
- Forum: General
- Topic: Trigger configured in Sequence job
- Replies: 2
- Views: 2891
- Wed Nov 29, 2017 3:05 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Performance issue while reading Unstructured stage
- Replies: 10
- Views: 6807
Our DataStage support suggested a better solution: use a wildcard (Filename*) in the source Unstructured stage. We no longer need a loop, only an elementary job reading an Unstructured stage and writing to data sets. The performance is very good compared to the former design with a loop (less than ...
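The wildcard approach can be sketched with Python's glob (the folder and file names here are invented for illustration):

```python
import glob
import os
import tempfile

# Create a scratch folder with three sample Excel files (hypothetical names).
folder = tempfile.mkdtemp()
for i in range(3):
    open(os.path.join(folder, f"Filename{i}.xlsx"), "w").close()

# Old design: a Unix script lists the files, then a sequence loop starts
# one job per file -- N job startups for N files.
files = sorted(os.listdir(folder))
per_file_invocations = len(files)

# New design: the source stage is given the wildcard "Filename*", so a
# single job invocation processes every matching file and the per-job
# startup cost is paid only once.
matches = sorted(glob.glob(os.path.join(folder, "Filename*.xlsx")))
single_invocation = 1
```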
- Tue Nov 28, 2017 2:38 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Performance issue while reading Unstructured stage
- Replies: 10
- Views: 6807
- Mon Nov 27, 2017 6:27 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Performance issue while reading Unstructured stage
- Replies: 10
- Views: 6807
Thank you for your proposal.
The job which reads the Excel file writes 4 data sets (1 data set for each Excel worksheet) in append mode. Making the job multi-instance would mean that one instance has to wait for the other because they write to the same data sets. Is there any risk of deadlock?
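A small Python sketch of the concern (hypothetical file names; a lock per data set stands in for whatever serialization guards the concurrent appends):

```python
import os
import tempfile
import threading

# Four hypothetical target data sets, one per Excel worksheet.
folder = tempfile.mkdtemp()
targets = [os.path.join(folder, f"dataset_{i}.ds") for i in range(4)]

# One lock per data set. Because both instances acquire the locks in the
# same order, one at a time, the result is waiting (serialization), not
# deadlock -- deadlock would need each instance to hold one lock while
# requesting another in the opposite order.
locks = {t: threading.Lock() for t in targets}

def instance(name: str) -> None:
    for t in targets:
        with locks[t]:                      # wait here if the other instance holds it
            with open(t, "a") as f:         # append mode, as in the job
                f.write(f"{name}\n")

threads = [threading.Thread(target=instance, args=(f"inst{i}",)) for i in range(2)]
for th in threads:
    th.start()
for th in threads:
    th.join()

# Every data set received one line from each instance, in some order.
line_counts = [sum(1 for _ in open(t)) for t in targets]
```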
- Thu Nov 23, 2017 7:14 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Performance issue while reading Unstructured stage
- Replies: 10
- Views: 6807
Ray, the elementary job runs even longer (25 sec) to read 1 Excel file (4 worksheets), writing to 4 data sets. There is no transformation, only a simple constraint in the Transformer stage. All the Excel files are in the same folder on the DataStage server. There are approximately 20 columns (Varchar...
- Thu Nov 23, 2017 7:05 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Performance issue while reading Unstructured stage
- Replies: 10
- Views: 6807
- Wed Nov 22, 2017 5:02 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Performance issue while reading Unstructured stage
- Replies: 10
- Views: 6807
Performance issue while reading Unstructured stage
I have a sequence with a Unix script that gets a list of file (xlsx) names, then passes this list to a loop. The job sequence reads each Excel file (having 4 worksheets) with an Unstructured stage and writes to 4 data sets. Each iteration (reading 1 Excel file and writing 4 data sets) takes...
- Tue Sep 13, 2016 6:21 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Auto partition on Change Capture stage
- Replies: 4
- Views: 2662
Auto partition on Change Capture stage
I assume that the correct method is to Hash partition and Sort the 2 inputs on the key columns of the Change Capture stage. Is there a risk of incorrect results when leaving Auto partition with no sort?
Thanks for your support.
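A rough Python sketch of why key-based partitioning matters for a partition-wise compare (a toy model, not the actual Change Capture implementation):

```python
# Rows with the same key must land in the same partition of both inputs,
# or the per-partition compare sees a phantom change on each side.
NUM_PARTS = 2

def hash_partition(rows, key):
    """Key-based partitioning: identical keys always go to the same partition."""
    parts = [[] for _ in range(NUM_PARTS)]
    for row in rows:
        parts[hash(row[key]) % NUM_PARTS].append(row)
    return parts

def round_robin_partition(rows):
    """Non-key partitioning: placement depends on arrival order, not key."""
    parts = [[] for _ in range(NUM_PARTS)]
    for i, row in enumerate(rows):
        parts[i % NUM_PARTS].append(row)
    return parts

before = [{"k": "A", "v": 1}, {"k": "B", "v": 2}]
after  = [{"k": "B", "v": 2}, {"k": "A", "v": 1}]   # same data, different order

def per_partition_diff(parts_before, parts_after):
    """Compare each partition pair independently, as a parallel engine does."""
    changes = 0
    for pb, pa in zip(parts_before, parts_after):
        kb = {r["k"] for r in pb}
        ka = {r["k"] for r in pa}
        changes += len(kb ^ ka)  # keys seen on only one side look like changes
    return changes

hash_changes = per_partition_diff(hash_partition(before, "k"),
                                  hash_partition(after, "k"))
rr_changes = per_partition_diff(round_robin_partition(before),
                                round_robin_partition(after))
```

With key-based hashing the two inputs diff cleanly (0 false changes); with order-dependent placement the same data produces spurious differences.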
- Thu Nov 26, 2015 10:19 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Oracle Rowid when Enabling partitioned read
- Replies: 0
- Views: 2192
Oracle Rowid when Enabling partitioned read
We have a Select joining many tables with the Oracle connector. When enabling partitioned read, we see that the number of records read is not constant. When Partitioned read method = Rowid hash or Rowid round robin, the count is correct. When Partitioned read method = Rowid range, the count is lower tha...
- Mon Jan 19, 2015 2:51 am
- Forum: General
- Topic: Export DataStage components (Include dependent items)
- Replies: 2
- Views: 3186
Export DataStage components (Include dependent items)
In the Help for the Repository Export user interface, I have not found documentation for the check box "Include dependent items". What is meant by "dependent item" in this context?
- Thu Dec 18, 2014 9:32 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Hash partition with difference in numeric key columns
- Replies: 2
- Views: 2867
Hash partition with difference in numeric key columns
I encounter the following issue with a Join stage on 5 columns (3 varchar, 2 numeric). The 2 inputs are sorted and hash partitioned on the 5 columns. For a given combination of the 5 key columns, we were expecting a match between the 2 inputs, but the DataStage job did not return a match. I found out that...
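One plausible cause (an assumption on my part, since the post is truncated) is that the numeric keys carry different textual representations of the same value, so they compare and hash as different keys. A quick Python check of the idea:

```python
from decimal import Decimal

# Two representations of the same number (invented values for illustration).
a = "123.40"
b = "123.4"

# Compared as text, the trailing zero makes them unequal, so a hash
# partitioner keyed on the raw text can send them to different partitions
# and the join never sees them side by side.
string_equal = (a == b)

# Compared numerically, they are the same value and should match.
numeric_equal = (Decimal(a) == Decimal(b))
```

Normalizing the numeric key columns to one canonical form on both inputs before the Join would rule this out.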
- Thu Dec 04, 2014 9:49 am
- Forum: General
- Topic: Acceptable number of stages in a job
- Replies: 5
- Views: 4030
Acceptable number of stages in a job
We are discussing the optimal number of stages in a job. With a complex business/functional requirement, we quickly reach a job with over 50 stages, and sometimes we end up with a failure (fork() failed, Not enough space). As a workaround, we have to split the job into 2 jobs (or more)...
- Thu Nov 13, 2014 4:43 am
- Forum: General
- Topic: Add a parameter to an existing parameter set
- Replies: 5
- Views: 4756
- Thu Nov 13, 2014 2:38 am
- Forum: General
- Topic: Add a parameter to an existing parameter set
- Replies: 5
- Views: 4756
- Wed Nov 12, 2014 5:37 am
- Forum: General
- Topic: Add a parameter to an existing parameter set
- Replies: 5
- Views: 4756
Add a parameter to an existing parameter set
We have N jobs using a parameter set PS1 (for example) with 5 parameters.
If we add a 6th parameter to parameter set PS1 to be used in a new job, do we need to recompile the former N jobs (which do not need the 6th parameter)?