Search found 7201 matches
- Mon Dec 16, 2002 1:28 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: DSSetParm from within a job
- Replies: 10
- Views: 1420
DSSetParm from within a job
Hi, I might have a bigger problem than I assumed. In one of my sequences I'm creating a unique job key, and this key is frequently used in my other sequences. Now that I have created the key (using the transform KeyMgtGetNextValueConcurrent), how can I pass it back to my job flow, so my other seque...
- Mon Dec 16, 2002 10:06 am
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Number of rows property
- Replies: 3
- Views: 642
Hi, if you think the abnormal termination is caused by the Aggregator stage, try sorting the records before the Aggregator and specifying the sort key position of the column in its input properties. Hope this will help you. Riccardo
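The advice above rests on a general point: when the input arrives pre-sorted on the grouping key, an aggregator can emit each group as soon as the key changes, instead of buffering every group in memory at once. A minimal Python sketch of that streaming pattern (illustrative only, not DataStage code):

```python
from itertools import groupby

def streaming_sum(rows):
    """Aggregate (key, value) rows that are already sorted by key.

    Because the input is sorted, each group is contiguous, so only one
    group needs to be held in memory at a time -- the same property a
    sorted input gives an aggregation stage.
    """
    for key, group in groupby(rows, key=lambda r: r[0]):
        yield key, sum(v for _, v in group)

rows = [("a", 1), ("a", 2), ("b", 5), ("c", 3), ("c", 4)]
print(list(streaming_sum(rows)))  # [('a', 3), ('b', 5), ('c', 7)]
```

Note that `groupby` only groups *adjacent* equal keys, which is exactly why the sort must happen first; unsorted input would split a key into several partial groups.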
- Mon Dec 16, 2002 8:38 am
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Number of rows property
- Replies: 3
- Views: 642
Years ago, when we set up our project and were working with DS 3.6, we also had a lot of problems with the Aggregator Stage, especially when it had to process more than a certain number of rows. It crashed every time at the same place, after a certain number of rows. In most cases we then eliminated the Aggregat...
- Sat Dec 14, 2002 6:38 am
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Number of rows property
- Replies: 3
- Views: 642
Number of rows property
Hi DataStage users, can we set the number of rows to be processed in the Aggregator Stage? I appreciate your suggestions. Regards, R. Anbuchelian
- Sat Dec 14, 2002 4:57 am
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Abnormal termination message
- Replies: 0
- Views: 464
Abnormal termination message
Hi all, when we run a summary job in DataStage, we get "Abnormal termination while processing Aggregator Stage". This is the only error message shown in the stage log. We are using DataStage 5.2 on Unix. Please give your valuable suggestions to overcome this problem. Regards, R. Anbuchelian
- Fri Dec 13, 2002 10:20 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Phantom Error - Access violation.
- Replies: 2
- Views: 1347
The job fails every time in our case. But since the results are correct and there are other pressing priorities, I am not paying too much attention to it. I will try to enable tracing and see over the weekend if time permits. We are running Oracle 8.1 on Win2K. YN --- David Barham wrote: > DSP.Clo...
- Fri Dec 13, 2002 9:42 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Phantom Error - Access violation.
- Replies: 2
- Views: 1347
DSP.Close is the routine in a plug-in that is called to close it. In your case, it would appear that it is closing your ORAOCI8 plug-in. I don't have an answer for you. I can only commiserate, as we frequently get the same error in one particular suite of jobs. Still, I am glad to hear that we are not ...
- Fri Dec 13, 2002 8:32 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Phantom Error - Access violation.
- Replies: 2
- Views: 1347
Phantom Error - Access violation.
Hi, We have a job here that aborts with this message: DataStage Job 478 Phantom 4044 Program "DSP.Close": Line 94, Exception raised in GCI subroutine: Access violation. Attempting to Cleanup after ABORT raised in stage FRAM1002NCReport..oraNonCompliantQry DataStage Phantom Aborting with @ABORT.CODE ...
- Fri Dec 13, 2002 7:57 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: hash file lookup failure
- Replies: 5
- Views: 711
Hi Gopal, we faced a similar problem in an earlier project. The jobs used to fail sometimes and sometimes worked just fine. In fact, there was no pattern. We also had the Create File and Clear File Before Writing options checked. The only difference was that we had the Create File options unchanged (dynamic)...
- Fri Dec 13, 2002 6:12 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: hash file lookup failure
- Replies: 5
- Views: 711
Thanks for your email, Kasia. Our hash files are loaded by DataStage jobs, version 4.x. We have turned on the Clear File Before Writing and Create File options. We have also set the Minimum Modulus parameter to 25000 and the Group Size to 2. Let me know if you need more information. Thanks in advanc...
- Thu Dec 12, 2002 8:49 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: hash file lookup failure
- Replies: 5
- Views: 711
4.x. Kasia Lewicka wrote on 12/11/2002 11:18 PM: Which version of DataStage? Kasia. At 17:43 10/12/2002, you wrote: >Hi, > >We ran into a problem in our project. We had a load run 2 days ago. We >create around 7 hash files using job1 ...
- Thu Dec 12, 2002 6:11 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Restart the job from the last commit point
- Replies: 7
- Views: 1397
We have different experiences. I forgot exactly which log holds the data it uses to restart, but it does hold onto that data. I have had to query it in the past. I have also had to use the functionality before at a client. It was helpful during the development and test phase of the project, but was ...
- Thu Dec 12, 2002 5:26 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Restart the job from the last commit point
- Replies: 7
- Views: 1397
Ascential teaches a "DataStage Best Practices" class that discusses topics like restart, reusability, etc. In general, designing the job/process flow modularly and using facilities like shared containers and job templates (these facilities are significantly improved in later releases of DataStage (e...
- Thu Dec 12, 2002 5:23 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Restart the job from the last commit point
- Replies: 7
- Views: 1397
Exactly. Our corporate organization uses Informatica as its standard ETL tool, but we decided on DataStage. IMHO, you are always better off coding specific restart capabilities that are aware of the data being loaded and the methodologies being used to load that data, rather than relying on some ge...
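The argument for hand-rolled restart logic can be made concrete: record a checkpoint after each committed batch, and on restart skip everything at or before the checkpoint. A minimal Python illustration of the pattern (all function names here are hypothetical placeholders, not any DataStage or Informatica API):

```python
def load_with_restart(rows, commit, read_checkpoint, write_checkpoint, batch_size=2):
    """Load rows in batches, checkpointing the index of the last committed row.

    On restart, rows up to the stored checkpoint are skipped, so work
    resumes from the last commit point rather than from the beginning.
    `commit`, `read_checkpoint`, and `write_checkpoint` are placeholders
    for whatever the target system and checkpoint store actually are.
    """
    start = read_checkpoint()  # index of the first row not yet committed
    batch = []
    for i, row in enumerate(rows):
        if i < start:
            continue  # already committed in a previous run
        batch.append(row)
        if len(batch) == batch_size:
            commit(batch)
            write_checkpoint(i + 1)
            batch = []
    if batch:  # commit any trailing partial batch
        commit(batch)
        write_checkpoint(len(rows))

# First run: checkpoint starts at 0, everything is loaded.
committed, cp = [], {"v": 0}
load_with_restart([1, 2, 3, 4, 5], committed.extend,
                  lambda: cp["v"], lambda n: cp.update(v=n))
print(committed, cp["v"])  # [1, 2, 3, 4, 5] 5
```

Because the checkpoint is written only after the commit succeeds, a crash between the two at worst re-commits one batch on restart, which is exactly why this kind of logic must understand whether the load is idempotent for the data in question, the point the post above is making.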
- Thu Dec 12, 2002 4:45 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Restart the job from the last commit point
- Replies: 7
- Views: 1397
This overstates the abilities of Informatica. PowerCenter achieves the reload by re-executing the original SQL and (assuming you have been logging the load) not writing until it reaches the same point where the original load failed. It relies on the returned data set being the same in both cases a...