Handling KeyMgtGetNextValue() while moving machines - Post2
Moderators: chulett, rschirm, roy
-
- Premium Member
- Posts: 51
- Joined: Tue Feb 05, 2002 3:06 pm
Handling KeyMgtGetNextValue() while moving machines - Post2
Greetings.
I'm trying to move KeyMgtGetNextValue() values from one server to another. So, for a start I would like to at least list all of the values. I found a thread titled, "Handling KeyMgtGetNextValue() while moving machines" that has the answer I'm looking for. But I think I need a little extra help. Here is the solution that Craig Hulett posted:
<< If you look at the source for the routine in question, you can see that it uses a Hash file called "SDKSequences". In the absence of some back-door Universe methodology for transferring the contents from one Project to another, I would think it would be fairly straightforward to write two DataStage jobs - one to dump its contents to a flat file and then another to populate it from that flat file after you transfer it to the new server.
I'd run KeyMgtGetNextValue once on your new machine so that it can create it with the specs it wants, either that, or you can create it yourself using the same command as in the source code:
Code:
EXECUTE "CREATE.FILE SDKSequences 2 1 1"
FYI - This is from a Version 6 server.>>
I tried to find this hashed file and found it at "E:\Ascential\DataStage\Projects\PROD\SDKSequences", but it doesn't look like a normal hashed file to me. Am I missing something? A little extra hint would be appreciated.
Thanks,
Scott
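The dump job Craig describes could also be sketched directly in DataStage BASIC. This is a minimal sketch only, not code from the original post: the dump file name ("SDKSequences.dump"), the comma delimiter, and the assumption that the next value sits in field 1 of each record are all mine - verify the record layout against the routine source on your release.
Code:
```
* Sketch only: dump SDKSequences to a flat file.
OPEN "SDKSequences" TO SEQ.FILE ELSE STOP "Cannot open SDKSequences"
OPENSEQ "SDKSequences.dump" TO OUT.FILE ELSE
   CREATE OUT.FILE ELSE STOP "Cannot create dump file"
END
SELECT SEQ.FILE
LOOP
   READNEXT ID ELSE EXIT
   READ REC FROM SEQ.FILE, ID THEN
      * Each record holds the next value for the sequence named by ID
      WRITESEQ ID : "," : REC<1> TO OUT.FILE ELSE STOP "Write failed"
   END
REPEAT
CLOSESEQ OUT.FILE
```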
As Craig pointed out:
1. Create the first job (on the existing server):
Hash --> Xfm --> Seq
(SDKSequences)
Have two columns defined in the hashed file, e.g.:
ID VARCHAR(10),
VALUE INTEGER(10)
Run this job, then transfer the sequential file to the 2nd server.
2. Create a 2nd job (on the new server):
Seq --> Xfm --> Hash
Read the sequential file created in step 1 and write to the SDKSequences file.
The easiest way to create an SDKSequences file is to test the routine KeyMgtGetNextValue which will create the file, if it does not exist already.
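The reload in step 2 can likewise be sketched in BASIC. Again a sketch under stated assumptions, not the author's job: the dump file name and the comma-delimited ID,VALUE layout match the hypothetical dump sketch, and the SDKSequences file is assumed to already exist (e.g. created by testing KeyMgtGetNextValue as suggested above).
Code:
```
* Sketch only: reload SDKSequences from the flat-file dump.
* Test KeyMgtGetNextValue once first so the file exists with the right type.
OPEN "SDKSequences" TO SEQ.FILE ELSE STOP "SDKSequences not found"
OPENSEQ "SDKSequences.dump" TO IN.FILE ELSE STOP "Dump file not found"
LOOP
   READSEQ LINE FROM IN.FILE ELSE EXIT
   ID = FIELD(LINE, ",", 1)
   NEXT.VALUE = FIELD(LINE, ",", 2)
   WRITE NEXT.VALUE TO SEQ.FILE, ID
REPEAT
CLOSESEQ IN.FILE
```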
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Re: Handling KeyMgtGetNextValue() while moving machines - Post2
scottopizza wrote:
<< EXECUTE "CREATE.FILE SDKSequences 2 1 1"
FYI - This is from a Version 6 server.
I tried to find this hashed file and found it at "E:\Ascential\DataStage\Projects\PROD\SDKSequences", but it doesn't look like a normal hashed file to me. Am I missing something? A little extra hint would be appreciated. >>
Little extra hint: It's not a 'normal' dynamic Type 30 hashed file. The '2' right after the 'SDKSequences' in the command above tells you it is created as a Type 2. Not really an issue in this situation unless you plan on creating the hashed file outside of the routine.
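For reference, the arguments of that command break down as follows. The annotated dynamic-file alternative ("MyHash") is a hedged illustration from memory, not from this thread - check the CREATE.FILE syntax on your own release:
Code:
```
* "2 1 1" = static hashed file: type 2 (hashing algorithm),
* modulo 1, separation 1 - hence it looks different on disk
* from a dynamic file's DATA.30/OVER.30 structure.
EXECUTE "CREATE.FILE SDKSequences 2 1 1"
* A 'normal' dynamic (Type 30) hashed file would be created with
* something like this instead (syntax may vary by release):
* EXECUTE "CREATE.FILE MyHash DYNAMIC"
```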
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers
Hashed files should not be considered persistent, meaning these values should be recreated on every job run:
Select max(surrogate_key) from dimension_table -> SDKSequences
This is a job included in EtlStats called reseed_sequences. The surrogate key and the table name are both parameters, so the SQL looks a little different because of the # marks. Download EtlStats and load this job; it will save you some time. For most tables the surrogate key is a primary index, so this is very fast to run.
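The reseed idea boils down to something like the sketch below. This is not the actual EtlStats job: the variable names, the sequence-name key, and the hand-waved database call are all assumptions made for illustration.
Code:
```
* Sketch of the reseed idea only -- not the reseed_sequences job itself.
* SurrogateKey and TableName stand in for the #-delimited job parameters.
SQL.STMT = "SELECT MAX(" : SurrogateKey : ") FROM " : TableName
* ... run SQL.STMT through your database stage or BCI call here,
* ... and assume MAX.VAL now holds the result ...
OPEN "SDKSequences" TO SEQ.FILE ELSE STOP "Cannot open SDKSequences"
WRITE MAX.VAL TO SEQ.FILE, SequenceName
```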
Mamu Kim
There have been a number of posts on resetting sequences; this one, for example, provides a function that can be used.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
kduke wrote:
<< Hashed files should not be considered persistent, meaning these values should be recreated on every job run. >>
...except, of course, for the Repository tables and SDKSequences.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
-
- Participant
- Posts: 47
- Joined: Wed Apr 12, 2006 12:13 pm
The same way you'd reset a sequence from any other client application.
Perhaps use the "after SQL" in a job that does something else or, better, in a job that does nothing else. Then you can call the job ResetSequencexxxxxx as appropriate.
Perhaps use the "after SQL" in a job that does something else or, better, in a job that does nothing else. Then you can call the job ResetSequencexxxxxx as appropriate.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.