Search found 7201 matches
- Thu Mar 28, 2002 6:56 am
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Tuning Hash File Creation Techniques
- Replies: 16
- Views: 2643
While it would be true to say that the hash lookup would be faster than an ORAOCI lookup, to get the full picture one must consider the time overhead of building and maintaining the hash file. I suspect this is part of Gopal's problem. Cheers -----Original Message----- From: Raymond Wurlod...
- Thu Mar 28, 2002 6:49 am
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Job Parameters
- Replies: 4
- Views: 866
There are a few options. Option 1: You could use two jobs. The first job determines the dates and calls the second job, passing them as parameters. Option 2: Don't use parameters. You mentioned getting the dates from an input record. You could use this as the primary input, then use an ODBC stage lookup returning...
- Thu Mar 28, 2002 2:45 am
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Tuning Hash File Creation Techniques
- Replies: 16
- Views: 2643
If throughput performance is critical, empirical evidence suggests that a static hashed file is the best way to go. Avoid at all costs reference lookups across a network - unless Oracle is on the same machine as the DataStage server, don't use the ORAOCI stage. A local hashed - or even distributed ha...
- Thu Mar 28, 2002 2:41 am
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Job Parameters
- Replies: 4
- Views: 866
It's not clear exactly what you want to do. If the parameters are constant throughout one run of the job, they can be set using, perhaps, a job control routine calling the DSSetParam() function. Alternatively, values could be loaded into a single-row hashed file and accessed via a reference lookup. Fo...
- Wed Mar 27, 2002 11:58 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Job Parameters
- Replies: 4
- Views: 866
Job Parameters
How do you change the value of a parameter in a job? I have start and end date parameters, which vary according to certain input records, and I need to do a lookup based on effective_date > #START_DATE# and effective_date < #END_DATE#. So these values need to be set according to an input record. I'm ...
- Wed Mar 27, 2002 11:22 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: DS version control
- Replies: 1
- Views: 404
Yes. DS Version Control is managed through the DS client-side API, so it handles DS metadata for any and all of the server varieties. Ernie -----Original Message----- From: Rui Soares [mailto:rui.soares@novabase.pt] Sent: Wednesday, March 27, 2002 6:07 PM To: datastage-users@oliver.com Subject: DS versio...
- Wed Mar 27, 2002 11:07 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: DS version control
- Replies: 1
- Views: 404
DS version control
Hi,
Is Version Control available for DataStage 5.1 on Solaris?
Happy Easter
Rui Soares
Rui Soares
NOVABASE - Data Quality - QP
Mail : Rui.Soares@Novabase.PT
Tel (NB) : 351 21 383 65 92
Tlm : 351 93 620 15 98
- Wed Mar 27, 2002 11:05 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Tuning Hash File Creation Techniques
- Replies: 16
- Views: 2643
Hi, I do need all the columns in the lookup. Under normal circumstances, for our loads we will have no more than 3 million rows. This is an exception, for the first time. I may go for an OCI lookup this time alone. Thanks, Gopal "Raymond Wurlod" To: Subject: RE: Tuning Hash File Creation Techniqu...
- Wed Mar 27, 2002 11:01 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Upgrade DS to 5.1 - Solaris
- Replies: 2
- Views: 430
Basically, it's automatic. When you install the new version over the old version (i.e., upgrade the server), your projects will be automatically upgraded as well. Nothing for you to do except test the heck out of everything to make sure it still works. The biggest problem I've seen is if you are running Vers...
- Wed Mar 27, 2002 10:45 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Upgrade DS to 5.1 - Solaris
- Replies: 2
- Views: 430
Upgrade DS to 5.1 - Solaris
Hi, we are about to upgrade our project from DataStage 4.1 on Solaris (a 32-bit machine) to the new version, DataStage 5.1. Is any process necessary to upgrade our projects, or does the upgrade happen directly? Happy Easter Rui Soares Rui Soares NOVABASE - Data Quality - QP Mail : Rui.Soares@Novabase.PT Tel (NB) : 35...
- Wed Mar 27, 2002 10:34 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Tuning Hash File Creation Techniques - Correcting Misconcept
- Replies: 1
- Views: 2270
Tuning Hash File Creation Techniques - Correcting Misconcept
This is a topic for an orphaned message.
- Wed Mar 27, 2002 10:34 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Tuning Hash File Creation Techniques - Correcting Misconcept
- Replies: 1
- Views: 2270
Keys are stored in the SAME file as the data. The calculation in my earlier note assumed that the key column was included in the average record size (which is the more common practice in table space calculation). Perhaps you are confusing DataStage (UniVerse) hashed file storage with UniData, in whi...
- Wed Mar 27, 2002 10:19 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Tuning Hash File Creation Techniques
- Replies: 16
- Views: 2643
Average record size = 800 bytes
Number of records = 20,000,000
Storage overhead per record = 14 bytes (20 bytes if 64-bit pointers are being used)
Effective record size = 814 bytes
Total data to store = 20,000,000 * 814 bytes = 16,280,000,000 bytes
This exceeds the 2GB limit for a single hashed file...
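The sizing arithmetic above can be checked with a short sketch (a minimal illustration in Python; the figures come from the post, the variable names and helper constants are mine, not DataStage terminology):

```python
# Hashed file sizing sketch using the figures from the post above.
# The 14-byte per-record overhead and the 2 GB single-file limit are
# taken from the post; all names here are illustrative.

RECORD_SIZE = 800              # average record size, in bytes
NUM_RECORDS = 20_000_000
OVERHEAD = 14                  # per-record overhead (20 with 64-bit pointers)
FILE_LIMIT = 2 * 1024**3       # 2 GB limit for a single hashed file

effective = RECORD_SIZE + OVERHEAD    # 814 bytes per record
total = NUM_RECORDS * effective       # 16,280,000,000 bytes

print(f"Total data to store: {total:,} bytes")
print(f"Exceeds 2GB single-file limit: {total > FILE_LIMIT}")
```

With the 20-byte overhead of 64-bit pointers, the total grows further, so either way the data does not fit in a single hashed file, which is what motivates the distributed-file discussion.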
- Wed Mar 27, 2002 10:05 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Tuning Hash File Creation Techniques
- Replies: 16
- Views: 2643
So, just for clarification for a newbie: divide the bucket size (4096) by the key size (800) to get the keys per bucket, which is then used to calculate the buckets per hash table (4,000,000) by dividing the number of expected records (20,000,000) by the keys-per-bucket ratio (5). Add a few (10% to 20%?) for sa...
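The arithmetic the question walks through can be sketched as follows (an illustration only; note that the 800 bytes in the earlier post is the average record size, not just the key, and the names and 20% headroom figure here are my own assumptions):

```python
# Sketch of the group ("bucket") arithmetic from the question above.
# Names are illustrative, not DataStage/UniVerse terminology.

GROUP_SIZE = 4096        # bytes per group ("bucket")
RECORD_SIZE = 800        # average record size (not just the key)
NUM_RECORDS = 20_000_000
HEADROOM = 0.20          # the "10% to 20% for safety" from the post

records_per_group = GROUP_SIZE // RECORD_SIZE       # 5
groups_needed = NUM_RECORDS // records_per_group    # 4,000,000
suggested_modulus = int(groups_needed * (1 + HEADROOM))

print(records_per_group, groups_needed, suggested_modulus)
```

This reproduces the 5 keys per bucket and 4,000,000 buckets mentioned in the question; the final figure simply adds the suggested safety margin on top.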
- Wed Mar 27, 2002 10:00 pm
- Forum: Archive of DataStage Users@Oliver.com
- Topic: Tuning Hash File Creation Techniques
- Replies: 16
- Views: 2643
The first and obvious response to this is: do you REALLY need all 15 columns for your reference lookup? One of the secrets of DataStage performance (or any computer performance, for that matter) is not to do anything you don't have to. Most developers simply stick all the columns from the source table ...