Hash File Tuning

Posted: Wed Oct 11, 2006 3:18 pm
by Kalyan3699
I am trying to read a 2 million row table (60 columns, 4 of which are keys) into a hashed file. It has been taking around 2 hours to load the data with the default options for the hashed file. I tried to optimize the hashed file using HFC.exe (the Hash File Calculator tool); it suggested using a file type of 14 or 18, or a modulo of 367097, but the performance is no different. Could someone suggest any options for tuning the hashed file?

Thanks

Posted: Wed Oct 11, 2006 3:36 pm
by ArndW
Pre-sizing the modulo on a dynamic file, and/or using a static hashed file of an appropriate size, works well. Also, do you have row buffering enabled? That will increase your speed. Are you sure that your hashed file write is the bottleneck? If you do a short test and change your hashed file stage into a sequential file writing to /dev/null, do you get a much better speed?
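For reference, pre-creating the hashed file at its target size can be done with the UniVerse CREATE.FILE command (for example from the Administrator command window, or an ExecTCL before-job subroutine) before the load job runs. A sketch, assuming a hypothetical file name Hash_Cust and the HFC-suggested parameters from the original post:

```
CREATE.FILE Hash_Cust DYNAMIC MINIMUM.MODULUS 367097
```

Or, for a static hashed file using the suggested type 18 with modulo 367097 and a separation of 4:

```
CREATE.FILE Hash_Cust 18 367097 4
```

Pre-sizing a dynamic file this way avoids the incremental group splits that otherwise occur throughout the load; a static file of the right type and modulo avoids splitting entirely, at the cost of having to size it correctly up front.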

Posted: Wed Oct 11, 2006 7:05 pm
by ray.wurlod
You might also try using Write Cache, having allowed the maximum possible cache size (999MB).

Posted: Wed Oct 11, 2006 7:23 pm
by Kalyan3699
Ray,
I am writing to and reading from the same hashed file within the job, so I can't select the write cache option.

Thanks

Posted: Wed Oct 11, 2006 9:23 pm
by chulett
Ok... that would have been a nice fact to mention up front. Next question, why are you doing that? You are obviously doing a wee bit more than simply reading 2 million rows into a hashed file...

Posted: Wed Oct 11, 2006 10:41 pm
by ray.wurlod
With a hashed file that size, do the initial population in a separate job, WITH write cache enabled. Be amazed!