I've been asked this question and I can't seem to find it mentioned anywhere...
Does anyone know the physical (or practical) size limit on a single (non-key) column in a hashed file?
Maximum field size in a hashed file?
Moderators: chulett, rschirm, roy
-craig
"You can never have too many knives" -- Logan Nine Fingers
Practical is easy, it's how much degradation you're willing to sustain reading all of that data from the hash, or potentially parking it in the hash. You also have to deal with the large record size, which means you're going to want to hand tune a static hash file. Still looking for that CLOB solution?
Ray can give you the formulas, but it will sound like he's describing the orbital mechanics of a geostationary satellite. My guess is you'll find that since a hashed file is referenced by reading the entire contents of the row, anything you do to slim down the row speeds up the reference.
Kenneth Bland
Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
kcbland wrote:Still looking for that CLOB solution?
Umm... perhaps.
I think it's more related to troubleshooting the CLOB issue than using it as part of the solution. I think.
Still curious what the limit is, other than that 2GB limit we all know and love.
-craig
"You can never have too many knives" -- Logan Nine Fingers
-
ray.wurlod
- Participant
- Posts: 54595
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
There is no documented limit. In theory, an oversized record can be as long as necessary, since it occupies a daisy-chain of oversized record buffers. Each of these has a pointer to the next; the pointer size is governed by the 32-bit/64-bit setting of the file.
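To picture the daisy-chain, here is an illustrative sketch only (the buffer size, layout, and helper name are assumptions, not UniVerse internals): each fixed-size buffer gives up a few bytes to the next-pointer, so a 64-bit file needs marginally more buffers for the same record.

```python
# Illustrative model of an oversized record stored as a daisy-chain of
# fixed-size buffers, where each buffer reserves room for a pointer to
# the next buffer. Sizes here are assumptions for demonstration.

def buffers_needed(record_len, buffer_size=2048, pointer_bytes=4):
    """Number of chained buffers needed to hold record_len bytes,
    given that each buffer loses pointer_bytes to the next-pointer."""
    payload = buffer_size - pointer_bytes
    return -(-record_len // payload)  # ceiling division

# A 3MB record like the one mentioned below:
three_mb = 3 * 1024 * 1024
print(buffers_needed(three_mb, pointer_bytes=4))  # 32-bit file
print(buffers_needed(three_mb, pointer_bytes=8))  # 64-bit file needs a few more
```

The point of the sketch is that the chain length grows linearly with record size, which is why there's no hard documented ceiling, only a performance one.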
In practice, I've never seen the limit hit. I have seen records over 3MB, if that's any help. Performance? Fuhgeddaboudit!
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Thanks. Luckily, not really worried about performance as we're only dealing with small sets of data at any one time.
For what it's worth, it does seem to be working fine when setting a data field up in the hashed file as a very large LongVarchar and putting up to 16 to 20K worth of "raw text" into the field.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Craig,
I remember back in Prime Information days we had a customer who complained about their performance reading from and writing to Information. After looking at just about every attribute on the OS, I finally ended up having to go to the customer site to try to find the cause. Lo and behold, after passing several security checks on-site (it was Krupp, and the system was storing ballistic information for some new rifled projectile system), the performance was indeed terrible. 30 minutes to read a record, in fact! It turned out that their DB designer had put the whole database into one record, using multivalues.
Since a string is represented in a linked list format (remember the Fibonacci Series?) the size limit becomes (a) process virtual memory space to store the string and (b) disk file size limitations.
The 3MB record was a similar but different story - they got the syntax of CREATE.FILE wrong and created the hashed file as static with modulo 1. Yuk. The slow deterioration in performance was a bit of a giveaway.
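For reference, hand-tuning a static hashed file happens at creation time. The following is a sketch of the UniVerse TCL syntax; the file name and sizing numbers are illustrative assumptions, not values from the thread:

```
CREATE.FILE BIG.RECORDS 18 4001 4
```

That requests a static type-18 file with a modulo of 4001 (a prime) and a separation of 4. Leaving the modulo at 1, as in the anecdote above, puts every record into a single group, which produces exactly that slow, steady deterioration as the file grows. A dynamic file (`CREATE.FILE BIG.RECORDS DYNAMIC`) sidesteps the sizing question by resizing itself.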
Having ramped up the performance immensely, they wanted us to come back six months later to try to achieve a similar quantum leap! Sometimes ya just can't win.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.