There are various numbers floating around regarding GB per CPU, and I've found that true production numbers depend greatly on how the application has been coded, both for server and for parallel systems.
There is no upper bound or cap on CPU usage but, again, depending on your application and coding you might not even be CPU bound; you might instead require faster disks.
Do you have a running system already, with measurements you can use? Generally, scalability, while not 100% linear, follows a straight line until some system limit is reached (e.g. bandwidth to disk, the number of CPUs in a frame, an upper bound on installable memory), so your current system is the best starting point from which to do capacity planning.
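That extrapolation can be sketched in a few lines: project measured throughput linearly with CPU count, then cap it at the first hard system limit. All the numbers and the disk-bandwidth figure below are hypothetical, just to show the shape of the calculation.

```python
def projected_throughput(measured_rows_per_sec, current_cpus, target_cpus,
                         disk_limit_rows_per_sec):
    """Project throughput linearly with CPU count, capped at the disk
    bandwidth limit (the first hard system limit in this sketch)."""
    linear = measured_rows_per_sec * (target_cpus / current_cpus)
    return min(linear, disk_limit_rows_per_sec)

# Hypothetical measurements: 50,000 rows/s on 4 CPUs,
# disk subsystem tops out at 160,000 rows/s.
print(projected_throughput(50_000, 4, 8, 160_000))   # still scales linearly
print(projected_throughput(50_000, 4, 16, 160_000))  # hits the disk limit
```

Past the point where the cap kicks in, adding CPUs buys nothing; that is where the money goes into disks instead.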
capacity planning
I would try to stay under 100% CPU. You can also run out of RAM: if you run top and see swapping, you need more RAM. If you are waiting on I/O, you need more scratch and disk pools.
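That triage can be sketched as a small decision function. The metric names and the I/O-wait threshold below are assumptions, modeled loosely on the fields top and vmstat report (swap-in/out activity and percent time waiting on I/O), not on any DataStage API.

```python
def diagnose(cpu_pct, swap_in_kb, swap_out_kb, io_wait_pct):
    """Rough capacity triage: swapping means more RAM, high I/O wait
    means faster disks or more scratch/disk pools, otherwise check CPU."""
    findings = []
    if swap_in_kb > 0 or swap_out_kb > 0:
        findings.append("swapping: add RAM")
    if io_wait_pct > 20:  # hypothetical threshold for "waiting on I/O"
        findings.append("I/O bound: add scratch/disk pools or faster disks")
    if cpu_pct > 90 and not findings:
        findings.append("CPU bound: add CPUs or reduce work per row")
    return findings or ["no obvious bottleneck"]

print(diagnose(cpu_pct=95, swap_in_kb=0, swap_out_kb=0, io_wait_pct=5))
print(diagnose(cpu_pct=60, swap_in_kb=120, swap_out_kb=80, io_wait_pct=35))
```

The point is the ordering: rule out swapping and I/O wait before concluding you are CPU bound, since a box at 100% CPU that is also swapping needs RAM first, not more processors.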
Common sense things, like limiting the number of times you sort your data, will improve performance and reduce the amount of scratch space and RAM used.
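The redundant-sort point can be illustrated with a toy sketch (plain Python standing in for job design; the data is made up): sort once and reuse the result, rather than paying the sort cost, and its scratch space, for every downstream consumer.

```python
records = [("b", 2), ("a", 1), ("c", 3)]

# Wasteful: each consumer triggers its own sort (extra CPU and scratch).
first_wasteful = sorted(records)[0]
last_wasteful = sorted(records)[-1]

# Better: sort once, then every downstream step reuses the sorted data.
ordered = sorted(records)
first, last = ordered[0], ordered[-1]
print(first, last)
```

In a parallel job the same principle applies at a larger scale: a sort stage spills to scratch disk, so each sort you eliminate saves both CPU and scratch-pool capacity.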
Mamu Kim
ray.wurlod