capacity planning

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.



Post by ArndW »

There are various numbers floating around regarding GB per CPU, and I've found that the true production numbers depend greatly upon how the application has been coded, both for server and for parallel systems.

There is no upper bound or cap on CPU usage, but again, depending on your application and coding, you might not even be CPU bound and might instead require faster disks.

Do you have a running system already, with measurements that you can use? Generally scalability, while not 100% linear, follows a straight line until some system limit is reached (e.g. bandwidth to disk, number of CPUs in a frame, upper bound on installable memory), so your current system is the best starting point for capacity planning.
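To illustrate that idea, here is a minimal sketch of sizing by linear extrapolation from a measured baseline. The function name, the headroom factor, and the example numbers are all hypothetical; the point is simply that you scale out from what your current system actually measures, not from a rule-of-thumb GB-per-CPU figure.

```python
import math

def estimate_cpus_needed(current_gb: float, current_cpus: int,
                         target_gb: float, headroom: float = 0.2) -> int:
    """Estimate CPUs for a target daily volume by linear extrapolation
    from a measured baseline, plus a safety headroom for non-linearity."""
    per_cpu = current_gb / current_cpus        # measured GB handled per CPU
    raw = target_gb / per_cpu                  # straight-line extrapolation
    return math.ceil(raw * (1 + headroom))     # add headroom, round up

# Example: a measured system doing 40 GB/day on 4 CPUs, sized for 100 GB/day
print(estimate_cpus_needed(40, 4, 100))  # -> 12
```

The headroom factor is there because, as noted above, scaling is not 100% linear and stops being linear entirely once some system limit (disk bandwidth, memory, frame size) is hit.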
Post by kduke »

I would try to stay under 100% CPU. You can also run out of RAM: if you run top and see that you are swapping, you need more RAM. If you are waiting on I/O, you need more scratch and disk pools.

Common-sense things like limiting the number of times you sort your data will improve performance and reduce the amount of scratch space and RAM used.
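As a small sketch of the swapping check above: on Linux, vmstat reports swap-in/swap-out in its si/so columns, and sustained non-zero values there mean you are swapping. The column positions below follow the common Linux vmstat layout and should be verified against your own platform; the function and sample line are illustrative only.

```python
def is_swapping(vmstat_line: str) -> bool:
    """Return True if the si (swap-in) or so (swap-out) field of a
    vmstat data line is non-zero, i.e. the box is actively swapping.
    Assumes the usual Linux layout: r b swpd free buff cache si so ..."""
    fields = vmstat_line.split()
    si, so = int(fields[6]), int(fields[7])
    return si > 0 or so > 0

# A vmstat-style data line with non-zero si/so (values are made up)
sample = "1 0 2048 81920 10240 409600 12 34 100 200 500 900 10 5 80 5 0"
print(is_swapping(sample))  # -> True
```

A steady stream of True here, while jobs run, is the "you need more RAM" signal; high values in the wa (I/O wait) column instead point at the disk and scratch-pool side.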
Mamu Kim

Post by ray.wurlod »

More is better.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.