Hi there,
Is it necessary to have separate mount points for the scratch disks and resource disks?
Will it affect performance if we define all resource and scratch disks on the same mount point?
Please give some suggestions.
Thanks
Mount points for scratch disk and resource disk
Moderators: chulett, rschirm, roy
Hi craig,
With a mount point being identical to a hard drive or an array of RAID volumes, I am with you entirely.
But what about our ultra-modern SAN concepts, where even the admins have virtually no control over which disks are actually providing disk space for a specific mount point (because, in a way, they all do)?
It might be interesting to test this out, really. But - alas - I am afraid I'll not have the time to do that for a while...
"It is not the lucky ones who are grateful.
It is the grateful who are happy." - Francis Bacon
It's generally recommended that scratch disk/sort disk (if you define pools for sorting) be local storage or dedicated SAN (not shared SAN) on each physical server. The primary goal is to provide high-performance read/write access while avoiding I/O contention with other users. Avoid NFS for this storage as much as possible.
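As a sketch of what separate mount points look like in practice, a DataStage parallel configuration file (the file referenced by APT_CONFIG_FILE) can declare the resource disk and scratch disk on different file systems; the node name and paths below are hypothetical examples, not a prescribed layout:

```
{
  node "node1"
  {
    fastname "etl_server"
    pools ""
    /* persistent engine storage (e.g. datasets) on shared/SAN storage */
    resource disk "/data/ds/resource" {pools ""}
    /* temporary sort/buffer space on fast local storage */
    resource scratchdisk "/scratch/ds" {pools ""}
  }
}
```

Pointing `resource scratchdisk` at a local file system and `resource disk` at the shared storage keeps sort and buffer I/O off the contended SAN paths discussed above.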
Regarding shared SAN resources, the scenario you describe is becoming more prevalent in the IT world. When possible, it is to your advantage to work closely with the storage teams to come to a common ground. As long as you can articulate to them how SAN storage allocation can affect the performance of your IS environment, whether it be the physical allocation, LUN allocations or bandwidth, you should be able to arrive at a configuration which provides satisfactory performance.
Regards,
- james wiles
All generalizations are false, including this one - Mark Twain.
Admins do have control over SAN storage; they just choose to lump it all into one big logical volume because it makes administration easier. This does hurt performance. All of this type of storage in DataStage should be considered temporary: it is built on the fly during ETL processing and should be easily recreated by rerunning an ETL job. If you are using datasets or hashed files as persistent storage, then you have a design problem.
If all of this storage is deemed temporary, then RAID and other redundant storage is overkill and not needed; less expensive, faster storage is more important.
Admins are not giving you the best performance by allocating this kind of storage.
Mamu Kim