So you’ve decided that the limiting factor on the performance of your application is disc I/O: the rate at which data is read from and written to the discs in your dedicated server. It’s time to follow the I/O path from the OS, down to the volume level, down to the individual disc, for a world of adventure…
Optimise the web application!
If we reduce the number of I/O requests coming from your application in the first place, we free up capacity for the demanding activities we can’t change.
Start by looking at the level of logging your application performs. Is it all necessary? Could we turn it down for all those times when we aren’t debugging?
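As a quick sketch of the payoff (the logger and file names here are made up for illustration), the same code path generates no file I/O at all once the level is raised above DEBUG:

```python
import logging
import os
import tempfile

# Hypothetical app logger writing to a scratch file.
log_path = os.path.join(tempfile.mkdtemp(), "app.log")
logger = logging.getLogger("myapp")
logger.addHandler(logging.FileHandler(log_path))

logger.setLevel(logging.DEBUG)
for i in range(1000):
    logger.debug("handled request %d", i)     # each record hits the log file
size_debugging = os.path.getsize(log_path)

logger.setLevel(logging.WARNING)
for i in range(1000):
    logger.debug("handled request %d", i)     # filtered out: no file I/O at all
size_production = os.path.getsize(log_path) - size_debugging

print(size_debugging, size_production)
```

The filtered calls are rejected before any formatting or writing happens, so the saving is both disc writes and CPU.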
In your database, use indexes to access data blocks rather than bulk scans – this will generate fewer read requests.
Cluster hot data blocks together – this will generate fewer disc head seeks.
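You can see the index effect directly with SQLite’s query planner (table and index names below are just examples): the same query is a full-table SCAN without an index and a SEARCH through the index with one.

```python
import sqlite3

# Example table: 10,000 rows, repeated customer values.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, f"cust{i % 100}") for i in range(10000)])

# Without an index: the planner must scan every row.
plan_scan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'cust7'"
).fetchone()[-1]

# With an index: the planner searches just the matching blocks.
conn.execute("CREATE INDEX idx_customer ON orders (customer)")
plan_search = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'cust7'"
).fetchone()[-1]

print(plan_scan)    # e.g. "SCAN orders"
print(plan_search)  # e.g. "SEARCH orders USING INDEX idx_customer ..."
```

Fewer rows touched means fewer blocks read, which is exactly the read-request reduction we’re after.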
In a web server you can never have too much RAM. The more you have, the more will be allocated to caching data in memory, which in turn reduces the number of requests that reach the discs.
Increasing RAM also increases the space available for dirty buffers, so it helps coalesce writes as well.
In the event that a server needs more RAM than is available, it will eventually start swapping data from memory out onto the discs. Essentially, the discs start to act as RAM. Since the discs are orders of magnitude slower, this swapping causes a severe performance hit to the server.
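On Linux you can check whether a box has started paging by comparing SwapTotal against SwapFree in /proc/meminfo. A minimal sketch (the sample text below is made-up example data, not real readings):

```python
def swap_used_kb(meminfo_text):
    """Return swap in use, in kB, from /proc/meminfo-style text."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if key in ("SwapTotal", "SwapFree"):
            fields[key] = int(rest.split()[0])  # values are reported in kB
    return fields["SwapTotal"] - fields["SwapFree"]

# On a real server you would pass open("/proc/meminfo").read() instead.
sample = "SwapTotal:     2097148 kB\nSwapFree:      1048574 kB\n"
print(swap_used_kb(sample))
```

Any sustained non-zero swap usage on a busy server is a sign you’ve crossed the line this section warns about.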
RAM is cheap these days so add it liberally.
Battery backed write cache
Use a battery backed write cache for applications that perform an intensive stream of writes to a single location. This will reduce the number of writes that physically hit the disc.
A battery backed write cache will also help with transaction processing on an ACID database, since it hides the write latency.
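The effect a write cache exploits is easy to demonstrate in software: coalescing many small writes into one physical flush instead of forcing each one to the platter. A sketch with a scratch file (timings will vary wildly by hardware):

```python
import os
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "journal.dat")

def write_records(n, fsync_each):
    with open(path, "wb") as f:
        for i in range(n):
            f.write(b"record %04d\n" % i)
            if fsync_each:
                f.flush()
                os.fsync(f.fileno())   # force this single write to the disc
        f.flush()
        os.fsync(f.fileno())           # one flush covers the whole batch

start = time.perf_counter()
write_records(200, fsync_each=True)
per_write = time.perf_counter() - start

start = time.perf_counter()
write_records(200, fsync_each=False)
batched = time.perf_counter() - start

print(f"fsync per write: {per_write:.4f}s, batched: {batched:.4f}s")
```

A battery backed cache gives you the batched behaviour while still guaranteeing the data survives a power cut, which is why it’s safe even for ACID workloads.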
Try splitting the workload over different physical volumes. For example, have separate volumes for small databases, log files, user data, etc., or put the file system journal on another disc.
Each physical disc has a limit on the rate at which data can be moved in or out. The more discs you have, the more I/O capacity you have. You just have to make sure the best RAID configuration is selected:
RAID 0 will have the best read/write performance (but no data safety!). RAID 10 is the next best.
RAID 1 characteristics:
- Write performance is slightly less than a single disc
- Linear read performance is only good with good OS read-ahead
- Read performance otherwise can scale as requests are split over drives
RAID 5 characteristics:
- Decent read performance
- Write performance is only good when writing entire strides (i.e. bulk data writes)
- Database performance is terrible
- RAID 5 when degraded will have terrible performance
RAID 6 is great when reliability is everything.
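The “entire strides” point is about avoiding RAID 5’s read-modify-write of parity: if you buffer output into full-stripe-sized pieces, each stripe can be written in one pass. A sketch, where the stripe size is an assumed example value (4 data discs × 64 KiB chunks), not something queried from a real array:

```python
STRIPE = 256 * 1024   # assumed: 4 data discs x 64 KiB chunk size

def stripe_aligned_chunks(data, stripe=STRIPE):
    """Split a payload into full-stripe pieces plus one trailing remainder."""
    return [data[i:i + stripe] for i in range(0, len(data), stripe)]

payload = b"x" * (STRIPE * 2 + 100)
chunks = stripe_aligned_chunks(payload)
print([len(c) for c in chunks])   # [262144, 262144, 100]
```

Only the final short remainder would incur the parity read-modify-write; every full stripe is written cleanly.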
Ensure that you have sufficient bus bandwidth to the discs
Some technology selection tips:
- PCI-E or PCI-X point-to-point (SAS, SATA) will be better than a shared bus (SCSI LVD, IDE)
- Two IDE devices on the same cable will suck
- Increase the individual disc speed: the higher the RPM the better
- If data safety does not matter, turn on the disc write-back caches
Using some or all of these suggestions, you should be able to generate significant improvements on your server and remove the I/O bottleneck.