Access Time = Command Overhead Time + Seek Time + Settle Time + Latency
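The formula above can be illustrated with a quick calculation. This is a minimal sketch using assumed sample values for a typical 7,200 RPM hard drive; the component figures are illustrative, not measurements from the text.

```python
# Summing the access-time components for a hypothetical 7,200 RPM drive.
# All values below are assumptions for illustration.
command_overhead_ms = 0.5   # assumed controller processing overhead
seek_ms = 4.0               # assumed average seek time
settle_ms = 0.5             # assumed head settle time
latency_ms = 4.17           # avg rotational latency at 7,200 RPM (half a revolution)

access_time_ms = command_overhead_ms + seek_ms + settle_ms + latency_ms
print(f"Estimated access time: {access_time_ms:.2f} ms")  # roughly 9 ms
```

Even with generous assumptions, each term is measured in milliseconds, which is why mechanical access times dominate the I/O path.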
The Problem of I/O Wait Time
Often, additional processing power alone will do little or nothing to improve performance. This is because the processor, no matter how fast, finds itself constantly waiting on mechanical storage devices for its data. While every other component in the "data chain" moves in terms of computation times and the raw speed of electricity through a circuit, hard drives move mechanically, relying on physical movement around a magnetic platter to access information.
In the last twenty years, processor speeds have increased at a geometric rate. At the same time, however, conventional storage access times have only improved marginally. The result is a massive performance gap, felt most painfully by database servers, which typically carry out far more I/O transactions than other systems. Super fast processors and massive amounts of bandwidth are often wasted as storage devices take several milliseconds just to access the requested data.
"When servers wait on storage, users wait on servers."
This is I/O wait time.
Solid state disks are designed to solve the problem of I/O wait time by offering 250x faster access times (0.02 milliseconds instead of 5) and 80x more I/O transactions per second (400,000 instead of 5,000) than RAID.
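The multipliers quoted above follow directly from the figures in the text, as a quick sanity check shows:

```python
# Verifying the speedup multipliers from the figures quoted above:
# 0.02 ms vs 5 ms access time, and 400,000 vs 5,000 I/Os per second.
ssd_access_ms, raid_access_ms = 0.02, 5.0
ssd_iops, raid_iops = 400_000, 5_000

access_speedup = raid_access_ms / ssd_access_ms   # 250x faster access
iops_speedup = ssd_iops / raid_iops               # 80x more IOPS
print(access_speedup, iops_speedup)
```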
Decreasing application performance under heavy user loads is not a new story for most enterprises. As the number of concurrent users increases, response times increase as well. The knee-jerk reaction to this problem is to look at two likely sources of database performance problems:
• Server and processor performance. One of the first things that most IT shops do when performance wanes is to add processors to servers or add servers to server farms.
• SQL statements. Enterprises invest millions of dollars squeezing every bit of efficiency out of their SQL statements. The software tools that assist programmers with the assessment of their SQL statements can cost tens of thousands of dollars. The personnel required for painstakingly evaluating and iterating the code cost much more.
In many cases, these likely suspects mask the true cause of poor database performance: the gap between processor performance and storage performance. Adding servers and processors will have a minimal impact on database performance and will compound the resources wasted, as even more processing power waits on the same slow storage. Tuning SQL can result in performance improvements, but even the best SQL cannot make up for poor storage I/O. In many cases, features that rely heavily on disk I/O cannot be supported by applications. In particular, programs that issue large queries and return large data sets are often removed from applications in order to protect application performance.
When system administrators look to storage, they frequently try three different approaches to resolving performance problems:
• Increase the number of disks. Adding disks to JBOD or RAID is one way to improve storage performance. By increasing the number of disks, the I/O from a database can be spread across more physical devices. As with the other approaches identified, this has a trivial impact on decreasing the bottleneck.
• Move the most frequently accessed files to their own disk. This approach will deliver the best I/O available from a single disk drive. As is frequently pointed out, the I/O capability of a single hard disk drive is very limited. At best, a single disk drive can provide 300 I/Os per second. Fast solid state disk is capable of providing 400,000 I/Os per second.
• Implement RAID. A common approach is to move from a JBOD (just a bunch of disks) implementation to RAID. RAID systems frequently offer improved performance by placing a cached controller in front of the disk drives and by striping storage across multiple disks. The move to RAID will provide additional performance, particularly in instances where a large amount of cache is used.
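The scale of the gap behind all three approaches can be sketched from the figures in the text. This is a simplified model that assumes perfectly balanced striping, which real workloads rarely achieve; the function name and structure are illustrative.

```python
# Naive scaling model: how many conventional disks would be needed to match
# a fast SSD's IOPS, assuming ideal striping (an optimistic assumption).
PER_DISK_IOPS = 300      # best-case figure for a single drive, per the text
SSD_IOPS = 400_000       # figure quoted for fast solid state disk

def disks_needed(target_iops, per_disk=PER_DISK_IOPS):
    """Disks required to reach a target IOPS under perfect load balancing."""
    return -(-target_iops // per_disk)  # ceiling division

print(disks_needed(SSD_IOPS))  # well over 1,300 drives to match one fast SSD
```

Even under this best-case model, matching a single fast SSD would take over a thousand spindles, which is why adding disks only nibbles at the bottleneck.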
Introduction to Solid State Disks
Strictly, a solid state disk (or SSD) is any storage device with no mechanical parts, using RAM as the primary storage medium. Data is stored directly on RAM chips and accessed from them. This generally results in storage speeds far greater than is even theoretically possible with conventional, magnetic storage devices. To fully make use of this speed, SSDs typically connect to servers or networks through multiple high-speed channels.
What separates a solid state disk from conventional memory is non-volatility. An SSD typically includes internal batteries and backup disks so that, in the event of power loss or shutdown, the batteries keep the unit powered long enough for data to be written onto the backup disks. Because of this, SSDs offer the raw speed of system memory without the disadvantage of losing data when powered down. Because of the lack of mechanical devices in the main data chain, SSDs typically have lower maintenance costs and higher reliability (including a higher MTBF) than conventional storage.
"When servers wait on storage, users wait on servers"
"It's What's Inside That Defines You"TM
ExeVP of Strategic Planning/
Special Assistant to the President
****/Definitive Signal, LLC
Direct line: 206-963-4295