I think we’re really starting to hit a watershed with the use of SSD- and DRAM-based storage arrays. Witness the latest announcements from Violin as published over on The Storage Architect:
Enterprise Computing: Violin Memory Inc Release New All-SSD Array:
The Violin Approach
So what happens if you can remove the cost issues and buy an SSD-based array for the same price as tier 1 storage? This is the route Violin Memory are taking to market – make the SSD storage array as closely priced to tier 1 arrays as possible. Remove the thought process and complications of determining what to place on SSD by making the price argument irrelevant.
In reality, Violin haven’t reached that price parity yet; prices are quoted around the $20/GB mark, which is roughly double what I’d expect to see for tier 1 storage (depending on volume). However, it is within the order of magnitude where organisations can look at those troublesome applications and decide that the cost of additional servers, disk spindles or re-writing the application outweighs that of simply moving the application to a Violin SSD device.
I think this is the ultimate tipping point for SSD use; where the cost of improving application performance by other means exceeds the cost of moving to SSD, SSD will win. Where improving application performance is justified by increased business advantage, the business case is written.
(Via The Storage Architect.)
I can say that this describes a portion of the work I’ve been doing with server virtualisation using Hyper-V, ESX and Xen. A few years ago, CPU performance and density were a limiting factor in the number of virtual machines you could load onto a server. As a consequence, a lot of effort went into building analysis tools for server eligibility and sizing, so that you could evaluate your existing physical server estate and know reasonably well how many servers you would need to consolidate it while leaving sufficient margin for failover and planned growth.
With the latest generation of hypervisors and Nehalem-class or newer Intel processors, there is so much consolidated horsepower that, for a small company, the cost of doing this kind of study is prohibitive compared to the eventual hardware and software investment. In fact, the only limiting factors we’re seeing are memory and storage performance. Fortunately, memory requirements are a straightforward calculation. Storage performance analysis remains a bit of a black art, and this type of product can easily solve those bottlenecks when they start popping up.
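To give a feel for how simple the memory side is, here’s a minimal sketch of that back-of-the-envelope sizing; the per-VM allocations, overhead factor and host RAM figure below are assumed example numbers, not measurements:

```python
# Back-of-the-envelope host count driven by memory alone.
# Every figure here is an assumed example, not a measured value.
from math import ceil

vm_memory_gb = [4, 4, 8, 8, 16, 2, 2, 4, 32, 8]  # assumed per-VM allocations (GB)
overhead_factor = 1.10                           # ~10% hypervisor/per-VM overhead (assumed)
host_ram_gb = 192                                # assumed usable RAM per host (GB)
failover_hosts = 1                               # keep one spare host (N+1)

required_gb = sum(vm_memory_gb) * overhead_factor
hosts_for_capacity = ceil(required_gb / host_ram_gb)
total_hosts = hosts_for_capacity + failover_hosts

print(f"{required_gb:.0f} GB required -> {hosts_for_capacity} host(s) for capacity, "
      f"{total_hosts} with N+{failover_hosts} failover")
```

Storage, by contrast, doesn’t reduce to a one-liner like that, which is exactly why the “just move it to SSD” option is attractive.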
It’s obviously too expensive to put everything on this type of storage, but with the simplicity of Storage vMotion, you can easily migrate the log VMDK of your database virtual machine over and see the results immediately. Being able to test storage optimisations on the fly in production, and alleviate performance issues that would otherwise require a complete overhaul of your storage infrastructure, is a huge deal. When the cost of the analysis is higher than the cost of the solution, people will go the easy route.
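For anyone who would rather script that experiment than click through the vSphere Client, here’s a minimal sketch using pyVmomi to relocate a single VMDK to a faster datastore; the vCenter address, credentials, VM name, datastore name and the assumption that the second virtual disk holds the transaction logs are all hypothetical placeholders:

```python
# Minimal sketch: per-disk Storage vMotion of an assumed log VMDK via pyVmomi.
# Host, credentials, VM and datastore names are placeholders for illustration.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    obj = next((o for o in view.view if o.name == name), None)
    view.DestroyView()
    return obj

ctx = ssl._create_unverified_context()                  # lab only: skip cert checks
si = SmartConnect(host="vcenter.example.local",         # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

vm = find_by_name(content, vim.VirtualMachine, "sql01")            # placeholder VM
ssd_ds = find_by_name(content, vim.Datastore, "ssd-datastore-01")  # placeholder SSD datastore

# Assume the second virtual disk is the database log VMDK.
disks = [d for d in vm.config.hardware.device
         if isinstance(d, vim.vm.device.VirtualDisk)]
log_disk = disks[1]

# Relocate only that disk; all other disks and the VM config stay where they are.
locator = vim.vm.RelocateSpec.DiskLocator(diskId=log_disk.key, datastore=ssd_ds)
spec = vim.vm.RelocateSpec(disk=[locator])
task = vm.RelocateVM_Task(spec)
print("Per-disk Storage vMotion started:", task.info.key)

Disconnect(si)
```

If the latency numbers don’t improve, the same call pointed back at the original datastore moves the disk straight back, which is what makes this kind of on-the-fly testing so cheap to try.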
In some ways it reminds me of a desktop support policy at a company I worked for: you try debugging the issue for 30 minutes, and if it’s not solved, you just dump the master image down to the workstation. I think we’ll start seeing the same empirical approach to solving storage performance issues as the price drops on this kind of equipment.