Understanding the value of software in storage

It’s all about the software

In today’s storage world, the reality is that the actual storage components and the surrounding hardware are all commodity-based (with a few exceptions). A storage system is composed of disks, disk cases, communications links, processors, memory and networking.

Fundamentally, the disks are the same ones you can buy from Amazon, NewEgg et al. The only major observable difference is that enterprise storage drives tend to be equipped with SAS or NL-SAS interfaces, which offer a more advanced command set and a more robust architecture permitting dual-path connections, compared to SATA. NL-SAS drives are SATA drives with a smarter controller interface, but the mechanics are identical.

The disk cases | drawers | enclosures (pick your name) are all based on a standard structure with a SAS backplane that the drives slot into, and most of them are OEM’d from a very short list of vendors. Historically, these were often connected using Fibre Channel, but pretty much everyone has come to terms with the fact that FC is unsustainably expensive for this role, and even the latest top-of-the-line VMAX has moved over to SAS for connecting the disk enclosures.

Internally, most proprietary interconnects (think RapidIO) have given way to 40GbE and InfiniBand which, while expensive, are commercially available standard components.

On the processing front, with the exception of HP 3PAR’s custom ASICs, nearly everything else on the market uses standard Intel motherboards with standard Intel processors.

So why are storage systems so expensive? It’s all about the software that adds value to this collection of off-the-shelf parts, making them work together in a coherent fashion and giving you features over and above simply putting bits on disk and maintaining a certain amount of local redundancy.

How much am I paying for this software?

At the simplest level, go over to DELL or Supermicro and spec out a barebones DAS storage system per your requirements, then add in a couple of servers with the number of 10GbE, FC & SAS ports you need. That’s your hardware cost. Then get a quote from your storage provider. Ignore the costs assigned per part or per disk; at the end of the day it’s the negotiated package price that matters. The publicly-quoted prices are fantasies designed to impress the purchasing department with huge rebates. I’ve even seen cases where the exact same part number has different list prices depending on which model of storage controller you’re buying. So the only price that matters is the whole package with rebates.

The difference between the two is the software cost, which you can now compare to a software-only solution like Nexenta or Datacore.
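To make that comparison concrete, here’s a back-of-the-envelope sketch of the arithmetic. Every number below is a hypothetical placeholder; plug in your own commodity quote and negotiated package price.

```python
# Back-of-the-envelope estimate of the implied software premium.
# All figures are hypothetical placeholders -- substitute your own quotes.

commodity_hw_quote = 60_000    # barebones DAS + servers with the 10GbE/FC/SAS ports you need
negotiated_package = 250_000   # storage vendor's all-in price, after rebates

implied_software_cost = negotiated_package - commodity_hw_quote
software_share = implied_software_cost / negotiated_package

print(f"Implied software premium: ${implied_software_cost:,}")
print(f"Share of the package that is really software: {software_share:.0%}")
```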

Now imagine that you are putting that money in the trash after the planned life-cycle of the storage investment, generally 3-5 years. You’ll be buying that software all over again with your next storage acquisition.

The key takeaway here is that the value in storage systems has moved from the actual storage hardware itself to the software. All of the storage components are commodity. IBM, EMC, NetApp et al. do not actually make any of the storage components. The disks are bought from Seagate, Western Digital and Toshiba; SSDs from SanDisk, Intel and Samsung; RAID controllers from LSI; Ethernet from Broadcom & Intel; FC from QLogic and Brocade; motherboards from Intel.

You get integration and the software.

Is there a better way?

The optimal approach would be to buy the commodity hardware and run your own software on it. This is the standard approach of companies like Nexenta and Datacore, which bring all of the value-add features one expects from enterprise storage, such as replication and snapshots, albeit through very different internal mechanisms.

Your software is a one-time cost with maintenance over time, but since it’s just software, the maintenance cost doesn’t skyrocket after 5 years. You replace the hardware as it becomes obsolete or as your needs change, inside the cost-effective 5-year maintenance window, leveraging the software’s tools to make the migrations invisible to the servers consuming the storage. Your storage costs stay reasonable since you’re only paying for the most basic of components, without the markup that accompanies the software integrated into the system.
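As a rough illustration of why the one-time licence matters, here’s a sketch comparing costs over two refresh cycles. The prices, maintenance rate and cycle length are all hypothetical; the point is only the shape of the comparison, not the specific numbers.

```python
# Rough lifecycle comparison over two 5-year refresh cycles.
# All figures are hypothetical and only illustrate the structure of the costs.

CYCLE_YEARS = 5
CYCLES = 2
years = CYCLE_YEARS * CYCLES

# Integrated appliance: the software premium is repurchased with every refresh.
appliance_package = 250_000
appliance_total = appliance_package * CYCLES

# Software-defined approach: licence bought once, maintenance stays flat,
# only the commodity hardware is replaced each cycle.
software_licence = 80_000
annual_maintenance = 0.20 * software_licence
commodity_hw_per_cycle = 60_000
sds_total = software_licence + annual_maintenance * years + commodity_hw_per_cycle * CYCLES

print(f"Integrated appliances over {years} years: ${appliance_total:,}")
print(f"Software + commodity hardware over {years} years: ${sds_total:,.0f}")
```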

DELL Compellent has started thinking this way with its new licensing model: once you get to a certain size, you can replace the controllers for the cost of the hardware and migrate your existing software licences over, which puts it closer to Nexenta and Datacore from a business-model standpoint.

But a lot of IT shops are leery of buying software to take this approach, for a variety of reasons ranging from sales pressure from incumbent vendors (you should see the discounts when they feel threatened) to IT management’s desire to have “one throat to choke” in case anything ever goes wrong.

The other aspect is that while going the software route gives you the ability to choose exactly what you want, it can also be a burden for IT shops that no longer have the in-house expertise to do basic server and storage design. Freedom of choice also brings the responsibility of making the right choices.

So when evaluating storage solutions, try to figure out exactly what you are paying for, and understand how much of your investment is tied to the way that you buy it.