Server virtualization is dead. Long live server virtualization! The analysts agree: 75% of x86 servers are virtualized, and the room to maneuver is getting tight. Reason enough for VMware, market leader and virtualization specialist, to look for other things – different ways and means to accommodate even more of their customers’ needs. And here there will be no escaping a move into areas in which established providers will resolutely defend their own spot in the sun: storage, management and mobility (here, VMware simply purchased the market leader), alongside VDI and networking.
What do all these solutions have in common? They are built around software, turning data centers into “software-defined data centers” (to quote VMware). Put bluntly: cheap commodity gear from the discounter learns a few tricks thanks to intelligent software and becomes independent of the hardware supplier. The principle worked just fine with server virtualization. So what could possibly go wrong for VMware with software-defined storage (SDS) as a new mainstream trend?
What has the storage market looked like so far? Taken to the extreme, the industry heavyweights with their NAS and SAN systems mainly have their gear made in the same assembly shops, paste their names on the panel and deliver the storage intelligence in their controllers. You could claim that this, too, is software-defined – just tied together as an appliance. Why? That’s simply the way some customers like it. It is also true that dealing in appliances is, in places at least, substantially more lucrative for vendors. Some of these providers offer controllers that can be installed upstream to manage standard storage arrays from other manufacturers.
Pure software solutions also exist, although there is still no industry definition of SDS. So some wiggle room remains in the interpretation, and comparability suffers. SDS should deliver the same benefits as server virtualization: abstraction of the hardware layer, optimal utilization of resources and storage on demand, plus storage classics like deduplication, snapshots and replication. Moreover, in a perfect world it will naturally speed up read/write processes by pooling everything that can store data (hard disks, RAM, flash memory etc.). The best-case scenario will also include user self-service and, in the really great products, a cost allocation model as well. Oh yes, and it should cost less than “standard” solutions. The advertising claims (sorry, data sheets) all plug the same clear message: everyone is promising the SAN of the future without the accustomed complexity of previous models.
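To make the abstraction idea more concrete: the core trick behind every SDS product is a software layer that hides heterogeneous hardware behind one interface, places data in the fastest tier with free space, and deduplicates identical blocks. Here is a deliberately minimal, hypothetical sketch of that idea (the class and tier names are invented for illustration – no real product works exactly like this):

```python
import hashlib

class StoragePool:
    """Toy software-defined storage pool.

    Abstracts several backing tiers (e.g. flash, disk) behind one
    read/write interface and deduplicates identical blocks by content
    hash. Purely illustrative, not a real SDS implementation.
    """

    def __init__(self, tiers):
        # tiers: ordered dict-like mapping of tier name -> capacity
        # in blocks, fastest tier first
        self.tiers = {name: {"capacity": cap, "blocks": {}}
                      for name, cap in tiers.items()}
        self.index = {}  # content hash -> (tier name, reference count)

    def write(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key in self.index:  # deduplication: block already stored once
            tier, refs = self.index[key]
            self.index[key] = (tier, refs + 1)
            return key
        for name, tier in self.tiers.items():  # fastest tier with room
            if len(tier["blocks"]) < tier["capacity"]:
                tier["blocks"][key] = data
                self.index[key] = (name, 1)
                return key
        raise IOError("pool exhausted")

    def read(self, key: str) -> bytes:
        tier, _ = self.index[key]
        return self.tiers[tier]["blocks"][key]


# The caller never sees which hardware holds the block.
pool = StoragePool({"flash": 2, "disk": 100})
k1 = pool.write(b"block one")
k2 = pool.write(b"block one")  # identical content: stored only once
assert k1 == k2
assert pool.read(k1) == b"block one"
```

Real products add the features the data sheets promise on top of exactly this kind of indirection layer: snapshots become copies of the index, replication copies blocks to a second pool, and thin provisioning reports capacity that is not yet backed by hardware.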
There are plenty of examples, for instance DataCore, StarWind, Microsoft or VMware – also HP, IBM and EMC, even PernixData in a flash environment. Indeed, cloud-based solutions are all inherently SDS solutions, some of them with additional speed thanks to hardware components or appliances. What do most of them have in common? Things usually get quite complicated – not the question itself, but the storage solution. Companies will need storage administrators who require qualification and further training, and whose work gobbles up resources. Just a few years ago, the problem facing small and medium-sized enterprises in server virtualization was less the idea of getting to grips with the hypervisor and more the question of which SAN to purchase, with all the consequences that would entail. Now it is the storage hypervisor: a solution that is entirely independent of the hardware and only needs to satisfy industry standards.