Yes, vSAN! Sounds a bit like “Yes, we can!” Is it possible that VMware – like Barack Obama before it – borrowed this slogan from Bob the Builder when developing vSAN? After all, like them, the virtualization specialist is venturing into uncharted territory by launching its own software-defined storage solution. Marco Vogel, VMware expert at SoftwareONE, tells us exactly why this foray is nevertheless a resounding success.
Server virtualization is dead. Long live server virtualization! The analysts agree: 75% of x86 servers are virtualized, and the room to maneuver is getting tight. Reason enough for VMware, market leader and virtualization specialist, to look for other things – different ways and means to accommodate even more of their customers’ needs. And there is no escaping a move into areas in which established providers will resolutely defend their own spot in the sun: storage, management and mobility (here, VMware simply purchased the market leader), alongside VDI and networking.
What do all these solutions have in common? They are built around software, turning data centers into software-defined data centers (quote: VMware). Put bluntly: cheap gear from the discount store learns a few tricks thanks to intelligent software and becomes independent of the hardware supplier. The principle worked just fine with server virtualization. So what could possibly go wrong for VMware with software-defined storage (SDS) as a new mainstream trend?
What has the storage market looked like so far? Taken to the extreme, the industry heavyweights with their NAS/SAN systems mainly get their gear made in the same assembly shop, “paste” their names on the panel and deliver the storage intelligence on their controllers. You could claim that this is also software-defined, just tied together as an appliance. Why? That’s simply the way some customers like it. On the other hand, it is also true that dealing in appliances is, in places at least, substantially more lucrative for vendors. Some of these providers offer controllers that can be installed upstream to manage standard storage arrays from other manufacturers.
Pure software solutions also exist, although there is still no industry-wide definition of SDS. So some wiggle room remains in the interpretation, and comparability suffers. Storage should offer the same qualities as server virtualization: abstraction of the hardware layer, ideal utilization of resources and storage on demand, plus storage classics like deduplication, snapshots and replication. Moreover, in a perfect world it will naturally speed up the read/write processes by pooling everything that can store data (hard disks, RAM, flash memory etc.). The best-case scenario will also include user self-service and, in the really great products, a cost allocation model as well. Oh yes, and the costs should be lower than for “standard” solutions. The advertising claims (sorry, data sheets) all plug the same clear message: everyone is promising the SAN of the future without the accustomed complexity of previous models.
There are plenty of examples, for instance Datacore, StarWind, Microsoft or VMware – also HP, IBM and EMC, and even Pernix in a flash environment. Indeed, cloud-based solutions are all inherently SDS solutions, some of them gaining additional speed from hardware components or appliances. What do most of them have in common? Things usually get quite complicated – not the answer itself, but the storage solution. Companies will need storage administrators who require qualification and further training, and whose work gobbles up resources. Just a few years ago, the problem facing small to medium-sized enterprises looking at server virtualization was less the idea of getting to grips with the hypervisor and more the question of which SAN to purchase and all the consequences this would entail. Now it is the storage hypervisor: a solution that is entirely independent of the hardware and only needs to satisfy the industry standard.
VMware takes an interesting approach here by pooling the servers’ local storage resources (flash and magnetic disks). And now comes the key aspect: the whole shebang is fully integrated in the VMware flagship vSphere (version 5.5 and above). It hardly takes more than a mouse click to activate VMware software-defined storage. This is not a virtual storage appliance like the others; it lives inside the general VMware hypervisor rather than requiring a dedicated storage hypervisor. So what comes next is a really exciting question: transferring VMs – or rather their underlying data – from traditional storage arrays to a virtual SAN? Steps 1, 3, 5 and 6. Finished! Up and running! Sounds simple. Our consultants confirm that it is. Flash-cached magnetic data carriers and no additional storage arrays needed – though the servers will once again have to be fully equipped.
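For admins who prefer scripting to mouse clicks, activating vSAN boils down to a single cluster reconfiguration call. The following is a minimal, illustrative sketch using VMware’s open-source pyVmomi SDK; the vCenter address, credentials, cluster name and the automatic disk-claim setting are placeholder assumptions, not a recommended production setup.

```python
# Minimal sketch: switching on vSAN for an existing vSphere cluster via pyVmomi.
# Hostname, credentials and cluster name are placeholders (assumptions).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_cluster(content, name):
    """Walk the inventory and return the cluster with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    try:
        for cluster in view.view:
            if cluster.name == name:
                return cluster
    finally:
        view.DestroyView()
    raise RuntimeError("Cluster not found: %s" % name)

ctx = ssl._create_unverified_context()  # lab use only, skips certificate checks
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    cluster = find_cluster(si.RetrieveContent(), "Compute-Cluster")

    # Cluster reconfiguration spec that enables vSAN and lets each host claim
    # its local disks automatically -- the "one mouse click" equivalent.
    spec = vim.cluster.ConfigSpecEx(
        vsanConfig=vim.vsan.cluster.ConfigInfo(
            enabled=True,
            defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
                autoClaimStorage=True)))
    task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
    print("vSAN enablement task started:", task.info.key)
finally:
    Disconnect(si)
```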
vSphere Web Client acts as the management level. VMware admins are already familiar with this product and will not need any extra training. Storage policies (guidelines) can be created and distributed right here. If things go the way VMware would like them to, admins will be banned from using the term LUN. And once the virtual storage pool fills up, all that needs to be done is to add further data carriers to the existing servers or fully equipped ESXi hosts to the farm – finished. The simple way to scale. And it performs well, too.
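To give a feel for what such a storage policy means in practice, here is a small illustrative sketch (not VMware code) that models the two best-known vSAN policy rules – number of failures to tolerate and stripe width – and estimates the raw capacity a mirrored VM disk consumes under them. The sizes and rule values are example assumptions.

```python
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    """Simplified model of two common vSAN policy rules."""
    failures_to_tolerate: int = 1   # FTT: how many host/disk failures to survive
    stripe_width: int = 1           # disks each replica is striped across

def raw_capacity_gb(vmdk_size_gb: float, policy: StoragePolicy) -> float:
    """Raw capacity consumed when FTT is satisfied by mirroring (RAID-1).

    Each tolerated failure adds one full replica, so FTT=1 stores the data
    twice. Stripe width only spreads a replica over more disks for performance;
    it does not change capacity. Tiny witness components are ignored here.
    """
    replicas = policy.failures_to_tolerate + 1
    return vmdk_size_gb * replicas

if __name__ == "__main__":
    policy = StoragePolicy(failures_to_tolerate=1, stripe_width=2)
    for size in (100, 500, 2000):
        print(f"{size} GB VMDK under FTT={policy.failures_to_tolerate}: "
              f"{raw_capacity_gb(size, policy):.0f} GB raw capacity")
```

The point of the model: every additional tolerated failure adds a full copy of the data, which is exactly the kind of trade-off the policies let admins express per VM instead of per LUN.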
All of this sounds good, and it is for a variety of applications. But of course there are still challenges. Data needs to be moved between the hosts, and anything running on less than 10 Gbit networking will not deliver adequate performance. The maximum number of ESXi hosts in a storage cluster is also limited.
So far, the main applications have been in the area of virtual desktop infrastructures (VDI), where it is important not to be overly frugal with the SSDs. Understandably, though, the hurdles are a little higher in the area of (productive) servers and the substantially stricter SLAs they require. Companies trust in experience, and in the area of storage they want long-standing, proven experience. Although our customers have responded very favorably to version 6.2, which delivers substantially higher usable storage capacity with its new deduplication, compression and erasure coding features, there is still a lack of long-term empirical data. So it is a good idea to test VMware software-defined storage, or vSAN if you will, with less sensitive data. And just to add some flavor to the term “sensitivity”: there are companies out there running vSAN for their productive SAP environments.
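As a rough back-of-the-envelope illustration of why version 6.2 changes the capacity equation, the sketch below compares the protection overhead of classic mirroring with the new erasure coding layouts and applies an assumed deduplication/compression ratio. The 2x, 1.33x and 1.5x factors are the standard overheads for FTT=1 mirroring, RAID-5 and RAID-6 erasure coding; the raw capacity and dedup ratio are made-up example values.

```python
# Back-of-the-envelope capacity comparison for vSAN 6.2 space-efficiency features.
# Overhead factors: RAID-1 mirroring at FTT=1 (2.0x), RAID-5 erasure coding
# (1.33x, 3+1) and RAID-6 erasure coding (1.5x, 4+2).
LAYOUTS = {
    "RAID-1 mirror (FTT=1)": 2.0,
    "RAID-5 erasure coding (FTT=1)": 4 / 3,
    "RAID-6 erasure coding (FTT=2)": 1.5,
}

def logical_capacity_tb(raw_tb: float, overhead: float, dedup_ratio: float = 1.0) -> float:
    """Logical data that fits into a given raw capacity.

    dedup_ratio > 1 models the combined effect of deduplication and
    compression on the data stored in the usable space.
    """
    return raw_tb / overhead * dedup_ratio

if __name__ == "__main__":
    raw_tb = 40.0       # example all-flash cluster raw capacity (assumption)
    dedup_ratio = 2.0   # assumed, heavily workload-dependent
    for name, overhead in LAYOUTS.items():
        print(f"{name}: {logical_capacity_tb(raw_tb, overhead, dedup_ratio):.1f} TB logical")
```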
Additionally, the whole package will require vSphere, and only the data from virtual machines (servers or desktops) can be stored here.
vSAN is a smooth, well-oiled system designed for versatile scenarios. Performance, superb operability and linear scaling options make vSAN one of the top software-defined storage solutions around. The cost savings that VMware software-defined storage delivers are equally convincing. Some limitations remain, however, and the long-term implications are not yet fully known. SoftwareONE recommends: it’s absolutely worth checking out! It is a good idea to start live server operations with less critical data, or to use vSAN for VDI environments or as a disaster recovery option. VMware software-defined storage will not crowd out the classic storage providers in the foreseeable future, among them established industry giants like NetApp, EMC, HDS etc. But it will certainly give them a headache. And many customers will be delighted to accept this solution.