Thursday, March 6, 2008

From P2V to Z2V


What are the "waves" of virtualization? What does the recent history look like, and what is the near future likely to look like?

Industry analysts for the most part peg enterprise virtualization penetration at about 20% by the end of '08, rising to roughly 50% by the end of 2011. For a variety of sources, see the folks over at Virtualization.Info for more analyst data.

Here is how I interpret the past and the future given these numbers.

Wave I - Server Consolidation
I used VMware about 10 years ago at my first start-up, Bedouin. We used it to encapsulate the version control system CVS into what today would be called a "virtual appliance". At the time our usage was not common, and was in fact downright mystifying to the VMware business development people. The prevalent use-case at that time was consolidation of Microsoft software servers like Internet Information Server and Exchange Server. Since the Microsoft OS allowed only one instance of these servers to run on a given copy of the OS, federated environments ended up with multiple physical machines just to host departmentally controlled servers, none of which necessarily utilized the hardware fully. The gains here were extremely tactical: the ability to consolidate similar software servers run by departments onto one machine for ease of administration.

Wave II - Server Migration (P2V)
Beginning about 4 years ago, early adopters began the practice of P2V, or Physical-to-Virtual. This involves running an agent that converts the hard drives of a physical server into virtual hard drives incorporated into a virtual server. Each of the "core" virtualization providers (VMware, XenSource, Parallels, etc.) provides this capability for its own brand of virtualization. Additionally, some open source utilities like Ultimate-P2V, and commercial products from LeoStream and PlateSpin (just acquired by Novell), can output from a physical server to multiple virtualization formats.
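To make the mechanics a bit more concrete, here is a minimal sketch of the disk-conversion half of the process. It is not any vendor's actual agent; it assumes a raw image has already been captured from the physical machine (for example with dd) and that the stock qemu-img utility is available, and the format mapping is purely illustrative:

```python
#!/usr/bin/env python
"""Minimal sketch of the disk-conversion step in a P2V migration.

Assumes a raw image of the physical disk has already been captured
(e.g. `dd if=/dev/sda of=server01.img`) and that `qemu-img` is
installed. Real P2V agents also fix up drivers, boot records, and
hardware references inside the guest; this sketch only re-packages
the bits into a hypervisor-specific virtual disk format.
"""
import subprocess

# Illustrative mapping from target platform to a qemu-img output format.
TARGET_FORMATS = {
    "vmware": "vmdk",   # VMware ESX / Workstation disks
    "xen": "qcow2",     # a common choice for Xen guests
}

def convert_disk(raw_image: str, platform: str) -> str:
    """Convert a captured raw disk image into the requested virtual disk format."""
    fmt = TARGET_FORMATS[platform]
    out_path = raw_image.rsplit(".", 1)[0] + "." + fmt
    subprocess.check_call([
        "qemu-img", "convert",
        "-f", "raw",    # source: raw block-for-block capture
        "-O", fmt,      # destination virtual disk format
        raw_image, out_path,
    ])
    return out_path

if __name__ == "__main__":
    print(convert_disk("server01.img", "vmware"))
```

The disk conversion is the easy part; the commercial tools earn their keep on the surrounding work of driver injection, boot-record repair, and re-mapping hardware references inside the guest.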

There is definitely one huge "pro" for organizations that aggressively do this style of server migration: the complete decoupling of your hardware capital investment from your software capital investment. What is the rule once you get a piece of software installed and running on a server? Answer: DON'T TOUCH IT - EVER!!! NEVER! P2V lets you break this rule with relative impunity. Once you have migrated a given hardware server to its virtualized counterpart, you can move it to more powerful hardware over time. You can even move it to run in a different physical location with only a small amount of risk. Good stuff - IT gains structural agility in a key dimension.

What is the big "con"? Basically you have migrated your legacy into the future. It was low risk - but where you once had big, bloated, over-provisioned, hardware servers with generational accretion of software bits of unknown provenance, you now have big, bloated, over-provisioned, virtual servers with generational accretion of software bits of unknown provenance. Because these are virtual bloatware, you aren't going to get much of a gearing ratio between your new virtual servers and the underlying hardware. Maybe you can now run 2 VMs on one server, but in some cases you might end up running only one VM on a physical server. BUT, you gained the ability to manage the two streams (hardware and software) separately, which does help. See this recent article where a smart CIO has gained from a Wave II approach, but he cautions "When assessing the cost of converting to a virtual environment, it's important to realize that virtualization requires additional network storage since it takes 20 GB to load the OS of a virtual machine." There, on the topic of Wave II bloatware - I rest my case. Wave III eliminates this problem and allows you much higher numbers of VMs per physical server.

Wave III - Server Innovation (Z2V)
In this wave you attain true server agility because you are doing "Zero to Virtual": building from component libraries straight to virtual servers in any VM format, with no initial physical footprint. The big win here is the ability to do "lean" provisioning. You use a small footprint OS (one you have configured yourself, or Red Hat AOS, or Ubuntu JeOS, or one of the other small footprint OSes intended for virtualized servers) as the base of the server. This means a smaller attack surface, streamlined administration, no over-provisioning of unused commercial licenses, and more virtual servers per physical host. A knock-on effect of these lean machines is greater mobility, making it easier to leverage utility or cloud infrastructures.
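As an illustration of what "assembling from components" means in practice, here is a minimal, vendor-neutral sketch. The component names and the manifest layout are hypothetical, not CohesiveFT's, rPath's, or FastScale's actual formats: start from a JeOS-style base, declare only the components the workload needs, resolve their dependencies, and emit a build manifest that an image builder would turn into a VM in the desired format.

```python
"""Hypothetical sketch of Z2V-style assembly: declare a lean base plus only
the components a workload needs, resolve dependencies, and emit a manifest
an image builder could turn into a VM. Names and layout are illustrative."""

# Tiny illustrative component repository: name -> (packages, dependencies)
COMPONENTS = {
    "jeos-base":  (["kernel-virt", "busybox", "openssh"], []),
    "java-stack": (["jre", "tomcat"], ["jeos-base"]),
    "mq-broker":  (["activemq"], ["java-stack"]),
}

def resolve(wanted):
    """Return the requested components plus all transitive dependencies, base first."""
    ordered = []
    def visit(name):
        for dep in COMPONENTS[name][1]:
            visit(dep)
        if name not in ordered:
            ordered.append(name)
    for name in wanted:
        visit(name)
    return ordered

def build_manifest(server_name, wanted, target_format="vmdk"):
    """Assemble a lean build manifest: only what the workload asked for."""
    components = resolve(wanted)
    packages = [p for c in components for p in COMPONENTS[c][0]]
    return {
        "server": server_name,
        "output_format": target_format,   # vmdk, qcow2, etc.
        "components": components,
        "packages": packages,             # no unused middleware, no 20 GB base image
    }

if __name__ == "__main__":
    # The 101st server: straight to a virtual container, no physical footprint.
    print(build_manifest("mq-node-101", ["mq-broker"], target_format="qcow2"))
```

The point of the sketch is the shape of the workflow, not the code: the definition of the server is a small declarative manifest, so rebuilding it for a different VM format, or on a different host, is a parameter change rather than a migration project.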

Speed. Agility. Lower costs. I like it.

Think about it. If you have 100 servers, and you have dutifully "P2V-ed" them, what do you do for your 101st server? Do you really build that 20 GB OS image plus over-provisioned application stack and middleware (DVDs, InstallShield, InstallAnywhere, tar, zip, gz, and more)? Then run the P2V agent? Then move it off a physical server? Or do you use a component repository approach and go straight to the virtual container?

Where do you find Z2V solutions? Three good examples of Z2V out there are my company CohesiveFT (obvious bias on my part), rPath, and FastScale. We each have a different approach to the problem, with frankly different go-to-market models and target customers. This isn't the place (at least today) for compare and contrast. The point is that Z2V is fast, agile, and allows you to capitalize on the wave of innovation happening in data center computing today.

In summary
I think Wave I of virtualization is done. It might still be happening in the far left tail of the distribution, but there is too much to gain in capitalizing on Wave II and Wave III opportunities to stop at a Wave I approach. The focus of my company is Wave III, allowing customers to pursue "server innovation" through Z2V, but that doesn't mean there isn't a whole lot of P2V to be done out there. That's why Novell paid $205 million for PlateSpin. That said, I think P2V will be the approach of choice for organizations with a fairly static server infrastructure, and the approach of choice for migrating legacy.

Organizations that have loosely-coupled distributed computing infrastructures (running things like web services stacks, enterprise service buses, message queues, application platforms, page servers, etc.) already have an active application refresh rate. In this refresh process they can use Wave III techniques to gain more leverage. Because of this, it would be too simplistic to say "P2V is for legacy" and "Z2V is for greenfield projects". It is more likely that the refresh rate of an organization's commercial and proprietary applications will be the determinant for which approach to take.

At CohesiveFT we focus on the Wave III model, but we expect that organizations will be running both Wave II and Wave III models simultaneously over the next couple of years. Of course, time will tell.