The Bigger Picture: Virtualization + Management

by Shijaz Abdulla on 01.11.2010 at 00:52

“Virtualization without management is more dangerous than not using virtualization in the first place.”
– Tom Bittman, Gartner VP & Analyst

So you realized that you need to virtualize. The idea of being able to run multiple workloads on a smaller number of boxes sounded interesting to you. You saw how virtualization can save you many a buck in hardware maintenance, energy, cooling, and rack space, and you were fascinated by it.

Server consolidation – now that’s a term you liked to hear. Sounds like it’s going to simplify things, doesn’t it? The idea of putting more eggs in one basket. Yes, it reduces cost, but how are you going to ensure that these baskets are strong enough to hold your eggs and that they won’t break under pressure?

So you let the ‘Hypervisor wars’ begin – Hyper-V, VMware – you decided and chose your weapon. “What next?” you say. Well, the battle has not yet begun. Today the hypervisor is more like a commodity: whichever one you choose, it will let you virtualize – the art and science of creating a thin layer that abstracts operating system environments from the underlying hardware.

Some hypervisors provide more features than others. The choice is simple. What matters to you is (1) which of these hypervisor features you really need for your business, and (2) whether the feature you get is worth the cost and complexity that particular hypervisor brings with it.

I am of the view that any technology you implement should contribute to the business of your organization. If it does not support the business, then that technology is useless to your organization. If you, for example, chose VMware to virtualize your (otherwise primarily Microsoft) datacenter just because it offers ‘memory overcommit’, a feature you will probably never use in the first place (because it is not recommended for production), then you know exactly what I’m talking about.

You could go online and spend hours scouring pages and pages of information comparing hypervisors from Microsoft and VMware, but what you’re looking at is basically a piece of code between 1.7 and 3.6 GB in size.

So what does really matter?

What really matters is Management. The ability to manage and monitor every service you offer in your datacenter, end-to-end, physical or virtual. What your users see is not the hypervisor, it is way beyond that – the users see the service that you’re offering. And a robust management tool helps you ensure that services you offer are healthy so that you meet your SLA.

Imagine being able to look at one single monitoring dashboard that proactively alerts you of problems on hardware, operating system environments, virtualization layers, apps running on physical servers, and apps running on virtual servers. Imagine being able to look at one interface to discover that your hardware is overheating, or that your server power supply is not redundant, and then look at the same interface to discover a shortage of disk space on your physical host server or that an application service has stopped. Imagine looking at the very same interface to know that the outbound mail queue on your Exchange Server machine that you virtualized on Hyper-V (or VMware) is building up faster than it should, or discovering that a service on one of your Linux servers virtualized on Hyper-V is failing.

That, my friends, is what I call robust end-to-end management. Microsoft System Center provides you just that. Don’t virtualize without it.

Let’s take one step forward. Let’s say you’re a local bank and you have virtualized your web servers on Hyper-V. You’ve deployed System Center components for managing your gear. Let’s say that you get the most hits on your website during the day. During your “peak” operating hours you need 3 machines in a load balancing configuration to handle the load. During “off peak” hours you barely have any traffic, so all you need is one server. In the absence of virtualization or management, you would still leave 3 physical machines running 24/7 to handle the load.

But when you have virtualization with System Center, things are different. System Center Operations Manager is monitoring your servers (physical and virtual) 24/7. You can configure System Center to raise an alert when the number of transactions on your application running on IIS on the first server exceeds a threshold ‘x’, and trigger an event that automatically starts the 2nd virtualized web server, and the third, and so on as the number of transactions increases. Similarly, when the number of transactions drops, the additional servers can be powered off automatically, freeing up processor, memory and other resources on the host machine, which can potentially be used by other services that require additional servers to be powered up during ‘off peak’ hours. Hence, you are able to run more servers than the capacity of your Hyper-V host machine by dynamically provisioning and de-provisioning servers and efficiently utilizing your resources, because your management tool can now see inside your virtual machines. What this means, basically, is that you get ‘more bang for the buck’.
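To make the scale-out/scale-in behaviour concrete, here is a minimal sketch of the decision logic described above. The threshold values, VM names, and the `reconcile` function are all hypothetical illustrations; in a real deployment this would be an Operations Manager alert rule wired to a Virtual Machine Manager action, not hand-written code.

```python
# Illustrative sketch only: hypothetical thresholds and VM names.
SCALE_OUT_THRESHOLD = 500   # transactions/sec that triggers another web server
SCALE_IN_THRESHOLD = 100    # transactions/sec below which one is powered off

web_servers = ["WEB01", "WEB02", "WEB03"]   # WEB01 always stays running
running = ["WEB01"]

def reconcile(transactions_per_sec):
    """Start or stop virtual web servers based on the current load."""
    if transactions_per_sec > SCALE_OUT_THRESHOLD and len(running) < len(web_servers):
        # In reality: System Center triggers a 'start virtual machine' task.
        running.append(web_servers[len(running)])
    elif transactions_per_sec < SCALE_IN_THRESHOLD and len(running) > 1:
        # In reality: System Center triggers a 'stop virtual machine' task.
        running.pop()
    return list(running)
```

Each alert evaluation adds or removes at most one server, so the farm ramps up and down gradually as the transaction rate crosses the thresholds.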

And this is what I’d call a ‘Dynamic Datacenter’. And that’s where we’re taking you with System Center and virtualization. We’re not arguing over who’s got the smallest hypervisor; we’re giving you the much bigger picture and what really matters to you and your datacenter at large.

VMware’s vCenter, on the other hand, does not see “inside” the guest. It cannot monitor the number of connections/sec on your web service, or the length of your Exchange mail queue or the number of transactions on your database. It just sees your VM from a hypervisor perspective and does not know how the application on that VM is performing (or even if it is running). And that’s not sufficient from a service level perspective.

Even if you run VMware, System Center can still work together with it and manage your VMware environment – but of course this is an integration with vCenter. You still have an island of a management tool that you’re joining together with System Center. When it’s an all-Microsoft platform, you definitely have the Microsoft advantage. Everything’s integrated by default and everything works.

Thanks for reading and be sure to subscribe to this blog for more to come.


4 Responses to The Bigger Picture: Virtualization + Management

  1. The Bigger Picture: Virtualization + Management – http://tinyurl.com/2uljwq3 #HyperV #SystemCenter

  2. Larry says:

    I love Microsoft products but I am sorry, Hyper-V is weak compared to VMware. Memory overcommit is real and used every day, in production. So is power management.

    Here is an example of using both at the same time. Let’s say I have a 3-server web farm. Let’s say I have 4 VMware hosts in a cluster. At night, when usage is little to nothing, VMware will move those 3 web servers and more servers onto a single ESXi host, overcommitting the memory because usage is very low, and the other 3 ESXi hosts go to sleep. In the morning when web traffic picks up, vCenter will wake up the other hosts and move those web servers or other guests onto other hosts.

    Stepping back from your examples, VMware is stupid easy to set up. Clustering is about 1000x easier in VMware. ESXi has a tiny footprint compared to Windows 2008 R2 Enterprise. ESXi takes about 5 minutes to install… if that. VMware will use NFS file-based storage, and on the right storage vendor (NetApp) that file-based storage will be deduped on the fly on primary disk. Using NFS there are no LUN size problems, LUN locking, LUN queues, etc. With NFS you can grow and shrink volumes on the fly.

    Hyper-V has a way to go. They need to support file-based storage for one; CIFS/NFS volumes on an enterprise-class NAS over 10-gigabit Ethernet would be a great start.

  3. Shijaz says:

    Hi Larry,

    Thanks for your comment! 🙂

    Memory overcommit gives your VMs the “feeling” that there is approximately 20% more RAM (a real-world figure) than there actually is on your host machine. Imagine that the 2 physical hosts have 10 GB of RAM each and you have 3 VMs on each host that need 2 GB of RAM each. In your example, if utilization drops, all 6 VMs can move to one host, and that host can overcommit, presenting 12 GB of RAM to the machines on top of its 10 GB of physical memory. So this means you have put 1 node to sleep, while you are utilizing one node and, in reality, 10 GB of RAM.

    Now consider the same example with Hyper-V and System Center. Since System Center can see the application or the service and understand how this application uses memory, and when and why, System Center can determine that it needs only 1 virtual machine running during off-peak hours. So it simply shuts down the unneeded virtual machines; the net usage of RAM is only 2 GB, and you have just freed up 8 GB more RAM than in your VMware scenario! Which means you saved 80% “real” memory, which is a higher saving when compared to the 20% “overcommitted” memory in VMware.
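    The arithmetic in the two scenarios above can be laid out explicitly. These are the illustrative figures from the comment (two 10 GB hosts, six 2 GB VMs), not measurements:

```python
# Illustrative figures from the example above.
host_ram_gb = 10      # physical RAM on the one host left awake
vms = 6               # total VMs across both original hosts
ram_per_vm_gb = 2     # memory each VM needs

# VMware scenario: all six VMs consolidated onto one host, with memory
# overcommitted: 12 GB promised against 10 GB of physical RAM.
overcommit_demand_gb = vms * ram_per_vm_gb   # RAM promised to the VMs
physical_used_gb = host_ram_gb               # RAM actually consumed

# System Center scenario: only one VM is needed off-peak; the rest are
# shut down, so real memory usage drops to that single VM's footprint.
sc_used_gb = 1 * ram_per_vm_gb
freed_gb = physical_used_gb - sc_used_gb     # extra RAM freed vs VMware
```

    So the overcommit route leaves 10 GB of physical RAM consumed while promising 12 GB, whereas shutting down idle VMs leaves only 2 GB in use.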

    When you say “At night when usage is little to nothing…”, question: how does VMware define “usage”, and just how much is “little to nothing”? As far as I understand, VMware monitors only the hardware resource utilization of each VM. It does not look inside the VM to see what applications are running, what process requires these resources, and when. If you wanted to define “usage is little to nothing” in System Center, you would define it as, for example, when the connections per second on the IIS web service drop below threshold X, or the number of email messages in the queue of the mail server falls below threshold Y. So you see, the definitions are not tied to parameters of mere hardware utilization.

    What you offer to your users is the service and not the server. And that’s what the private cloud concept revolves around. Unless your management platform has the visibility to look at the service and make intelligent decisions on resource allocation, you are still lacking what it takes on the journey to a private cloud.

    Moving on to your other comments:
    – Hyper-V is even easier to set up 🙂 – all you need is 8 clicks. http://technet.microsoft.com/en-us/library/cc732470(WS.10).aspx
    – Clustering is easier in Hyper-V. Any Windows administrator who knows how to create a Windows cluster can create a Hyper-V cluster.
    – How does a lower hypervisor footprint contribute to your business? What is the tangible benefit? Anyway, see this: http://blogs.technet.com/b/virtualization/archive/2009/08/12/hypervisor-footprint-debate-part-1-microsoft-hyper-v-server-2008-vmware-esxi-3-5.aspx
    – Hyper-V takes 5 minutes or less to install.
    – I will post about the NFS in a separate comment.

    Once again, thanks for stopping by.
    Shijaz

  4. Pingback: What is a cloud? (and why should I care) | microsoftNOW
