Virtual Machine Manager Self Service Portal 2.0 SP1 Beta is available

by Shijaz Abdulla on 10.03.2011 at 11:01

The Self Service Portal 2.0 (SSP 2.0) Service Pack 1 (SP1) beta is now available for System Center Virtual Machine Manager (SCVMM).

(wow, that’s a lot of abbreviations, isn’t it?)

The VMM SSP is a fully supported, partner-extensible solution built on top of Windows Server 2008 R2, Hyper-V, and System Center VMM. You can use it to pool, allocate, and manage resources to offer infrastructure as a service and to deliver the foundation for a private cloud platform inside your datacenter. VMMSSP includes a pre-built web-based user interface that has sections for both the datacenter managers and the business unit IT consumers, with role-based access control. VMMSSP also includes a dynamic provisioning engine.

What’s new with VMMSSP 2.0 SP1?

Import virtual machines: Allows DCIT (datacenter IT) administrators to re-import virtual machines that were removed from the self-service portal, and to import virtual machines that were created outside the portal but are managed by VMM.

Expire virtual machines: Lets users set an expiration date on virtual machines that are being created or imported, so that the virtual machines are automatically deleted after that date. Role-based access controls which users can set or change a virtual machine's expiration date.

Notify administrators: Notifies BUIT (business unit IT) or DCIT (datacenter IT) administrators by email about various events in the system (for example, submit request, approve request, expire virtual machine, and so on) through SQL Server mail integration.

Move infrastructure between business units: Allows DCIT (datacenter IT) administrators to move infrastructure from one business unit to another while the system is in maintenance mode.
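At its core, the expiration feature above is a scheduled sweep that compares each virtual machine's expiry date against the current time and deletes the ones that have passed it. Here is a minimal sketch of that logic in Python; the portal itself implements this inside its provisioning engine, and the `delete_vm` callback and the VM record shape used here are hypothetical stand-ins:

```python
from datetime import datetime, timezone

def expire_vms(vms, delete_vm, now=None):
    """Delete every VM whose expiration date has passed.

    `vms` is a list of dicts, each with a 'name' and an optional
    'expires_at' datetime; VMs without an expiration date are kept.
    `delete_vm` is a callback (hypothetical) that removes the VM.
    Returns the names of the VMs that were deleted.
    """
    now = now or datetime.now(timezone.utc)
    deleted = []
    for vm in vms:
        expires = vm.get("expires_at")
        if expires is not None and expires <= now:
            delete_vm(vm["name"])  # auto-delete the expired VM
            deleted.append(vm["name"])
    return deleted
```

A job scheduler would run a sweep like this periodically; VMs with no `expires_at` set simply never expire.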

For all the details check out the public beta on Connect.

The Bigger Picture: Virtualization + Management

by Shijaz Abdulla on 01.11.2010 at 00:52

“Virtualization without management is more dangerous than not using virtualization in the first place.”
– Tom Bittman, Gartner VP & Analyst

So you realized that you need to virtualize. The idea of being able to run multiple workloads on a smaller number of boxes sounded interesting to you. You saw how virtualization can save you many a buck in hardware maintenance, energy, cooling, rack space, and were fascinated by it.

Server consolidation – now that’s a term you liked to hear. Sounds like it’s going to simplify things, doesn’t it? The idea of putting more eggs in one basket. Yes, it reduces cost, but how are you going to ensure that these baskets are strong enough to hold your eggs and that they won’t break under pressure?

So you let the ‘Hypervisor wars’ begin – Hyper-V or VMware – you decided and chose your weapon. “What next?” you say. Well, the battle has not yet begun. Today the hypervisor is more of a commodity: whichever you choose, it will let you virtualize – the art and science of creating a thin layer that abstracts operating system environments from the underlying hardware.

Some hypervisors provide more features than others. The choice is simple. What matters to you is (1) which of these hypervisor features you really need for your business, and (2) whether the features you get are worth the cost and complexity that particular hypervisor brings with it.

I am of the view that any technology you implement should contribute to the business of your organization. If it does not support the business, then that technology is useless to your organization. If you, for example, chose VMware to virtualize your (otherwise primarily Microsoft) datacenter just because it offers ‘memory overcommit’ – a feature you will probably never use in the first place, because it is not recommended for production – then you know exactly what I’m talking about.

You could go online and spend hours scouring pages and pages of information comparing hypervisors from Microsoft and VMware, but what you’re looking at is basically a piece of code between 1.7 and 3.6 GB in size.

So what does really matter?

What really matters is Management. The ability to manage and monitor every service you offer in your datacenter, end-to-end, physical or virtual. What your users see is not the hypervisor; it is way beyond that – the users see the service that you’re offering. And a robust management tool helps you ensure that the services you offer are healthy so that you meet your SLAs.

Imagine being able to look at one single monitoring dashboard that will proactively alert you to problems on hardware, operating system environments, virtualization layers, apps running on physical servers, and apps running on virtual servers. Imagine being able to look at one interface to discover that your hardware is overheating, or that your server power supply is not redundant, and then look at the same interface to discover that there is a shortage of disk space on your physical host server or that an application service has stopped. Imagine looking at the very same interface to know that the outbound mail queue on your Exchange Server machine that you virtualized on Hyper-V (or VMware) is building up faster than it should, or discovering that a service on one of your Linux servers virtualized on Hyper-V is failing.

That, my friends, is what I call robust end-to-end management. Microsoft System Center provides you just that. Don’t virtualize without it.

Let’s take one step forward. Let’s say you’re a local bank and you have virtualized your web servers on Hyper-V. You’ve deployed System Center components for managing your gear. Let’s say that you get most hits on your website during the day. During your “peak” operating hours you need 3 machines in a load balancing configuration to handle the load. During “off peak” hours you barely have any traffic, so all you need is one server. In the absence of virtualization or management, you would still leave 3 physical machines running 24/7 to handle the load.

But when you have virtualization with System Center, things are different. System Center Operations Manager is monitoring your servers (physical and virtual) 24/7. You can configure System Center to raise an alert when the number of transactions on your application running on IIS on the first server exceeds a threshold ‘x’, and trigger an event that automatically starts the second virtualized web server, and the third, and so on as the number of transactions increases. Similarly, when the number of transactions drops, the additional servers can be powered off automatically, freeing up processor, memory, and other resources on the host machine, which can potentially be used by other services that need additional servers powered up during ‘off peak’ hours. Hence, you are able to run more servers than your Hyper-V host machine’s capacity would otherwise allow, by dynamically provisioning and de-provisioning servers and efficiently utilizing your resources – all because your management tool can now see inside your virtual machines. What this means, basically, is that you get ‘more bang for the buck’.
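The scale-out rule described above boils down to a simple capacity calculation: given the current transaction rate and how much load each web server can absorb, decide how many virtual machines should be running. A rough Python sketch of that decision step follows; the per-VM capacity, the headroom factor, and the function itself are hypothetical illustrations – in the real setup, the Operations Manager alert and its triggered task play this role:

```python
def plan_capacity(transactions_per_sec, max_vms,
                  per_vm_capacity=100.0, headroom=0.8):
    """Decide how many web-server VMs should be running.

    Scale out when the load exceeds `headroom` (80%) of the combined
    capacity of the running VMs; scale in when fewer VMs would still
    absorb the load comfortably. Returns the target VM count,
    clamped to the range [1, max_vms].
    """
    # Find the smallest VM count whose combined capacity covers the
    # load while keeping the required headroom.
    needed = 1
    while needed * per_vm_capacity * headroom < transactions_per_sec:
        needed += 1
    return max(1, min(needed, max_vms))
```

A monitoring loop would call this periodically and start or stop VMs to match the returned count – starting them when load rises, stopping them (and freeing host resources) when it falls.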

And this is what I’d call a ‘Dynamic Datacenter’. And that’s where we’re taking you with System Center and virtualization. We’re not arguing over who’s got the smallest hypervisor; we’re giving you the much bigger picture and what really matters to you and your datacenter at large.

VMware’s vCenter, on the other hand, does not see “inside” the guest. It cannot monitor the number of connections/sec on your web service, or the length of your Exchange mail queue or the number of transactions on your database. It just sees your VM from a hypervisor perspective and does not know how the application on that VM is performing (or even if it is running). And that’s not sufficient from a service level perspective.

Even if you run VMware, System Center can still work together with it and manage your VMware environment – but of course this is an integration with vCenter. You still have an island of a management tool that you’re joining together with System Center. When it’s an all-Microsoft platform, you definitely have the Microsoft advantage. Everything’s integrated by default and everything works.

Thanks for reading and be sure to subscribe to this blog for more to come.

Dynamic Datacenter Workshop, Dubai

by Shijaz Abdulla on 01.02.2010 at 15:03

Last week I was in Dubai for the Dynamic Datacenter workshop along with two partners from Qatar – Mannai Trading and EBLA Consulting.

Among Qatar attendees, we had Pradeep Joy and Johny John from Mannai Trading and Bashar Badr from EBLA.

The course provides IT professionals with the knowledge and skills necessary to install and configure the underlying components of a Microsoft Dynamic Data Center solution. Partners were trained on how to onboard a proof of concept using the DDC Toolkit. The products covered include Windows Server 2008 R2 Hyper-V, System Center Operations Manager, Configuration Manager, Data Protection Manager, and Virtual Machine Manager.

The trainer was Jeffrey Roach from Wright & Robbins (a Microsoft vendor based in Seattle) and it was four days packed with excitement and interaction. Jeff is now in Prague, delivering the same workshop to our European partners.

DDTK-H Workshop

While we were in Dubai, we also got a chance to attend a separate Unified Communications Sales training, which included a hands-on user experience of Microsoft UC solutions.
