by Shijaz Abdulla
on 31.01.2013 at 15:45
Microsoft, in partnership with Qatar Datamation Systems and HP, is organizing an IT Pro bootcamp in Qatar on February 27–28, focusing on Windows Server 2012 Virtualization and Management.
This much-awaited bootcamp has limited space and seats are filling up quickly. Please register soon, and be on time at the venue to avoid disappointment!
Please join us for our upcoming Windows Server 2012 and Infrastructure Management IT Pro Camp. Microsoft IT Pro Camps are expert-led, no-cost, hands-on training events for IT professionals, centered on the issues and workloads you’re tackling in your environment today.
At Windows Server 2012 & Infrastructure Management IT Pro Camps you’ll gain deep technical insight into the new features and functionalities of Windows Server 2012. Hands-on demos and interactive discussions with Microsoft technical experts will cover a variety of topics including Server Virtualization, Storage, Networking, Server Management and Automation, Identity and Access, Virtual Desktop Infrastructure, and Web and Application Platform.
27th – 28th February 2013
8:30 AM – 5:30 PM
Ritz Carlton Hotel
by Shijaz Abdulla
on 24.01.2013 at 09:34
This week, NEC unveiled a virtual switch for the Windows Server 2012 Hyper-V hypervisor, designed to bring OpenFlow-based software-defined networking and network virtualization to Microsoft environments.
The NEC ProgrammableFlow PF1000 provides a single control plane for integrating server and network virtualization in Windows Server 2012 Hyper-V deployments. This integration is designed to enable network automation, more rapid delivery of network services, VM mobility and consistent application of business policy across the network.
The PF1000 supports 1,280 ports per switch (combining virtual and physical switch ports) and up to 260,000 flows. It supports OpenFlow 1.0 and, according to NEC, can work with OpenFlow-enabled switches from any vendor. NEC also claims the PF1000 is the first virtual switch to run OpenFlow.
At the same time, NEC announced a ProgrammableFlow upgrade that includes IPv6 and OpenStack support.
Why this matters
VMware doesn’t offer OpenFlow on virtual switches, as Roy Chua of SDNCentral points out in his analysis. (It does have a workaround, as detailed on the IP Space blog.)
So, this is actually a nice announcement for Microsoft, giving Hyper-V something to brag about versus VMware’s ESXi.
by Shijaz Abdulla
on 25.02.2012 at 16:47
Listen to technology leaders from Target, a US retailer with over 1,750 stores throughout the country, on how Microsoft Virtualization and Management technologies help them remotely manage the IT environment in each store, save millions of dollars in operating costs, and provide a delightful customer experience.
"It reduces our operating expense by millions of dollars a year through power savings, break/fix maintenance savings, and avoided capital refresh." – Brad Thompson, Director – Infrastructure Engineering, Target
Target runs over 15,000 virtual machines on over 3,600 Hyper-V hosts.
by Shijaz Abdulla
on 25.02.2012 at 11:43
IDC has predicted that 2012 will be VMware’s last year as ‘King of the Hill’.
With Windows Server 8, Hyper-V beats VMware not only on pricing but also on features. Even if VMware lowered its pricing, Hyper-V would still have features that VMware doesn't.
And do not forget that System Center (the current version as well as 2012) has more mature and complete management features than VMware ever had, and the key to meaningful virtualization or realizing a private cloud lies in robust management tools.
MVP Aidan Finn wrote on his blog with a touch of humor:
And don’t forget that System Center (current and future) smack VMware’s “management” products around like a one-legged little person in a heavyweight MMA fight.
VMware fanboys and trolls please save yourselves the trouble, comments on this blog are moderated.
by Shijaz Abdulla
on 20.11.2011 at 09:19
Cloud Computing! It’s one of the biggest opportunities for IT Professionals in recent years. But wouldn’t it be great if there was a simple, effective way to get the skills and training you need to take advantage of this opportunity, and also get the recognition and rewards that you deserve?
This is where Microsoft can help give your career a boost.
Visit the Microsoft Virtual Academy training portal now and register to receive free and easy access to training for IT Professionals who want to get ahead in cloud computing. This content was developed by leading experts in the field, and the modules ensure that you acquire the essential skills and gain credibility as a cloud computing specialist in your organization.
MVA courses include:
- Introduction to SCVMM, Architecture & Setup
- Creating VMs, Templates & Resources in VMM
- Managing Windows Azure
- SQL Azure Security
- Identity & Access
- Data Security and Cryptography
by Shijaz Abdulla
on 20.10.2011 at 18:19
The Solution Incentives reward partners for driving sales of specific Microsoft solutions, chosen for their growth and market potential. The program creates opportunities for partners to build new sustainable revenue streams and increase their value to customers.
What partner types can participate?
Solution incentives are customer-segment and partner-type agnostic. To be eligible for incentives, partners must meet the eligibility requirements, and each registered opportunity must meet the criteria described in the Program Guide.
What if the Solution Partner also transacts the order?
Whether the Partner is only advising the customer, or advising and transacting, there will be no difference in the solution incentives calculation and payment.
For more information, check out the following documents:
Management and Virtualization
Application Platform (Microsoft SQL Server)
New to PSX? Check out the PSX resources here.
by Shijaz Abdulla
on 10.03.2011 at 11:16
If you pass any Virtualization exam between March 1, 2011 and June 30, 2011 you will receive a complimentary TechNet Subscription. First 1,000 participants only! Registration is required. T&C apply.
Click here for more info
by Shijaz Abdulla
on 12.01.2011 at 12:31
January 24, 2011
Venue: La Cigale Hotel, Doha
February 1, 2011
Venue: JW Marriott, Kuwait City
by Shijaz Abdulla
on 06.01.2011 at 22:57
The ‘cloud’ is definitely an often used (and misused) buzz word in today’s technology industry. So what exactly is a cloud? What is a cloud made of? Is it any different from hosting? These are some of the matters that I will address in this post.
So what is a cloud?
Wikipedia defines Cloud Computing as “internet-based computing, whereby shared servers provide resources, software, and data to computers and other devices on demand, as with the electricity grid. Cloud computing is a natural evolution of the widespread adoption of virtualization, service-oriented architecture and utility computing. Details are abstracted from consumers, who no longer have need for expertise in, or control over, the technology infrastructure “in the cloud” that supports them.” (retrieved Jan 6, 2011)
Let’s take a closer look and break it down a bit.
“…shared servers provide resources…”
So the cloud is made of shared servers working together in a manner that results in the abstraction of the underlying infrastructure from the user or the consumer.
The cloud is elastic, which means it can scale to any extent to help you manage utilization "spikes", just like an electricity grid. If your business application or website suddenly requires more resources or above-normal utilization because of that marketing campaign you just launched, the cloud will be able to provision and make resources available to you "on the fly" during your time of need, and then "de-provision" those resources when utilization is back to normal. Because the cloud abstracts the underlying infrastructure, this entire process is invisible to the consumer.
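The provision/de-provision cycle described above can be sketched in a few lines of Python. Everything here is illustrative: the capacity-unit pool, the thresholds and the class name are assumptions for the sketch, not any real cloud API.

```python
class ElasticPool:
    """Toy model of an elastic cloud: adds capacity during spikes,
    releases it when utilization returns to normal."""

    def __init__(self, min_units=1, max_units=10, unit_capacity=100):
        self.units = min_units              # currently provisioned capacity units
        self.min_units = min_units
        self.max_units = max_units
        self.unit_capacity = unit_capacity  # requests/sec one unit can serve

    def rebalance(self, demand):
        """Provision or de-provision so demand fits, invisibly to the consumer."""
        needed = max(self.min_units, -(-demand // self.unit_capacity))  # ceiling division
        self.units = min(self.max_units, needed)
        return self.units

pool = ElasticPool()
print(pool.rebalance(80))    # normal load: 1 unit
print(pool.rebalance(750))   # marketing-campaign spike: scales up to 8 units
print(pool.rebalance(90))    # spike over: back down to 1 unit
```

The consumer only ever calls `rebalance` implicitly, by generating load; which and how many units serve that load is hidden behind the abstraction.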
“…a natural evolution of the widespread adoption of virtualization, service-oriented architecture and utility computing.”
By now, you will have realized it. If you need shared servers working together, abstracted from the user, dynamically scalable to any business demand – you need virtualization. But does simply having the leanest, meanest hypervisor on the market help you implement the cloud? No. It is just as important that you have a robust management solution. If your abstracted infrastructure cannot understand what a utilization spike on your application looks like, how will you be able to provide "on demand" services to your users? If your cloud infrastructure does not have visibility into the health of your 'service', how can it predict or understand a need to scale up dynamically?
Without doubt, management is an indispensable component of the cloud. I explained this in greater detail in an earlier post.
This is why System Center, with components like Operations Manager, Virtual Machine Manager and Opalis are key players in your journey to hosting your own ‘private’ cloud.
“Details are abstracted from consumers, who no longer have need for expertise in, or control over, the technology infrastructure ‘in the cloud’ that supports them”
This re-affirms the abstraction of the underlying infrastructure. The business does not need to know what hardware, operating environment or hypervisor you're running on. All the business cares about is the 'service'. To ensure availability of the 'service' at whatever scale the business dynamically requires, abstracting everything else is a key characteristic of the cloud.
Hosting vs. Cloud:
So is the cloud what my hosting provider offers me?
Well, it depends. Many hosting providers today state that they bring you the cloud. In reality, some of them actually do; others don't. The key message here is that mere server hosting is not cloud. Only when the benefits I discussed above are realized do we have a cloud.
If your "cloud" hosting provider states something like they will give you a 'dedicated' HP blade server with a 2.5 GHz processor, 4 GB RAM, 80 GB SAN storage, 80 GB backup storage, a dedicated Cisco firewall and 1 TB of monthly traffic included – chances are they have missed the cloud by a mile!
Why? Because they are simply not providing you a cloud – shared servers that provision resources on demand. Instead, they are just giving you a hosted server. There is no elasticity, no dynamic resource provisioning and no abstraction. In a real cloud, you wouldn't know what hardware spec you're running on, simply because it doesn't remain constant – just as your business doesn't remain constant.
Interesting. So why should I care about the cloud?
My colleague Michael Mansour lists the top 10 reasons why the cloud is changing the consumer and business landscape. His post is definitely worth a read.
‘Stop Press’ Humor: Wikipedia also defines ‘cloud’ as a visible mass of water droplets or frozen ice crystals suspended in the atmosphere. Certainly not the cloud we’re talking about!
by Shijaz Abdulla
on 01.11.2010 at 00:52
“Virtualization without management is more dangerous than not using virtualization in the first place.”
– Tom Bittman, Gartner VP & Analyst
So you realized that you need to virtualize. The idea of being able to run multiple workloads on a smaller number of boxes sounded interesting to you. You saw how virtualization can save you many a buck in hardware maintenance, energy, cooling, rack space, and were fascinated by it.
Server consolidation – now that's a term you liked to hear. Sounds like it's going to simplify things, doesn't it? The idea of putting more eggs in one basket. Yes, it reduces cost, but how are you going to ensure that these baskets are strong enough to hold your eggs and that they won't break under pressure?
So you let the ‘Hypervisor wars’ begin – Hyper-V, VMware, you decided and chose your weapon. “What next?” you say. Well, the battle has not yet begun. Today, the hypervisor is more like a commodity, whatever you choose, it will let you virtualize – the art and science of creating a thin layer that abstracts operating system environments from the underlying hardware.
Some hypervisors provide more features than the others. The choice is simple. What matters to you is (1) which of these hypervisor features do you really need for your business, and (2) is the feature that you get worth the cost and complexity that particular hypervisor brings with it?
I am of the view that any technology you implement should contribute to the business of your organization. If it does not support the business, then that technology is useless to your organization. If, for example, you chose VMware to virtualize your (otherwise primarily Microsoft) datacenter just because it offers 'memory overcommit', a feature which you will probably never use in the first place (because it is not recommended for production), then you know exactly what I'm talking about.
You could go online and spend hours scouring pages and pages of information comparing hypervisors from Microsoft and VMware, but what you’re looking at is basically a piece of code between 1.7 and 3.6 GB in size.
So what does really matter?
What really matters is Management. The ability to manage and monitor every service you offer in your datacenter, end-to-end, physical or virtual. What your users see is not the hypervisor, it is way beyond that – the users see the service that you’re offering. And a robust management tool helps you ensure that services you offer are healthy so that you meet your SLA.
Imagine being able to look at one single monitoring dashboard that will proactively alert you of problems in hardware, operating system environments, virtualization layers, apps running on physical servers, and apps running on virtual servers. Imagine being able to look at one interface to discover that your hardware is overheating, or your server power supply is not redundant, and then look at the same interface to discover a shortage of disk space on your physical host server, or that an application service has stopped. Imagine looking at the very same interface to learn that the outbound mail queue on the Exchange Server machine you virtualized on Hyper-V (or VMware) is building up faster than it should, or to discover that a service on one of your Linux servers virtualized on Hyper-V is failing.
That, my friends, is what I call robust end-to-end management. Microsoft System Center provides you just that. Don’t virtualize without it.
Let’s take one step forward. Let’s say you’re a local bank and you have virtualized your web servers on Hyper-V. You’ve deployed System Center components for managing your gear. Let’s say that you get most hits on your website during the day. During your “peak” operating hours you need 3 machines in a load balancing configuration to handle the load. During “off peak” hours you barely have any traffic, so all you need is one server. In the absence of virtualization or management, you would still leave 3 physical machines running 24/7 to handle the load.
But when you have virtualization with System Center, things are different. System Center Operations Manager is monitoring your servers (physical and virtual) 24/7. You can configure System Center to raise an alert when the number of transactions on your application running on IIS on the first server exceeds a threshold 'x', and to trigger an event that automatically starts the second virtualized web server, then the third, and so on as the number of transactions increases. Similarly, when the number of transactions drops, the additional servers can be powered off automatically, freeing up processor, memory and other resources on the host machine, which can potentially be used by other services that require additional servers to be powered up during 'off peak' hours. Hence, you are able to run more servers than the capacity of your Hyper-V host machine by dynamically provisioning and de-provisioning servers and efficiently utilizing your resources, because your management tool can now see inside your virtual machines. What this means, basically, is that you get 'more bang for the buck'.
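The scale-out logic just described can be sketched in plain Python rather than actual Operations Manager rules. The threshold value, the `start_vm`/`stop_vm` callbacks and the transaction counter are hypothetical stand-ins for what System Center would monitor and the automated actions its alerts would trigger.

```python
SCALE_OUT_THRESHOLD = 1000   # transactions/sec one server handles (assumed value)
MAX_SERVERS = 3              # e.g. the bank's three load-balanced web servers

def required_servers(transactions_per_sec):
    """How many web servers the current load calls for (at least one)."""
    needed = max(1, -(-transactions_per_sec // SCALE_OUT_THRESHOLD))  # ceiling division
    return min(MAX_SERVERS, needed)

def rebalance(current, transactions_per_sec, start_vm, stop_vm):
    """Start or stop virtual web servers to match the measured load.
    start_vm/stop_vm stand in for the actions an alert-triggered event would run."""
    target = required_servers(transactions_per_sec)
    for i in range(current, target):
        start_vm(i)          # peak hours: bring another VM online
    for i in range(current, target, -1):
        stop_vm(i - 1)       # off-peak: power off and free host CPU and memory
    return target

actions = []
rebalance(1, 2500,
          lambda i: actions.append(("start", i)),
          lambda i: actions.append(("stop", i)))
print(actions)               # two extra VMs started to absorb the peak
```

The point of the sketch is the direction of the trigger: the decision comes from an application-level counter inside the guest, which is exactly the visibility a hypervisor-only view does not give you.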
And this is what I’d call a ‘Dynamic Datacenter’. And that’s where we’re taking you with System Center and virtualization. We’re not arguing over who’s got the smallest hypervisor; we’re giving you the much bigger picture and what really matters to you and your datacenter at large.
VMware’s vCenter, on the other hand, does not see “inside” the guest. It cannot monitor the number of connections/sec on your web service, or the length of your Exchange mail queue or the number of transactions on your database. It just sees your VM from a hypervisor perspective and does not know how the application on that VM is performing (or even if it is running). And that’s not sufficient from a service level perspective.
Even if you run VMware, System Center can still work together with it and manage your VMware environment – but of course this is an integration with vCenter. You still have an island of a management tool that you’re joining together with System Center. When it’s an all-Microsoft platform, you definitely have the Microsoft advantage. Everything’s integrated by default and everything works.
Thanks for reading and be sure to subscribe to this blog for more to come.