The hype has been huge. The benefits, unbelievably good. But as loudly as we might sing its praises, it’s time for us to stop for a minute and admit something about virtualization: it’s harder to manage than we initially thought. Okay, it’s much harder.

If you’ve gone virtual (and most of us have, at least partially), you’ve probably already received a morning-after reality check: the difficulties of tracking roving virtual machines (VMs), monitoring trends in VM usage levels and containing virtual sprawl. Through new technology and best practices, we’ve made some respectable progress in these areas. But an equally steep challenge lies ahead on the virtualization horizon: maintaining application performance.

For many virtualization adopters, applications have been an afterthought. Most IT teams have rushed to virtualize servers without considering the long-term impact of their actions. Few have paused to think about how the applications running on their VMs are actually performing from an end-user perspective. Yet, this is something we can’t ignore. Consider how much revenue is at risk when Web-based applications crash. For one of the world’s largest e-commerce businesses, going dark for just one minute can amount to tens of thousands of dollars in missed purchases. If the outage lasts one hour, revenue losses for the company could be millions of dollars.

We don’t have a lot of time to figure out how to manage the applications running in VMs. Server virtualization is advancing at a breakneck pace, with no slowdown in sight. According to Forrester Research, virtual servers are expected to represent close to one-third of application deployments by 2011. According to the IT Process Institute, 64 percent of organizations that are aggressively pursuing virtualization initiatives are already comfortable virtualizing business-critical systems. Before long, virtualization will touch every part of your organization, including the applications that are directly responsible for your business’s revenue stream. We simply cannot afford to get so enamored with the technology of virtualization that we ignore end users. At the end of the day, we need to be able to answer questions like, “What is happening now that prevents my customers from placing orders?” or, better yet, “What is going to happen next?”

Born to Run

IT’s fast-track attitude toward server virtualization reminds me of the lyrics to the Bruce Springsteen song “Blinded by the Light.”*

Yeah he was blinded by the light

Cut loose like a deuce another runner in the night…

To some extent, most of us have been blinded by the hype. Part of our surging excitement can be attributed to the way virtualization has been (re)packaged as a shiny, new “disruptive” technology. And then there are the benefits, which seem too good to pass up. After all, who would say no to consolidating 15 underused servers into one? And what about how quickly virtual servers can be deployed? Or the fact that they can be moved to accommodate shifting workloads?

Virtualization has given many organizations more bang for their server bucks. Forrester Research reports that server consolidation via virtualization can save up to 50 percent in hardware costs, and even more if you factor in lower energy consumption and improved disaster recovery.

This is all well and good, but if the applications running on these VMs aren’t performing, the cost-saving advantages of virtualization really don’t matter.

Worlds Apart

How well applications run in a virtual environment depends, in part, on virtual infrastructure performance. But to manage this effectively, organizations need different capabilities than were necessary in a physical environment.

I might think I know how to manage the performance of a virtual server, but once I deploy several, I will discover something very important: a virtual server is not just another server. I can’t run any application I want on it for one simple reason: it’s not a dedicated resource. Virtual servers share physical host machines with other guests, so the one-to-one ratio of a physical environment no longer holds. While the host may have abundant resources to support a VM, there’s no guarantee that the workload running in the VM will perform as well as it would in a purely physical environment (where it would be running on its own dedicated physical server). For example, I can decide to place two guests with an average use of 50 percent on the same host, but when their usage levels simultaneously peak at 75 percent, one of the VMs will have to wait while the other’s workload finishes—or possibly both will suffer performance degradation, impacting the applications running within the VMs.
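
To make the arithmetic concrete, here is a minimal sketch in Python. The normalized host capacity and the utilization figures simply restate the hypothetical example above; they are not measurements from any real platform:

```python
# A back-of-the-envelope check of the example above: two guests whose
# average demand fits the host can still oversubscribe it at peak.

HOST_CAPACITY = 1.0  # normalized CPU capacity of the physical host

guests_average = [0.50, 0.50]  # average demand per guest (fraction of host)
guests_peak = [0.75, 0.75]     # simultaneous peak demand per guest

def headroom(demands, capacity=HOST_CAPACITY):
    """Spare host capacity; a negative value means contention."""
    return capacity - sum(demands)

print(f"Headroom at average load: {headroom(guests_average):+.2f}")  # +0.00, just fits
print(f"Headroom at peak load:    {headroom(guests_peak):+.2f}")     # -0.50, guests contend
```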

Thanks to hypervisors, we have less guesswork in this area than we had five years ago. Not only do these tools track how much of the physical host’s hardware resources each VM is using, but they can also pool those resources and allocate a pro rata share of the host’s capacity among multiple guest machines.
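
As a rough sketch of what pro rata allocation means, the snippet below splits a host’s CPU among guests in proportion to assigned shares. It is a deliberately simplified model in the spirit of share-based hypervisor scheduling, not any vendor’s actual algorithm, and the host capacity and share values are hypothetical:

```python
def pro_rata_allocation(host_mhz, shares):
    """Divide a host's CPU (MHz) among guests in proportion to their shares.

    A simplified model of share-based scheduling: real hypervisor
    schedulers also honor reservations, limits and actual demand.
    """
    total = sum(shares.values())
    return {vm: host_mhz * s / total for vm, s in shares.items()}

# Hypothetical host capacity and share assignments
print(pro_rata_allocation(
    host_mhz=10_000,
    shares={"web-vm": 2_000, "db-vm": 4_000, "batch-vm": 2_000},
))
# -> {'web-vm': 2500.0, 'db-vm': 5000.0, 'batch-vm': 2500.0}
```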

This sounds good, at least until we run into the all-too-common scenario of having to manage virtualization tools from multiple vendors.

Different people from different departments with different perspectives and different responsibilities use different virtualization tools. (Case in point: one department may concentrate on performance and availability, another may manage assets and a third may focus on capacity.) According to research from Enterprise Management Associates (EMA), 98 percent of enterprises are using multiple virtualization platforms, technologies and vendors: on average, three to four of each, and as many as 20 in total. A single organization can have hypervisors from VMware, Microsoft, Citrix and other vendors scattered throughout its virtual environment. Inevitably, these diverse platforms will intermingle in the pursuit of specific initiatives.

But let’s assume that my hypervisors are interoperable. Let’s also assume that I can monitor and analyze the performance of my virtual infrastructure. What does this mean for the applications that are running on my VMs? As long as I tend to the needs of my hypervisor, my VMs should have no application performance problems, right?

Not necessarily. It’s entirely possible for 1) the hypervisor to show me that all is well with the CPU, memory, disk and network of a VM, while 2) the end user of an application running on that same VM is experiencing problems. This scenario is plausible because a hypervisor does not provide insight into what is happening within a VM’s guest operating system.
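
The blind spot is easy to demonstrate. The sketch below pairs hypothetical host-side counters (which look healthy) with an end-user-style application check against an invented health URL; only the second test would catch a crashed application process:

```python
import urllib.request

# Hypothetical hypervisor-level counters for one VM: everything looks fine.
hypervisor_view = {"cpu_pct": 35, "mem_pct": 60, "disk_latency_ms": 4}

def host_metrics_ok(metrics):
    """Host-side thresholds: necessary for health, but not sufficient."""
    return (metrics["cpu_pct"] < 85
            and metrics["mem_pct"] < 90
            and metrics["disk_latency_ms"] < 20)

def app_responds(url, timeout=3):
    """End-user-style check: does the application actually answer?"""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # A crashed process or hung guest OS fails here even while the
        # hypervisor's counters above still look perfectly healthy.
        return False

print("Hypervisor says healthy:", host_metrics_ok(hypervisor_view))
print("Application answering:  ", app_responds("http://app.example.internal/health"))
```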

Indeed, managing virtualization is a lot harder than we originally thought.

I can buy a hypervisor server and invest substantial effort, energy and expense into making sure I can back up and recover the hypervisor. But I still won’t have the foggiest insight into whether an application process has just crashed on a VM. Nor will I know if the VM is located in Calgary, Canada, or Asbury Park, New Jersey.

Unfortunately, most IT organizations base their infrastructure and performance decisions on a shortsighted view of the heterogeneous virtualized environment. According to an August 2009 study by EMA, 75 percent of administrators are either using legacy physical tools with no insight into virtual machines or virtualization-specific tools with no insight into physical capacity and performance.

Better Days

Clearly, we need to manage virtual and physical systems together. Today, according to EMA, only 14 percent of administrators use tools that are purpose-built for managing both physical and virtual infrastructures. But this is undoubtedly the direction in which we need to go, because the 100 percent virtual data center won’t appear anytime soon. With an average of only 25 to 30 percent of most enterprise infrastructures currently virtualized, EMA predicts that physical infrastructures will remain dominant through at least 2010.

Seeing inside the VM is just as important as seeing outside of it. But if we are to get a clear picture of IT service and performance, this visibility must extend beyond what we need to know to manage VM resource use. EMA reports that IT’s true quality of service is determined by end-to-end performance, including system performance, application performance, network latency and end-user response times. The physical and the virtual, host and guest, system and application, all need to be monitored and managed in an integrated fashion.

This approach is vital for IT organizations that want to increase their virtualization maturity. IT managers who want to use virtualization to accelerate service delivery need an enterprise-wide view of each business service. Typically, this service view would incorporate multiple applications from widely dispersed locations. So, in terms of managing application performance in a virtual environment, it isn’t enough to know how well a host machine is performing or whether an application is running successfully inside a virtual machine; we must also be able to track the big picture: a set of services running in combination, as a business process, across different VMs.
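
One way to picture such a service view is as a mapping from a business process down through its applications to the VMs and hosts they run on. The sketch below uses invented names throughout; a real service model would be populated from discovery and configuration data:

```python
# Invented service map: one business process spans applications that run
# in VMs scattered across hypervisors and physical hosts.
service_map = {
    "order-processing": {
        "storefront-app": {"vm": "web-vm-01", "host": "esx-ny-03"},
        "payment-app": {"vm": "pay-vm-02", "host": "hyperv-tx-01"},
        "inventory-app": {"vm": "inv-vm-07", "host": "xen-ca-05"},
    },
}

def services_hit_by_host_failure(host, services=service_map):
    """Business services that lose a component if this physical host fails."""
    return sorted(
        name for name, apps in services.items()
        if any(loc["host"] == host for loc in apps.values())
    )

print(services_hit_by_host_failure("hyperv-tx-01"))  # ['order-processing']
```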

It’s pretty clear that we can’t manage virtualization in slices. Instead, we need to start developing strategies for managing our infrastructure and applications throughout the enterprise, across both virtual and physical environments, for the ultimate benefit of business users. Hypervisors can help, but on their own they won’t support virtualization as a service delivery strategy.

We need to take stock of what we have today and take the (sometimes painful) steps necessary for properly positioning ourselves for the long years of managing mixed environments. Research indicates that few organizations are taking this route. In fact, according to EMA, less than 15 percent of organizations currently use heterogeneous tools for managing physical and virtual environments together.

To truly succeed at virtualization, we must start thinking in terms of processes and services (rather than just devices), expanding our infrastructure perspective with end-user views. We need to explore holistic solutions (instead of point products) that support our integrated service view and help us manage our virtual and physical domains seamlessly.

It won’t be easy, but as long as we broaden our perspective and choose solutions that can help answer our everyday questions — about any area in our service-oriented purview — we’ll succeed in securing end-user satisfaction.

Sure, it’s a challenge, but I think we’re up to it. If the “blinded” characters in Springsteen’s song can eventually recover their bearings, so can we. The virtualized world offers some pretty tough terrain. But like The Boss says, we’ll make it alright.

*Lyrics © Bruce Springsteen (ASCAP)
