The data center has come a long way. Banks once had farms of servers that operated independently, each dedicated to one or two applications and typically utilized at 20% or less. Virtualization, grid computing and information technology management software have since made it possible to harness all that hardware into a shared computing environment, in which applications run across multiple machines or several smaller apps share a single machine. Either way, software dictates where each process runs, unburdened by concerns about the underlying hardware.

In the next phase of data center evolution, manufacturers of network switches, servers, storage units and even database and virtualization software are working more closely together to make their products integrate harmoniously in what is sometimes called the "data center fabric." Products that fit this fabric model include Cisco's Unified Computing System, Hewlett-Packard's CloudSystem Matrix and Dell's Virtual Integrated System.

"We're extending the boundaries outside the shell or skins of the computer system, to its associated networking and storage, and calling that the common integrated environment or converged infrastructure," says George Weiss, vice president and analyst at Gartner, explaining the term "data center fabric," which he says Gartner coined.

In a data center fabric, one administrative console can be used to manage all the hardware as a compatible set of resources rather than as individual pieces that each need to be configured and managed separately. This management software is particularly tricky to develop, Weiss says. "Consider the complexity and some of the challenges that are part of this process," he says. "You need to have very intelligent software that knows precisely what the resources are out there, and how much memory, CPU, I/O, networking bandwidth, storage capacity and so on is available. Then you have to understand the apps and what demands they have for resources to run at a certain service level where you get specific response times."
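The placement problem Weiss describes — software that knows each machine's spare CPU, memory and bandwidth, and matches an application's demands against them — can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's actual scheduler; the class and field names are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Host:
    """Spare capacity the fabric's management layer has discovered on one machine."""
    name: str
    cpu_free: int         # free CPU cores
    mem_free_gb: int      # free memory, GB
    net_free_gbps: float  # spare network bandwidth, Gbps

@dataclass
class AppDemand:
    """What an application needs to hit its service level."""
    name: str
    cpu: int
    mem_gb: int
    net_gbps: float

def place(app: AppDemand, hosts: list) -> Optional[Host]:
    """Return the first host with enough spare CPU, memory and bandwidth,
    or None if the demand cannot be satisfied anywhere in the pool."""
    for h in hosts:
        if (h.cpu_free >= app.cpu and
                h.mem_free_gb >= app.mem_gb and
                h.net_free_gbps >= app.net_gbps):
            return h
    return None

pool = [Host("blade-1", cpu_free=2, mem_free_gb=8, net_free_gbps=1.0),
        Host("blade-2", cpu_free=8, mem_free_gb=32, net_free_gbps=10.0)]
chosen = place(AppDemand("core-banking", cpu=4, mem_gb=16, net_gbps=2.0), pool)
# blade-1 lacks the CPU, so the workload lands on blade-2
```

A production scheduler would weigh many more constraints (storage locality, licensing, availability zones), but the core loop — compare declared demand against discovered supply — is the same.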

The data center fabric also requires a naming convention, a protocol and the ability to distribute and virtualize workloads, Weiss says. There's also a need for abstraction from the physical bare metal to the actual services delivered.

Software plays a major role in the discovery, virtualization, automation and orchestration of the hardware, Weiss says. "It would be nice if some of the software offered by companies could paper over the incompatibilities that the hardware presents, so that the app doesn't care where it's running; it could be Dell hardware or HP. And if it doesn't have all the resources it needs there, they can be moved easily and fluidly to where they are needed, with the same security and availability that's required by the end users."

The challenges are especially great in banks, where legacy software and hardware abound. "Try moving some core banking applications that were written in COBOL on to x86 products with Linux," Weiss says. "If the apps have been written in a very proprietary way to a specific hardware architecture, those are the hardest to tear away from the integrated systems they're running on. If you have existing technologies that can't be integrated within a new framework, then you either begin anew or you create another silo of sorts, and operate this new architecture side by side with what you have."

Cisco, BMC Software, CA Technologies, IBM Tivoli and HP are all building such management layers, but not necessarily in a compatible way, Weiss says. "It's like an orchestra conductor - all the musical instruments have to be in tune, the musicians have to come in at precisely the right time so the work being performed is in harmony with the composer's intent," he says. "In this case, the orchestra is still missing a few instruments and is sometimes a little out of tune, so it's a little jarring to our ears sometimes."

Not all vendors are completely on board with the idea of an open, plug-and-play data center fabric. "If you create a common level playing field that all vendors can play in, then more of the components become commodity in nature and substitutable," Weiss points out. "As they become a commodity, the prices will shift down, margins will diminish, differentiation will be lost and the competition will be on my widget versus your widget, which are almost the same. But those vendors that veer too far in the other direction, and become more proprietary in nature, might miss potential business because users are reluctant to deploy one vendor's solution. The user will ask, what happens in five years if my vendor is no longer in a leadership position or able to deliver my needs?"

True openness involves programming languages, interfaces and software, not just hardware. "We're talking about multitiered architectures that have databases, applications, middleware, browsers and operating systems," Weiss says.

Standards would help. "If everything were standardized at the API level between storage, networking and CPU components and they all fit together like neatly formed Lego blocks, then it would be a very easy task for the user," Weiss says. Several enterprise user groups have formed to create interoperability standards and "frameworks" for data center products to work together. The Enterprise Cloud Leadership Council and the Open Data Center Alliance are two examples.
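Weiss's Lego-block analogy amounts to a shared interface that every vendor's component implements, so the management layer never codes against a specific product. The sketch below is a hypothetical illustration of that idea; the interface and vendor class names are invented, and no real standard is implied.

```python
from abc import ABC, abstractmethod

class StorageAPI(ABC):
    """Hypothetical vendor-neutral storage interface, the 'Lego stud'
    every component would expose under a common standard."""
    @abstractmethod
    def provision_volume(self, size_gb: int) -> str:
        """Create a volume of the given size and return its identifier."""

class VendorAStorage(StorageAPI):
    # One vendor's implementation behind the standard interface
    def provision_volume(self, size_gb: int) -> str:
        return f"vendorA-vol-{size_gb}gb"

class VendorBStorage(StorageAPI):
    # A competitor's implementation; internals differ, interface does not
    def provision_volume(self, size_gb: int) -> str:
        return f"vendorB-lun-{size_gb}gb"

def build_fabric(storage: StorageAPI) -> str:
    # The management layer depends only on StorageAPI, so either
    # vendor's block snaps in without changing this code.
    return storage.provision_volume(100)
```

This is exactly the substitutability Weiss warns cuts both ways: it makes the user's integration task easy, and it makes the components commodities.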

"But unfortunately, the state of the industry is not plug and play, although there are partnerships among some of those vendors to make their products work together," Weiss notes. "On the whole, the IT infrastructure is still too complex to declare that there's an overarching standard or set of standards in which everybody's technologies plug and play."

Data center fabrics could be a foundation for cloud computing, Weiss says. "Whether it's on premise or public, somehow cloud computing will comprise a set of resources," he says. "To the user, it should be concealed or transparent; the user shouldn't be concerned with the details of where the memory, computing and networking are. I just want to know that when I click a button and make a request, I get a response from an application that's processing somewhere among resources that are available to me, on my premises or off premises, like Amazon, or some combination of the two."

Cisco and several of its partners, including VMware, BMC, NetApp and Microsoft, say they have created a framework in which their products will work together.

HP also makes a claim for data center equipment interoperability. Dell is getting on the bandwagon and is expected to make some announcements in the first quarter of next year. Both are lining up partners or acquiring companies in storage and networking to provide similar types of solutions. "They may be different in terms of the technical details and philosophy, but in general, the Dells, HPs, Ciscos and IBMs are all pursuing this fabric-based infrastructure in some form," Weiss says.

Although the data center fabric is not pervasive yet, and the software to manage it is still being refined, Cisco and HP have sold a number of their fabric solutions into enterprises. "We're going up a staircase, and while we may be at the lower steps on that staircase, we feel there's an inevitable rise, a maturity, and that we'll see sophisticated approaches as we move out through 2015," Weiss says.

This originally appeared at Bank Technology News.
