Applications drive everything from running storefronts and managing supply chains to handling back-office operations and identifying new business opportunities. Yet today’s data center infrastructure is completely ignorant of application topology. IT admins typically dismember applications, deploying each component in a separate silo, which has often left hardware underutilized.

This dismemberment has also made the whole deployment process extremely complex, especially for modern Big Data applications, which have many moving parts. On top of the deployment complexity, administrators build walls around applications or application components to ensure application-specific SLAs.

This leads to tight coupling between an application and a server: applications are mapped directly to a physical machine or a virtual machine. The two predominant deployment models are bare-metal physical servers and clusters built out of virtual machines. Both approaches suffer from serious drawbacks.

Bare Metal Deployments Lead to Operational Inefficiencies and Ballooning Costs

Concerns about data privacy, or cost constraints when dealing with massive data volumes, lead many organizations to favor on-premises deployments for their data-driven distributed applications. The state of the art for Big Data applications, established by the flagship enterprises where these technologies were born, was to use bare metal and eschew virtualization. As these technologies are democratized to the broader enterprise segment, bare metal has become the de facto standard. This leads to a number of pain points:

Cluster sprawl: To ensure performance isolation, each application is deployed on a dedicated physical cluster, which is typically over-provisioned to accommodate bursts of activity. For the same application, separate physical clusters are provisioned for development, test, QA and production environments. This leads to ballooning infrastructure costs, inefficient resource utilization, and increased management complexity.

Unregulated data copies: Each physical cluster hosts its own copy of data. Apart from the obvious cost implications, data copy management raises its own unique set of challenges, including security, data governance and questions around the “single source of truth.” Development and test environments typically operate only on a subset of the production data, which leads to undesirable consequences of its own.

Poor operational agility: The bare-metal approach couples application deployment cycles with hardware procurement cycles. When a new application needs to be deployed, the process includes procurement and installation of hardware, followed by application installation, configuration, and tuning. This can add several months of delay to the application release cycle.

Lack of scalability: When an application’s needs exceed the capacity of the existing hardware infrastructure, additional hardware must be procured, installed, configured and added to the existing physical cluster – a process that can take weeks. Even when spare capacity exists elsewhere in the data center, the process of reclaiming hardware to add capacity to an existing cluster can take days.

Virtual Machines – Short-Term Relief, But Not a Long-Term Solution

While hypervisors and virtual machines provide some relief from hardware underutilization and infrastructure management complexity, the larger issue of complex application management remains totally unaddressed.

In addition, virtual environments take a best-effort approach to meeting an application’s resource requirements, so wide variability in application performance is to be expected. A well-known blight for massively distributed applications is the “long-tail” phenomenon, where a small variation in the performance of one of the “n” identical tasks in a distributed application can have a large impact on overall response time. Virtual environments dramatically exacerbate the long-tail problem.
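To see how quickly the long tail compounds, consider an illustrative back-of-the-envelope calculation (a sketch only; the 1 percent slow-task probability and the task counts below are hypothetical numbers, not measurements):

```python
# Illustrative long-tail arithmetic (hypothetical numbers, not measured data).
# If each of n parallel tasks independently has probability p of running slowly,
# a request that must wait for all n tasks is slow with probability 1 - (1 - p)^n.

def prob_request_slow(p: float, n: int) -> float:
    """Probability that at least one of n independent tasks is slow."""
    return 1.0 - (1.0 - p) ** n

for n in (1, 10, 100, 1000):
    print(f"{n:5d} tasks, p = 1%  ->  request is slow {prob_request_slow(0.01, n):6.1%} of the time")
# With p = 1%: 1 task -> 1.0%, 100 tasks -> 63.4%, 1000 tasks -> ~100.0%
```

Even a small per-task slowdown probability, of the kind hypervisor overhead can introduce, turns into a slow response for the majority of requests once enough tasks run in parallel.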

Performance penalties: Highly distributed applications that deal with large volumes of data are highly sensitive to storage and network bandwidth and latency. These are precisely the pathways hit hardest by hypervisor overhead. It is not unusual to experience a 20 percent impact on storage and network latencies when using a hypervisor. This is a deal breaker, since performance is often a critical consideration for these applications.

An Application-Centric Approach

As innovation in the application space continues to accelerate, it is imperative to create an infrastructure that treats applications as first-class citizens. This means an enterprise should look not only to liberating applications from complex, time-consuming deployment processes, but also to ensuring isolation of system resources among all running applications through built-in monitoring.

With cloud adoption, the data center has become amorphous, spanning private and public data centers. Applications need to be portable, with the flexibility to move across data center boundaries at the click of a button.

New technologies such as containers enable an application-centric computing paradigm by unshackling applications from the underlying infrastructure. Just as a hypervisor abstracts the OS from the underlying hardware, containers abstract applications from the OS and everything underneath, leading to simplified application deployment and seamless portability across hardware and operating platforms.
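As a minimal illustration of that abstraction, the sketch below uses the Python Docker SDK (docker-py) to launch one application component as a container; the image, container name, port and resource limits are hypothetical, chosen only to show that the same call works on any host with a container runtime, bare metal or virtual:

```python
# Minimal sketch using the Python Docker SDK (docker-py); assumes Docker is
# installed and running. The image, name, port and resource limits below are
# illustrative, not a prescription for any particular product.
import docker

client = docker.from_env()  # connect to the local container runtime

# The container packages the application and its dependencies, so this call is
# independent of the host OS distribution and of whatever runs underneath it.
container = client.containers.run(
    "redis:7",                    # hypothetical application component
    name="catalog-cache",         # hypothetical container name
    detach=True,
    ports={"6379/tcp": 6379},     # expose the service port
    mem_limit="512m",             # per-application memory carve-out
    nano_cpus=1_000_000_000,      # roughly one CPU core
)
print(container.short_id, container.status)
```

The same image, unchanged, can be started on a developer laptop, a test cluster or a production server, which is the portability property described above.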

Containers can therefore herald an application-defined data center era, extending the gains of the software-defined journey all the way to what really matters — the applications!

Realizing that dream, however, requires extending the reach of containers beyond the niche of web applications to all classes of enterprise applications — including mission-critical databases and business applications. Containers have as much to offer these applications as they do web-scale applications, if not more.

For IO-intensive data applications, containers on bare metal provide a high-performance, lightweight alternative to traditional hypervisor virtualization, with potential performance gains of up to 50 percent. Containers also help consolidate multiple applications per machine, which maximizes hardware utilization. And just in case you thought hypervisors made server underutilization a thing of the past, think again.

The majority of your servers are probably still running at 10 percent to 30 percent utilization, and this is only going to get worse as processors become more and more powerful. (The latest Xeon processors support 22 cores per socket, which with hyper-threading goes up to 44 “virtual cores” per socket!) Most importantly, the simplicity and agility gains in application management are sorely needed for enterprise applications, which account for a significant part of the IT OPEX budget and can often be a roadblock to rapid business innovation and time to market.

It is also important to recognize that containers are just one element of the solution required to create an Application-Defined Data Center. Just as containers allow application-driven, fine-grained slicing of compute resources, storage subsystems also need to evolve to “understand” applications and manage resources at the application level — whether that application runs in a single container or is composed of many containers spread across multiple servers.

In spite of the rampant “container washing” by storage vendors, the fact remains that current storage subsystems are simply not designed to handle containerized workloads. They neither understand applications nor were they designed to operate at container scale and speed. This is currently a major roadblock to containers going mainstream.

A true application-centric solution must weave all these threads together to create an intelligent infrastructure platform that completely abstracts infrastructure boundaries, bringing simplicity, agility, and performance predictability to applications.

The success of digital business transformation hinges on the ease and pace at which business services can be delivered using applications and APIs. Modern applications are therefore fundamental to strategic business evolution. But without an application-centric mindset for infrastructure and data center management, most businesses will continue to struggle and will ultimately fail in this endeavor.

The need of the hour is to take the enterprise software-defined data center journey to its next logical level – that of the Application-Defined Data Center.

(About the author: Premal Buch is chief executive officer at Robin Systems, Inc.)
