No one denies that the volume of information traversing the Internet is growing at a rate faster than the rabbit population at your local pet store. A 2014 study conducted by Cisco attempted to quantify the volume of this activity and arrived at the following estimates:

- Annual global IP traffic will pass a zettabyte (that's a one followed by 21 zeros) this year and surpass 1.6 zettabytes by 2018
- By 2018 over half of that traffic will be generated by non-PC devices, the number of which will be double the global population

The other aspect of this continued escalation in data volume is the immediacy with which its consumers require it to be available. The demands of this combination of volume and speed will continue to act as stressors on existing data centers, forcing providers to examine how to keep data as close to end users as possible to enhance its value and minimize latency.

The Inefficiencies of the Traditional Data Center Structure

The nature of the data passing through the data center is evolving. IoT implementations generate billions of tiny bits of information from billions of devices, much of it requiring real-time processing on a continuous basis. Due to deficiencies in their design, construction and operation, many of today's existing facilities are simply not up to the challenges posed by these demanding processing requirements.

The Stratified Structure and New Roles for Data Centers

The underlying diktat in a stratified structure is to keep the data as close to the end user as possible to reduce or eliminate the negative impact of latency. From a structural perspective, a stratified architecture features one or more large centralized data center facilities that work in concert with multiple, smaller "edge" data centers located strategically in closer proximity to end users and, in the very near future, micro facilities that will operate in concert with specific edge sites.

Although they have yet to be implemented on a large scale, micro data centers (<150kW) will arise in response to increasing data volumes and latency "sensitivity", serving as the initial point of interaction with end users. In this role they will function as screening and filtering agents for edge facilities, moving information to and from a localized area.

More specifically, they will determine what data or "requests" are passed on to data centers above them in the hierarchy and also deliver "top level" (most common) content directly to end users themselves. The primary impact of this "localization" of delivery will be levels of latency below what is currently achievable.
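The screening role described above amounts to a tiered lookup: serve the most common content locally, escalate everything else up the hierarchy. A minimal sketch in Python (all names and content here are hypothetical illustrations, not drawn from any actual deployment) shows the idea:

```python
# Hypothetical sketch of a micro tier: answer requests for the most
# common "top level" content directly and forward everything else
# up the hierarchy to the regional edge site.

TOP_LEVEL_CONTENT = {              # most-requested items, cached locally
    "/home": "cached home page",
    "/logo.png": "cached image bytes",
}

def edge_fetch(path):
    """Stand-in for a round trip to the regional edge facility."""
    return f"edge response for {path}"

def micro_handle(path):
    """Serve locally when possible; otherwise escalate to the edge tier."""
    if path in TOP_LEVEL_CONTENT:
        return ("micro", TOP_LEVEL_CONTENT[path])   # lowest latency
    return ("edge", edge_fetch(path))               # one hop up
```

Only the cache miss incurs the longer round trip, which is precisely how the micro tier keeps latency for common content below what a distant facility can achieve.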

The next level within a stratified architecture is the edge data center. The specific function of edge facilities is to serve as "regional" sites supporting one or more micro locations. As a result, they perform the processing and cache functions within the parameters of what would be defined as the lowest acceptable level of latency for their coverage region. As the regional point of aggregation and processing, edge facilities will require a higher level of capacity and reliability than their connected micro sites.

The central facilities within a stratified structure house an organization's mission critical applications and serve as the central applications processing points (CAPP) within the architecture, with back-ups conducted on an infrequent basis relative to their connected edge data centers. Due to their level of criticality, the applications within a centralized data center are "non-divisible". From a practical perspective, this means that it is inefficient, in terms of both cost and overall system overhead, to run these functions in multiple facilities.

Division of Labor

Simply put, the primary efficiency of a stratified versus a centralized data center network architecture is based upon the historical concept of the division of labor. Each component has a specific role in the structure that promotes efficiency by reducing overhead and negating distance limitations.


As with most things in life, the 80/20 rule also applies to data centers and the operation of mesh architectures. In this case, the 80% of applications and data that are most accessed take up 20% of storage and network capacity, while the 20% least accessed consume the other 80%.

The important element of this particular equation is that, due to factors such as the fickleness of end users or seasonality, the content that makes up these ratios is continually changing. Due to their distributed structure, with each element serving in a prescribed capacity, stratified architectures are uniquely qualified to support these dynamic requirements, as opposed to centralized alternatives.
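Because the "hot" 20% is constantly changing, a tier's cache needs to adapt on its own rather than rely on manual re-tiering. A recency-based (LRU) eviction policy is one common way to achieve this; the sketch below is purely illustrative and assumes nothing about any specific vendor's implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: as end-user demand shifts, newly hot items
    displace formerly hot ones without any manual re-tiering."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None                     # miss: fetch from a higher tier
        self.items.move_to_end(key)         # mark as recently used
        return self.items[key]

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the coldest item
```

When a seasonal shift makes new content popular, inserting it simply pushes the now-cold content out; the cache's contents track the current 20% automatically.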


Data center security has always been about more than berms and mantraps, but as we are seeing on a far too regular basis, attacks on the content of our data centers can come from virtually anywhere in the world. In a stratified structure, security (both physical and cyber) is not only distributed but escalates at each tier as well.

Housing the organization's mission critical applications in the CAPP of the configuration makes them both easier to track and, more importantly, to defend. Multiple levels of protection—firewalls, access points, etc.—are built into the central facility to identify the initiation of an attack (denial of service, for example) and ensure that it is immediately thwarted.

Security is also enhanced in the stratified structure by the existence of the edge sites. Unlike a conventional architecture, in which a disaster recovery site would be called into action in the event of a cyber-attack (an action that may require precious time), the existence of multiple edge sites allows traffic to be re-routed or off-loaded from a site under siege. In short, security in a stratified configuration offers a host of defensive options that are not available in a centralized structure.
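The re-routing option can be as simple as excluding a besieged site from the routing decision. A hedged sketch of such a selection policy (the site names and the health model are hypothetical, not part of any real product):

```python
# Hypothetical routing policy: prefer the regional edge site, but skip
# any site currently under siege and fall through to the next one.

EDGE_SITES = ["edge-east", "edge-central", "edge-west"]

def route(preferred_site, under_siege):
    """Return the first healthy edge site, trying the regional one first.

    preferred_site: the edge site closest to the requester.
    under_siege: set of site names currently being attacked.
    """
    candidates = [s for s in EDGE_SITES if s == preferred_site] + \
                 [s for s in EDGE_SITES if s != preferred_site]
    for site in candidates:
        if site not in under_siege:
            return site
    return None  # no healthy edge site; escalate to the central facility
```

The key point is that the fallback happens at routing time, with no wait for a disaster recovery site to be brought online.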

Network

Many of today's mega facilities boast of the cornucopia of network options available within their POP rooms or even localized rings that connect their facilities within a region. While these attributes may be attractive to an end user seeking one or two facilities, they are insufficient when viewed from the perspective of stratified architectures, where the hub must support three specific network capabilities:

1) The ability to scale- Localized rings and even the most robust POP room are not designed to support multiple edge facilities. Traditional data center architectures have been built with their focus on the access they can provide from a single location. A stratified hub, however, must be designed and built with the ability to support multiple satellites as one of its fundamental responsibilities.

2) Multiple providers- Even in the most robust carrier hotel/data center, the network offerings are constrained by the number of providers who service that area. Stratified designs dictate that the hub provide supra-regional network support at a minimum, and preferably nationwide connectivity. The vast majority of today's mega facilities are not equipped to adequately meet this challenge.

3) Cost and Control- Network costs typically constitute a large share of the operating expense for even a single data center, and cost becomes an even more important consideration in a mesh environment. Hub locations must therefore provide a multitude of connectivity options, including those outside of traditional providers, along with a concentration of demand that can be leveraged to deliver network costs far below those achievable in the site-by-site environment that currently characterizes the data center industry.

The massive volumes of information and the continually declining threshold of "acceptable latency" are placing new demands not just on the individual data center but on aggregated network architectures as well. The need to keep data as close to the end user as possible is rapidly becoming the most important factor in data center planning. Relying on the time-tested principle of the division of labor, stratified architectures will come to dominate the data center landscape in the coming years.

(About the author: Chris Crosby is a recognized visionary and leader in the datacenter space and the founder and CEO of Compass Datacenters. Chris has over 20 years of technology experience and over 15 years of real estate and investment experience, including senior roles at Digital Realty, Proferian, and CRG West (now Coresite). His career began at Nortel Networks and he earned a B.S. degree in Computer Sciences from the University of Texas at Austin.)
