After decades of dominance by “big box” storage vendors, storage technology is undergoing a transformation. Indeed, the entire data center is evolving. Five years from now, every organization of any size will look at its data center in a completely different light.

In this article, we will identify six trends driving this transformation in the data center and propose ways for companies to prepare themselves to meet the challenges of data storage in the twenty-first century.

Increasing Adoption of the Internet of Things

While the unprecedented explosion of data in the enterprise is no longer news, what is catching enterprises off-guard is the growing use of data-producing and data-sharing devices, whose presence in business and government is often referred to as the Industrial Internet of Things (or IIoT).

A few years ago, Cisco predicted there would be 50 billion objects in the Internet of Things by 2020. Many believe Cisco may have underestimated the number of objects that will be connected to the IoT within the next four years.

We can already see this happening in the consumer world, where devices designed to track health and fitness are taking off, and companies like Nest Labs continue to expand their portfolio of home automation products that offer 24x7 data on the home environment.

Devices like these will become commonplace as more than half of all businesses embrace IoT. As companies are discovering, it’s the data that is valuable. The cost of the devices themselves is often insignificant compared to the value of the data and insight they produce, which will lead organizations to look for or invent even more data-collecting objects, further increasing the pressure on storage requirements.

IT professionals are realizing that storing this data on big box storage arrays is simply not viable; the sheer volume of data makes that option cost prohibitive. And that leads us to the next trend.

Commoditization of Storage Hardware

“You can’t get fired for buying IBM” may have worked during the height of the “big box” era, when storage vendors capitalized on that idea to push expensive, high-end systems to large organizations.

It’s time to retire this worn-out adage. Today, those systems are the high-priced, gas-guzzling SUVs of the data center, while more and more companies are opting for an energy-efficient, cost-effective hybrid. Technological innovation has led to game-changing software-defined storage (SDS), which decouples storage software from storage hardware and allows organizations to replace expensive storage arrays with commodity hardware and cloud storage. The appeal of SDS is greater flexibility at a fraction of the cost.
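To make “decoupling software from hardware” concrete, here is a minimal conceptual sketch in Python. The interface and class names are hypothetical illustrations, not any vendor’s actual API; the point is that storage logic is written against an abstract interface, so commodity disks or cloud object stores can be swapped in underneath without changing the application.

    from abc import ABC, abstractmethod

    class StorageBackend(ABC):
        """Abstract interface the storage software is written against."""
        @abstractmethod
        def write(self, key: str, data: bytes) -> None: ...
        @abstractmethod
        def read(self, key: str) -> bytes: ...

    class CommodityDiskBackend(StorageBackend):
        """Keeps objects on inexpensive local disks."""
        def __init__(self, root: str) -> None:
            self.root = root
        def write(self, key: str, data: bytes) -> None:
            with open(f"{self.root}/{key}", "wb") as f:
                f.write(data)
        def read(self, key: str) -> bytes:
            with open(f"{self.root}/{key}", "rb") as f:
                return f.read()

    # A cloud backend would implement the same interface, so the software
    # layer (and the applications above it) never care what hardware sits
    # underneath -- that indifference is the essence of SDS.
    def archive_report(backend: StorageBackend, report: bytes) -> None:
        backend.write("quarterly-report", report)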

Gartner believes that “Software-defined technologies will become the technical cornerstone of enterprise infrastructure over the next five years, shaping vendors' solutions to drive a more programmable and automated IT infrastructure.” The firm also predicts that “By 2019, more than 50% of the storage capacity installed in enterprise data centers will be deployed with software-defined storage (SDS) or hyperconverged integrated system (HCIS) architectures based on x86 hardware systems, up from less than 10% today.”

Much of this capacity will come from new storage vendors now entering the competitive landscape. Gartner predicts that “By 2018, more than 50% of enterprise storage customers will consider bids from storage vendors that have been in business for less than five years, up from less than 30% today.”

One likely outcome is price pressure, as innovation and increased competition drive down storage costs for customers. When I talk to IT directors about the benefits of SDS, I advise them to make the storage vendors compete for their business. I remind them of LendingTree, whose slogan is, “When banks compete, you win.” When it comes to storage hardware, the same strategy applies.

Impact of “Containers”

Containers are a relatively recent innovation that further increases efficiency in the data center. Containers essentially create virtual environments where applications can run, isolated and independent of other applications while sharing underlying resources. For example, a container typically houses a single application, whereas a virtual machine runs a full guest operating system and may house multiple applications.

While you might expect to achieve a 10x increase in efficiency from virtualizing your physical servers, using containers could multiply that efficiency by another 10 to 20 times – or more. Containers also increase portability by packaging an application with all of its dependencies so it can run anywhere, on any machine.
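As a rough sketch of how this works in practice (assuming a locally running Docker daemon and the docker Python SDK, neither of which the article specifies), the snippet below launches two isolated containers from the same image. Each container packages the application with its dependencies, yet both share the host kernel, which is where the efficiency advantage over full virtual machines comes from.

    import docker  # pip install docker; assumes the Docker daemon is running

    client = docker.from_env()

    # Two isolated environments from one image: each container gets its own
    # filesystem and process space, but both share the host's kernel.
    for name in ("worker-1", "worker-2"):
        output = client.containers.run(
            "alpine:3.19",                   # image bundles app + dependencies
            ["echo", f"hello from {name}"],
            name=name,
            remove=True,                     # delete the container on exit
        )
        print(output.decode().strip())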

The concept of containers is also being applied to storage, to further increase both the efficiency of storage resources and the portability of containerized applications that access storage within this environment. Even as containers gain traction, the need for persistent data storage will not go away, and application developers will need to take advantage of software-defined storage for maximum speed, efficiency, and portability.
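A brief, hypothetical continuation of the sketch above: mounting a volume gives a container persistent storage that outlives the container itself. The volume and path names here are illustrative only.

    import docker

    client = docker.from_env()

    # Data written to /data lands in the named volume "demo-data" and
    # survives after the container is removed.
    client.containers.run(
        "alpine:3.19",
        ["sh", "-c", "echo 'persistent record' >> /data/log.txt"],
        volumes={"demo-data": {"bind": "/data", "mode": "rw"}},
        remove=True,
    )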

Consolidation of Storage and Server Providers

If you have been to any big IT conferences lately, you may have noticed the large number of vendors offering storage-related products and services. Innovation in this area has led to a flood of new vendors who see in this transformation of the data center an opportunity for new ideas and new approaches. This is great for customers, who benefit from all the competition in storage technology.

This is a much different scenario than the one customers experienced over the previous several decades, when big box vendors monopolized the industry, prices were high, and choice was limited. However, the current number of vendors is unsustainable; there are simply more players than the industry can support.

We’re already seeing consolidation – most notably, the Dell/EMC merger and NetApp’s acquisition of SolidFire. This consolidation will continue over the next several years, and the industry will see other significant acquisitions.

It’s clear to anyone surveying the landscape that traditional storage is likely to take the biggest hit as it is replaced by commodity storage and cloud services. Technology analyst firm IDC predicts that SDS will enable the storage software market to grow at the expense of legacy hardware-defined storage systems. Large cloud providers like Amazon, Google, and Microsoft, along with smaller players and niche providers, are also expected to erode the legacy market.

Consolidation, acquisition, and competition will likely reduce the number of vendors in the coming years, and enterprises need to consider each vendor’s long-term strategy and viability.

Open Source Approach Gaining Popularity

Taking an open-source approach to software is appealing, in large part due to the ardent communities that typically form around open-source software technologies. These communities are vital for giving input on the practical functionality that software should have. An example is OpenStack, a global collaboration of developers and technologists who are producing an open-source cloud-computing platform for public and private clouds.

An important benefit of OpenStack is its openness and interoperability. By building SDS solutions on OpenStack, commercial vendors can take advantage of its flexibility and openness while offering the benefits of SDS, among them the ability for customers to remain vendor-agnostic with respect to storage hardware. Commercialized solutions built on open-source software can also address the need for enterprise support along with the ability to precisely tune the solution to meet a customer’s unique requirements.

Should you consider open-source software for storage management? With community-driven organizations like OpenStack.org providing a platform for cloud computing and storage, and developers offering commercial solutions built on OpenStack, it’s highly likely open-source software will play a growing role in storage management. In fact, Gartner predicts that 20% of large organizations will be using open-source storage software by 2019.

SDDC: The Larger Perspective

Many believe that the next evolution is the software-defined data center (SDDC), in which all infrastructure is virtualized and delivered as a service, including networking, storage, and compute. (Some would include security as well.) Virtualizing elements of the data center should make IT more agile, more responsive, and more cost-effective.

There are of course other potential benefits, including cost reduction from more efficient use of hardware and lower energy or cooling costs. But Forrester recently suggested that the primary benefit has to do with the IT staff itself: “The whole subject of economics comes down to people.”

In Forrester’s view, it’s wasteful and counterproductive to hire smart – and expensive – people to manage IT and then assign them work that’s repetitive or that adds little value. And yet, much of the work in IT is exactly that: administering systems, networks, and storage. In other words, keeping the lights on.

By creating a more programmable and automated IT infrastructure, a software-defined data center removes some of the non-strategic tasks that are currently handled by sys admins, network admins, and db admins. In doing so, an SDDC can relieve these talented employees of tedious work and instead offer them opportunities to innovate, create new services, or deliver “genuine customer value” (in Forrester’s words).

In addition, an SDDC supports the goal of many organizations to achieve something resembling ITaaS, or IT as a service. In the ITaaS model, IT packages its capabilities as discrete services (or microservices) that can be delivered or provisioned as needed to business units or departments. This model has numerous benefits, including a greater ability to scale up or down to meet demand using on-premises, cloud, and hybrid cloud infrastructures.

Preparing Your Organization for Transformation

When thinking about transformation in data storage (and the data center more broadly), it’s easy to feel overwhelmed. There is so much change going on in the data center, and shifting your thinking toward new models and new approaches is not easy.

My advice to customers and prospects is always to start small. Bite off a project that is manageable. Start with non-critical infrastructure – for example, you may have a couple of servers supporting non-essential functions in the business. Start there. Explore the possibility of SDS on this scale, and investigate the possibility of integrating cloud storage.

Take sufficient time for research. Look to your current vendors (or new vendors) to help you develop a proof of concept. Most IT vendors have pre-sales teams who can assist you; take advantage of these resources.

With the right steps, thorough research, and a successful partnership with your IT vendors, you may achieve results similar to those realized by Northern Arizona University (NAU). With a growing demand for storage across several campuses, NAU discovered it could deploy an SDS solution for the same cost as adding a relatively small amount of additional storage. Its SDS solution delivered better performance while using thin provisioning to reduce the amount of storage required for its VMs by a factor of 20 to 1.
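To see what a 20-to-1 reduction means, consider a back-of-the-envelope sketch (the figures below are hypothetical, not NAU’s actual numbers): thin provisioning lets each VM believe it owns its full allocation, while physical capacity is consumed only by the blocks actually written.

    # Hypothetical thin-provisioning arithmetic; not NAU's real figures.
    vm_count = 200
    provisioned_per_vm_gb = 100   # capacity each VM is promised
    written_per_vm_gb = 5         # blocks each VM has actually written

    thick_gb = vm_count * provisioned_per_vm_gb  # reserved up front: 20,000 GB
    thin_gb = vm_count * written_per_vm_gb       # physically used:    1,000 GB

    print(f"thick: {thick_gb} GB, thin: {thin_gb} GB, "
          f"ratio: {thick_gb // thin_gb}:1")     # -> ratio: 20:1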

IT personnel at the university took a chance by stepping out of their comfort zone, but the results more than justified the risk.

(About the author: Michael Letschin is Field CTO at Nexenta.)
