Top five hurdles to the adoption of hyperconverged infrastructure
In the software business, we’re used to marketing hyperbole about the promise of nascent technology. Often, innovators and vendors can get so evangelical about the merits of a particular area that they begin to describe the future as if it’s the present.
This is why Gartner tried to temper this runaway fervor with its now-celebrated Hype Cycle, and why it described hyperconvergence as “not a destination, but an evolutionary journey” in a recent report entitled "The Next Phase of Hyperconvergence."
Even so, hyperconverged infrastructure (HCI) already offers substantial benefits in terms of more efficient and cost-effective data center resources – a substantial cost-drain for the increasingly online-dependent, everyday business. For many companies, however, making changes to their infrastructure setup is not a simple task.
As proponents of the merits of HCI, we’re not going to get very far by ignoring the barriers to broader business adoption. So, let’s consider here the five key challenges to adoption of HCI as the default data center infrastructure and what can be done to overcome these challenges.
1. A learning curve for purchase and implementation
It’s an inconvenient truth for HCI proponents, but the orchestration, maintenance and purchasing process for HCI differs substantially from traditional virtualization and storage—presenting huge challenges to IT departments that are considering making a change.
Primarily, this comes down to overcoming the split storage vs. compute mindset. The IT budget has previously had a line item dedicated to storage and one dedicated to compute. Breaking out of that compartmentalized mentality is hard to do. A fair number of decision makers know what they are buying—or think they do—when they are purchasing SAN capacity. It’s hard for IT buyers to fully grasp the benefits of converged storage and compute when the concept of this setup remains so alien to the typical MO within most companies.
In addition, most IT buyers have spent princely sums in recent years on physical hardware in the form of legacy storage arrays. To be presented now with an option that unites storage and compute in a fully virtualized, software-defined environment creates a difficult internal sell for the CIO to a CFO who vividly remembers those previous expenditures.
The critical element here is that, while money talks, selling cost savings alone is really tough. CIOs and other leading executives need to be engaged on a pragmatic basis and persuaded about where this particular hype cycle ends. It can’t be all dollars and cents; it has to be hearts and minds too. Know your options – including both on-premises and as-a-service offerings – as you may find easier, lower-cost paths to entry that allow you to make the business case sooner.
2. Maintenance for distributed storage
The mindset shift doesn’t just come down to purchase and implementation. For HCI to flourish, companies need to recognize significant maintenance differences when compared to legacy storage arrays. Nothing torpedoes infrastructure updates quicker than business-critical apps going down or a major security vulnerability being exploited because of flawed update processes. HCI updates are significantly different from those for traditional legacy systems.
While it’s commonplace in many existing infrastructure systems to shut down the storage cluster entirely to make updates, and even to run preparatory tests in advance of upgrades, this approach is not fit for purpose when it comes to HCI. Rather, modern software-based HCI infrastructures can deliver high availability and built-in redundancy, and perform rebootless updates with zero downtime.
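The contrast with a full-cluster shutdown can be pictured as a rolling update: each node is drained, updated, and health-checked in turn, so replicated data stays available throughout. A minimal sketch of the idea follows; the node names, callbacks, and replication assumption are illustrative, not any vendor's actual API.

```python
# Hypothetical sketch of a rolling, zero-downtime HCI update.
# Assumption: a replication factor of at least 2, so taking one node
# offline still leaves a live replica of every data block.

def rolling_update(nodes, apply_update, is_healthy):
    """Update one node at a time so data stays available throughout.

    apply_update(node) -- drains the node and installs the update
    is_healthy(node)   -- True once the node has rejoined the cluster
    """
    updated = []
    for node in nodes:
        # Peers serve this node's replicas while it is being updated.
        apply_update(node)
        if not is_healthy(node):
            # Halt the rollout rather than risk losing a second replica.
            raise RuntimeError(f"{node} failed health check; halting rollout")
        updated.append(node)  # proceed only once the node is back and healthy
    return updated

# Usage: simulate a three-node cluster where every update succeeds.
cluster = ["node-a", "node-b", "node-c"]
log = []
result = rolling_update(cluster, apply_update=log.append,
                        is_healthy=lambda n: True)
```

The key design point is serialization: only one node is ever out of service, so the cluster never drops below the redundancy needed to keep applications online.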
3. Inertia caused by legacy virtualization
Many companies have invested in traditional virtualization solutions and have built up a vast internal IT infrastructure based on their existing virtualization and storage solutions. Making changes to years of blood, sweat and tears that have been dedicated to synchronizing the front office with the back office setup is not an easy leap.
Making change of this magnitude will always involve a toe in the water. Companies need to try small deployments in specific environments or departments and compare against their more traditional deployments. Enterprises can consider trialing open source HCI options to see how transitioning can work, before jumping in with both feet and spending on an external vendor. You can also deploy public cloud options and use service providers for added support.
4. Increased pressure on the network
The world’s biggest companies already place high stress on their computer networks. When their IT teams look at the use cases for HCI at mid-sized or smaller firms, they are going to see a lot of traffic crossing between nodes. Better execution can absolutely minimize this cross-node disruption and make more efficient use of bandwidth, but it’s important to address these potential concerns rather than dismiss them outright.
5. Ahead of its time?
The major concern here is that HCI may be likened to using a sledgehammer to crack a nut. That’s before you consider that the adoption of containers and microservices applications is very much in its infancy. According to Gartner’s report, the next phase of hyperconvergence represents continuous application and microservices delivery on HCIS platforms.
The future of applications lies in microservices, and the future of application development lies in containers. The infrastructure ecosystem that persists as today’s status quo is not equipped to handle this major IT trend.
IT leaders can be either followers or industry leaders in this regard. They can give their companies a massive competitive advantage if they consider how HCI can help their innovators create, run and manage next-generation apps.