Cloud vendor lock-in risk reportedly puts the brakes on cloud-first strategies, but how worried should CIOs be about these issues?
This blog reviews three principal areas of concern and the strategies available to mitigate them. Data lock-in and application lock-in are the two concerns discussed most often, particularly with regard to Platform as a Service (PaaS). Infrastructure lock-in worries are emerging as enterprises begin the move to second-generation cloud solutions.
Data lock-in risks become evident when you need to move your data from one cloud vendor’s servers to another. This decision may be driven by factors outside your control, such as the ruling against the validity of Safe Harbor agreements in the recent Schrems case. Planning for situations like this raises several questions: Who is responsible for extracting the data? Where can you store it before transferring it to the new provider? In what format will the data be, and will that work with the new provider? How long will it take to move terabytes or petabytes of data, and what are the associated network costs?
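As a rough illustration of that last question, transfer time and network cost can be estimated from data volume, sustained throughput, and the provider's per-GB egress rate. This is a back-of-the-envelope sketch; the throughput and the $0.05/GB egress rate below are placeholder assumptions, not any vendor's actual pricing:

```python
def migration_estimate(data_tb, throughput_gbps, egress_usd_per_gb):
    """Rough estimate of bulk data-transfer time and network egress cost."""
    data_gb = data_tb * 1000                   # decimal TB -> GB
    data_gigabits = data_gb * 8                # bytes -> bits
    seconds = data_gigabits / throughput_gbps  # at a sustained line rate
    days = seconds / 86400
    cost = data_gb * egress_usd_per_gb
    return days, cost

# 500 TB over a sustained 1 Gbps link at a hypothetical $0.05/GB egress rate
days, cost = migration_estimate(500, 1.0, 0.05)
print(f"{days:.1f} days, ${cost:,.0f} in egress charges")  # ~46 days, $25,000
```

Even under these optimistic assumptions of a fully saturated link, a half-petabyte move takes weeks, which is why exit planning needs to start well before the decision to leave.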
CSPs are helping to alleviate data lock-in risk by offering the capability to move large application products and their corresponding datasets. For example, both AWS and Capgemini have well-developed approaches that will move your SAP estate from your data center to a cloud provider.
Application lock-in risk emerges as you build cloud-native applications. Reconfiguring these applications to operate on a new platform is time-consuming and expensive. In recent work with a client, we had to keep a significant portion of their application estate in a traditional hosting arrangement because the underlying operating systems and architecture offered no cost-effective route to cloud. The result was a long tail of costly, inflexible hosting arrangements until the apps could be replaced or rewritten.
For another client, we implemented a program to eliminate these kinds of applications in order to reduce cost and increase the organization’s ability to adopt cloud. This case highlights the change in thinking necessary for organizations to become cloud ready.
We’ve been used to managing stable applications that deliver value over many years, but the applications of the future may only exist in their original form for a matter of months before requiring significant change. As the lifespan of cloud-native applications shortens, the potential burden of application lock-in will grow: the more rapidly you have to change, the more open and portable your applications need to become. This may be less of an issue for business cases concerning fast-moving, customer-facing enterprises, where a rapid pace of change in applications can be accommodated more easily.
Lock-in mitigation in the applications domain is more highly developed than in the data or infrastructure space, with an increasing number of paths to portability, particularly for custom and cloud-native applications. For the CIO, the choice of route will largely be decided by what the organization can tolerate in terms of changing skills and ways of working.
The use of containerization (for example with Docker) improves application portability, and combining this approach with a Cloud Management Platform enables hybrid clouds to be managed across multiple environments. Organizations committed to continuous change using DevOps are further along the infrastructure-as-code continuum, giving them the option to use tools such as Terraform to control their deployment landscapes.
We’re increasingly hearing concerns about infrastructure lock-in risks from clients. Right now, the infrastructure domain is perhaps the most challenging for the CIO considering a transfer to another provider because the building blocks of the new service and the cost model for consuming those blocks will likely differ.
It’s common to talk about application compute requirements as if they are T-shirt sizes (S, M, L, XL), but if you’ve ever bought a T-shirt, you’ll know that “M” in a different brand can be a much tighter fit than you were expecting. This isn’t a serious issue if you’re moving a single application—you just opt for the next size up—but it is if you’ve built a business case on hundreds of applications and find you need to bump them all up a size.
A cloud architect whose role is to optimize performance, resource usage, and cost is a vital asset in this situation. Their job is to figure out when, for example, eight small blocks work out cheaper than four big blocks. A cloud architect will also understand when your current approach needs to change and when an alternative provider could offer a lower total cost. Some of these skills exist among cloud service providers, but many IT organizations do not currently have this kind of SIAM capability in place to optimize their cloud landscapes.
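To make the "small blocks versus big blocks" comparison concrete, here is a minimal sketch of the kind of calculation a cloud architect performs. The instance shapes and hourly rates are invented for illustration and do not correspond to any provider's price list:

```python
import math

# Hypothetical instance shapes: (vCPUs, RAM in GB, USD per hour)
SHAPES = {
    "small": (2, 8, 0.10),
    "large": (4, 16, 0.22),
}

def cheapest_shape(vcpus_needed, ram_needed_gb):
    """Return the cheaper of all-small vs all-large counts for a workload."""
    options = {}
    for name, (cpu, ram, price) in SHAPES.items():
        # Need enough instances to satisfy both the CPU and the RAM requirement
        count = max(math.ceil(vcpus_needed / cpu), math.ceil(ram_needed_gb / ram))
        options[name] = (count, count * price)
    return min(options.items(), key=lambda kv: kv[1][1])

# A workload needing 16 vCPUs and 64 GB of RAM:
name, (count, hourly) = cheapest_shape(16, 64)
print(f"{count} x {name} at ${hourly:.2f}/hour")  # 8 x small at $0.80/hour
```

With these made-up rates, eight small blocks come out cheaper than four large ones for the same workload; with a different provider's rate card, the answer flips, which is exactly why this analysis has to be repeated whenever the cost model changes.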
With each aspect of vendor lock-in risk, the key is to think in terms of the full lifecycle of a cloud solution. Your business model must balance the agility benefits of migrating to a second-generation cloud setup with the cost of that migration. For example, do you need maximum agility? Or does it make more sense for your enterprise to form a long-term partnership with one vendor and adapt to their features as and when they upgrade?
Having the capability in your team to architect a solution that mitigates lock-in risk, to define an approach that fits your business need, and to plan your exit strategy from the start is a major factor in the CIO’s ability to manage the vendor lock-in challenge.
(About the author: Nick Slane is an enterprise architect at Capgemini. This post originally appeared on his Capgemini blog.)