Is defense against cloud vendor lock-in a counterproductive posture?

Olivier Schmitt
Dec 15, 2018

Cloud lock-in appears to be a major concern for some organizations.

Why is that?

Ghosts of the past

Many big companies have been through disputes with big vendors, especially software vendors.

Whether those disputes were triggered by improper use of a licensing model, a forced migration to the new shiny version, or an abrupt change in the pricing model does not matter.

Common sense says vendors are greedy and always try to lock you in, one way or another.

Today, the public cloud adds a new episode to this ongoing story: some vendors are not really cloud-friendly, and cloud providers are vendors too.

This is seemingly getting worse …

The context

AWS, followed by Azure, leads the public cloud by every measure, and both rely heavily on proprietary technologies.

Even if containers soften the dependency on cloud provider technologies, the various services publish different APIs and are designed differently: it’s a war, and one of the most important and violent ones in years.

Now, let’s consider the ACME corporation: the company wants to start a cloud journey, but because of the ghosts of the past, the CXOs are very worried about being locked in with the big public cloud providers.

The word “independence” is the new mantra in town; it is spelled out loud in every cloud journey meeting to ward off this future threat.

Soon enough, the idea of having an abstraction layer of some sort is in every conversation. This magical silver bullet will save ACME from the greedy cloud vendors. It is the obvious thing to do, right?

My personal experience with the abstraction layer principle

As the technical leader of a JEE framework for more than ten years, I spent long hours crafting a huge abstraction layer on top of many popular frameworks such as Hibernate, Spring IoC, and so on.

The core idea was to protect applications from undesirable changes coming from the famous frameworks we used.

It worked well: 200+ applications were built, sharing a coherent design and a single set of predictable behaviors.

Developers, who were mostly internal employees, could move from one application to another with ease: the abstraction layer acted as a technological shield and as a tool for managing human resources.
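
To give a feel for the principle, here is a minimal sketch of the kind of facade such a layer exposes. The names below are hypothetical, not our actual code, but the idea is accurate: application code depends on the layer’s interface and never imports Hibernate types directly.

```java
// Hypothetical sketch of the abstraction-layer principle: applications
// depend only on this interface, never on Hibernate or Spring types.
public interface GenericDao<T, ID extends java.io.Serializable> {
    T findById(ID id);
    void save(T entity);
    void delete(T entity);
}

// A single internal implementation wraps the real framework. Upgrading
// or swapping Hibernate means changing this class, not 200+ applications.
class HibernateGenericDao<T, ID extends java.io.Serializable>
        implements GenericDao<T, ID> {

    private final org.hibernate.SessionFactory sessionFactory;
    private final Class<T> entityClass;

    HibernateGenericDao(org.hibernate.SessionFactory sessionFactory,
                        Class<T> entityClass) {
        this.sessionFactory = sessionFactory;
        this.entityClass = entityClass;
    }

    @Override
    public T findById(ID id) {
        return sessionFactory.getCurrentSession().get(entityClass, id);
    }

    @Override
    public void save(T entity) {
        sessionFactory.getCurrentSession().saveOrUpdate(entity);
    }

    @Override
    public void delete(T entity) {
        sessionFactory.getCurrentSession().delete(entity);
    }
}
```

The strength and the weakness are the same thing: every application goes through this narrow interface, so a framework upgrade touches one class instead of 200+ applications, but any framework feature the interface does not expose is out of reach.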

Ops were delighted with this stability and homogeneity.

It had some downsides, though:

  • Every change to the layer had to be carefully weighed because its impact was global
  • The layer exposed only a subset of the underlying capabilities: some features were inaccessible for the sake of the abstraction and the applications’ coherence
  • There was a specific learning curve we could not outsource

The last five years have seen an unprecedented acceleration in the pace of innovation, and it became impossible to update the abstraction layer accordingly.

More and more developers came from the outside world and needed specific training to use the abstraction layer, even though they already knew the famous frameworks behind it. Moreover, external developers don’t spend their careers with the same customer: forcing them to learn a proprietary abstraction layer is not the best way to attract them.

To me, an abstraction layer is an impractical idea when innovation is fast, markets are immature, and developers are in short supply. In the end, it might even be a deadly idea, because it tends to prevent diversity and innovation, and that is not the best way to compete with others who don’t follow the same principles (by the way, they won’t ask for your permission to be more agile than you).

My personal experience with the underutilization principle

One tactical move some companies have embraced is to use only common technologies, mainly based on virtualization, since every cloud provider supports virtualization. Those companies don’t use managed services or disruptive stacks such as lambdas.

The problem with this approach is that it is very expensive on many fronts: it requires a big Ops team, and you spend a lot of money on compute capacity that does nothing most of the day.
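
A back-of-the-envelope calculation shows the scale of the waste. The rates and utilization figure below are illustrative assumptions, not any provider’s actual prices:

```java
// Illustrative back-of-the-envelope comparison; every number here is an
// assumption for the sake of the argument, not a real price.
public class IdleCapacityCost {
    public static void main(String[] args) {
        double vmHourlyRate = 0.10;  // assumed $/hour for an always-on VM
        double hoursPerMonth = 730;  // average hours in a month
        double busyFraction = 0.15;  // assumed: useful work 15% of the time

        double monthlyBill = vmHourlyRate * hoursPerMonth;
        double wastedSpend = monthlyBill * (1 - busyFraction);

        // Prints: Monthly bill: $73.00 / Spent on idle capacity: $62.05 (85%)
        System.out.printf("Monthly bill: $%.2f%n", monthlyBill);
        System.out.printf("Spent on idle capacity: $%.2f (%.0f%%)%n",
                wastedSpend, (1 - busyFraction) * 100);
    }
}
```

Under these assumptions, 85% of the bill buys capacity that does nothing; a pay-per-use managed service makes that share of the spend disappear.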

Additionally, it’s very hard to build an IT service aligned with the de facto standards the cloud giants have set: 24/7 operation, 99.95% availability or more, 99.99999% data durability. New regulations have raised the bar in recent years: GDPR is one of them.

Again, in a free market, the company with the highest costs and the weakest agility dies, even if its products are good. It’s only a matter of time. Look at Amazon.com and see how technology and agility can destroy an established market in a decade.

My thoughts on a multi-cloud provider strategy

A multi-cloud strategy is often sold as the cure for lock-in. Well, have the people telling you this fairy tale ever tried to keep up with the release train of AWS or Azure?

This is a recipe for underutilization of capabilities: I don’t think a company can get the most out of each cloud provider. It will need two different and expensive sets of skills, and yes, there is a severe shortage of qualified cloud professionals.

Why do I think all that? See the next sections.

The pace of innovation

The public cloud providers’ success relies on an amazing pace of innovation; let’s have a look at AWS’s delivery metrics:

(Chart: delivered capabilities per year)

AWS released around 1,400 new services and features in 2017, roughly four per day.

This is becoming a major concern for IT leaders, because one unnoticed capability could be a major enabler for a company’s business. The company’s competitors might not miss it: too bad!

Additionally, a cloud service is a unique combination of technology, compliance and economics, such as the pricing model.

A pricing model is not portable: it’s an alchemy deeply rooted in the cloud provider’s strategy and its economic model.

A cloud service enables your business because it has many dimensions; IT people tend to see only the technological aspects. The pricing model, the resulting agility, and the supported compliance are equally important in delivering the required level of enablement to a company’s business.

I can’t see a decent way to abstract capabilities and make them portable, because the capability set is constantly changing. Some features can’t be abstracted: they are, by essence, the product of a unique context you cannot find elsewhere. If you try, you might lose some very important capabilities on one side without replacing them on the other.

The economics of hyperscalers

As an introduction to capital expenditure in the cloud business, see the article “Separate the Clowns from the Clouds” for the financial aspects of this business.

Here is an interesting quote from the author:

Amazon, Google, and Microsoft each spent more on CAPEX in 2017 than Oracle has in its entire history.

Hyperscalers are the IT giants that dominate the cloud market: Google, AWS, Azure, and a few other big tech companies such as Facebook.

Once a certain scale is reached, a provider can deliver capabilities smaller providers simply can’t.

For instance, they can build and own very expensive and complex pieces of infrastructure, such as subsea cables.

Microsoft, for example, is known to have spent 22 billion dollars to build its cloud infrastructure, Azure.

AWS’s infrastructure is so big that Intel partners with AWS to build some processor families.

Imagine the best engineers of AWS and Intel working together to build the best compute capabilities.

Needless to say, AWS offers unique services built on Intel products that will be very hard to mimic on premises or at a smaller provider.

The number of cloud providers considered hyperscalers is very small, and each has its own strengths and weaknesses.

Additionally, don’t forget they are at war: they compete to offer unique capabilities, and convergence is not on the agenda in such a ruthlessly competitive market.

If you picked AWS because it provided a specific set of capabilities you did not find elsewhere, you probably won’t find the same with another provider. This ruins the idea of an abstraction layer.

Cloud computing at scale requires a tremendous amount of capital and a very rare set of skills at scale (IT, business, legal, …): the very nature of this business means only a few players can play this game.

Smaller cloud providers won’t provide the same level of service; they can’t.

Know the cloud computing axioms

Before cloud computing, a customer bought a license or a capacity for a long period of time, sometimes forever. There was a strong sense of commitment because the capital investment was substantial.

The main drawback of this practice was the over-provisioning of resources and the related costs. Resources were obsolete after a few years, but the company had to use them anyway because it could not afford to buy new ones.

Fear and belief are the real risks

The cloud computing model is based on completely different assumptions: you are supposed to pay for what you use.

There is no such thing as a long-term commitment in a pure cloud computing model.

Moreover, the recent trend is very aggressive on the economic side: the Function as a Service (FaaS) model is based on a radical economic model. You are supposed to pay only when your application code is called.
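
To make the FaaS model concrete, here is a minimal AWS Lambda handler in Java (a sketch assuming the aws-lambda-java-core library; the greeting logic is a placeholder):

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Minimal FaaS sketch: compute is consumed (and billed) only while
// handleRequest executes. When nothing calls the function, it costs nothing.
public class GreetingHandler implements RequestHandler<String, String> {

    @Override
    public String handleRequest(String name, Context context) {
        // Placeholder business logic; each invocation is metered
        // individually by the platform.
        return "Hello, " + name;
    }
}
```

If nobody invokes this function for a month, its compute bill for that month is zero: the exact opposite of a VM that is paid for around the clock.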

IT professionals know perfectly well that most infrastructure resources are unused most of the time. Given that simple fact, the commitment a customer has to endure to run an application in the cloud shrinks tremendously.

People tend to rely on past experiences to see the present, but when new and disruptive models emerge, they might be completely misled by that past.

The main risk, to me, is relying on beliefs and fears fueled by a past that no longer exists. By doing so, many companies will miss opportunities to make their businesses shine.

For instance, Microsoft is clearly not the same company it was ten years ago.

Who would have forecast that Microsoft would become a major contributor to open source? That it would release TypeScript and Visual Studio Code?

Read the NIST Cloud Computing Reference Architecture

Cloud computing is complex, and the various cloud providers don’t make it easy for customers or IT professionals.

If you want to become successful with cloud computing, do your homework, look at it honestly, and stop referring to old stories about Oracle or Microsoft.

The first step is to understand the cloud computing principles; then you will be able to assess the value proposition of each cloud provider.

Is there a form of lock-in?

Yes, probably.

But it’s not what you think.

As there is almost no commitment in a pure cloud computing model, the lock-in comes from your mindset.

A cloud provider does not prevent you from leaving through a legal commitment or a long-running investment: you can’t buy AWS resources for five years or more.

The trend is just the opposite: the race is about billing customers by the millisecond.

Have you designed your workloads for agility?

Is your architecture simple enough, or based on common enough principles, to move from one provider to another?

Simplicity is hard to achieve, but to me it’s key if you want to be able to move from one cloud provider to another. Remember, though: you won’t get the same experience.
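
One such common principle is ports and adapters: business code depends on a small interface you own, and only thin adapters import provider SDKs. Here is a minimal sketch, assuming the AWS SDK for Java v1 on the classpath (the ObjectStore interface and class names are hypothetical):

```java
import java.io.InputStream;

// A "port" the application owns: business code depends on this interface,
// never on a provider SDK.
interface ObjectStore {
    void put(String key, InputStream data, long length);
    InputStream get(String key);
}

// One adapter per provider; only adapter classes import provider SDKs.
// Sketched here with the AWS SDK for Java v1.
class S3ObjectStore implements ObjectStore {
    private final com.amazonaws.services.s3.AmazonS3 s3;
    private final String bucket;

    S3ObjectStore(com.amazonaws.services.s3.AmazonS3 s3, String bucket) {
        this.s3 = s3;
        this.bucket = bucket;
    }

    @Override
    public void put(String key, InputStream data, long length) {
        com.amazonaws.services.s3.model.ObjectMetadata meta =
                new com.amazonaws.services.s3.model.ObjectMetadata();
        meta.setContentLength(length);
        s3.putObject(bucket, key, data, meta);
    }

    @Override
    public InputStream get(String key) {
        return s3.getObject(bucket, key).getObjectContent();
    }
}
```

Moving to another provider then means writing one more adapter, not rewriting the applications. The interface stays the same; the experience behind it (pricing, latency, durability) will not.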

Conclusion

I think defense against cloud vendor lock-in is a very counterproductive posture in a free world and in free markets.

Cloud computing is so powerful that a cloud-savvy competitor could sink your business: it would get better financial margins, release products at a faster pace, and so on.

If a whole country, such as China, decides to have its own cloud champions, its own Internet infrastructure, and its own economic rules, the problem takes another shape. But in Western societies, the context is different.

Companies have no choice but to embrace cloud computing principles.

If they don’t, one bold, cloud-savvy competitor is enough to bring them down, because classic IT approaches are so inefficient compared to the managed services of the cloud giants.

Lock-in does not come from the cloud provider; it’s up to you to design your architecture so it can be moved. But there is no free lunch.


Olivier Schmitt

Started coding in Basic at 11 and never stopped since. Solution Architect & Software Craftsman; check my portfolio here: https://goo.gl/6iRhsz.