We learned from utilities everything we need to know about cloud infrastructure

Electricity is ambient. It’s all around us. We turn on lights and plug in devices without even thinking about the complex electrical grid that supplies us with juice at will. It’s become a necessary infrastructure that will never go away — but it’s only the bottom of the stack.

That’s exactly how we need to think about cloud infrastructure. Connecting to the cloud should be as effortless and accessible as connecting to a power grid — one point of access to a variety of sources of infrastructure and services.

Imagine an ambient layer on top of cloud infrastructure players, provider-agnostic, that could give us just that. A layer with a single API standard (as copper wire is to electricity) that would allow companies to connect to the infrastructures below, without being locked into specific tools and services. A serverless world, where we won’t think about “provisioning,” “instances,” “containers” or “operating systems.”

In this world, code would automatically run in the cloud, and infrastructure would scale as needed.
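To make that concrete, here's a rough Python sketch of what a single, provider-agnostic standard could look like. Everything in it, from the CloudProvider contract to the run_anywhere helper and the vendor adapters, is invented for illustration; it doesn't reflect any real vendor SDK.

```python
# A minimal, hypothetical sketch of a provider-agnostic "ambient cloud" API.
# None of these classes map to a real vendor SDK; they only illustrate the idea
# of one standard interface sitting above interchangeable infrastructure providers.

from abc import ABC, abstractmethod


class CloudProvider(ABC):
    """The 'copper wire': one standard contract every provider adapter implements."""

    @abstractmethod
    def deploy(self, image: str, replicas: int) -> str:
        """Run a container image and return a handle to the running workload."""


class AmazonAdapter(CloudProvider):
    def deploy(self, image: str, replicas: int) -> str:
        # In reality this would call the vendor's own API behind the scenes.
        return f"aws://{image}?replicas={replicas}"


class GoogleAdapter(CloudProvider):
    def deploy(self, image: str, replicas: int) -> str:
        return f"gcp://{image}?replicas={replicas}"


def run_anywhere(provider: CloudProvider, image: str, replicas: int = 1) -> str:
    """Application code targets the standard, not the vendor."""
    return provider.deploy(image, replicas)


if __name__ == "__main__":
    # Swapping vendors is a one-line change; the application code never notices.
    for provider in (AmazonAdapter(), GoogleAdapter()):
        print(run_anywhere(provider, "my-app:1.0", replicas=3))
```

The point is the shape of the contract: application code targets the standard, and the vendor becomes a swappable adapter underneath it, much as appliances are built to a plug standard rather than to a particular power plant.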

Can it be done?

There have been past attempts to make this cloud vision a reality, most notably a standards-driven effort called OpenStack. Frankly, it has been a failed experiment by traditional datacenter companies and hardware vendors trying to remain relevant.

But in theory, they were on to something. Container-based grid computing models, such as those enabled by Docker, rkt, Mesos and Kubernetes, already provide a template for this type of vendor-agnostic layer. If we could abstract and rebuild this technology, we could create a multi-cloud layer stretching across vendors and regions. I'm not the only one who's considered this; Eric Schmidt and Werner Vogels have both recently discussed the importance of "serverless" architectures.

But we need something more analogous to an electrical grid that supplies multiple regions (or countries) with power, versus a multi-product line utility company (think: Comcast’s TV/phone/internet bundle). Right now, if a single company controls this layer, the vision of choosing from multiple cloud and software providers gets much tougher to realize.

How would a vendor-agnostic layer benefit organizations? Easier, universal access to the cloud would help companies create a more secure IT infrastructure, give them the flexibility to mix and match the features and services that matter most to them, and make auto-scalability a reality.

Security

Companies can build their infrastructure either in the cloud or on-premise, and many are hesitant to choose the cloud out of fear of the unknown. They have entire IT teams dedicated to managing their data on-premise, and they fear that moving to a cloud infrastructure means a loss of control and the disintermediation of their IT and operations people. But that fear has to change.

Owned, on-premise infrastructures are not actually more secure; look at the recent hacks on Target and Home Depot. In many cases, the cloud can be a safer place to store and manage assets, and if the cloud were ambient, adoption would become more mainstream. The unknown is scary for companies that are used to doing things on-premise but, just as with electricity, there's a huge amount of opportunity for those that take the leap. And in this case, much more security.

Microservices

An ambient layer would also free companies from vendor lock-in. As they stand, cloud services are very monolithic: if you build on the AWS erector set, you use AWS tools and APIs. But with an ambient cloud layer, everyone connected to the cloud would have access to the same breadth of services.

This microservice architecture allows for more flexibility: companies can pick the individual, best-of-breed services they want, rather than using the pre-packaged set of tools from their cloud infrastructure provider of choice. Google is headed in this direction, focusing on specific, vertical technical integrations rather than a litany of commodity services that each require lots of back-office configuration, as with AWS.
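To sketch what mix and match could look like in practice, here's a hypothetical Python example in which each capability, object storage and message queueing, is a small interface, and the implementations behind them can come from different vendors. All of the class and function names are made up for this illustration.

```python
# Hypothetical sketch: best-of-breed services from different vendors composed
# behind small, capability-sized interfaces. All names are invented for illustration.

from typing import Protocol


class ObjectStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...


class MessageQueue(Protocol):
    def publish(self, topic: str, message: bytes) -> None: ...


class VendorAStorage:
    def put(self, key: str, data: bytes) -> None:
        print(f"storing {len(data)} bytes at vendor A under '{key}'")


class VendorBQueue:
    def publish(self, topic: str, message: bytes) -> None:
        print(f"publishing {len(message)} bytes to vendor B topic '{topic}'")


class UploadService:
    """Application logic depends on capabilities, not on any one vendor's bundle."""

    def __init__(self, store: ObjectStore, queue: MessageQueue) -> None:
        self.store = store
        self.queue = queue

    def handle_upload(self, key: str, data: bytes) -> None:
        self.store.put(key, data)
        self.queue.publish("uploads", key.encode())


if __name__ == "__main__":
    # Storage from one provider, queueing from another, chosen independently.
    service = UploadService(VendorAStorage(), VendorBQueue())
    service.handle_upload("report.pdf", b"%PDF-1.7 ...")
```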

Auto-scalability

Finally, an ambient layer would save companies time and money by making it easier to auto-scale server capacity. Currently, companies connect to one infrastructure provider and buy a specific number of servers to meet their needs. Many companies waste money because they never come close to using that capacity on a given day; but if you provision too little, one major moment could break your site (we all remember healthcare.gov).

But what if we had a common API that could mix and match capacity from different providers, helping companies auto-scale at the best price: some capacity from Amazon, some from Google and so on? The API would make auto-scalability easier for everyone and prevent crashes. This idea is particularly important as we use the cloud to build more "life or death" apps, where availability is essential.
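Here's a back-of-the-envelope Python sketch of how that scheduling decision might work: when load outgrows current capacity, the scaler buys the extra instances from whichever provider quotes the lowest price at that moment. The provider names and prices are made up, and no real pricing API is shown.

```python
# Hypothetical sketch of cross-provider auto-scaling: when load exceeds current
# capacity, buy the extra instances from whichever provider is cheapest right now.
# Prices and provider names are invented; real pricing APIs are not shown.

from dataclasses import dataclass


@dataclass
class ProviderQuote:
    name: str
    price_per_instance_hour: float  # USD, assumed spot/on-demand quote
    available_instances: int


def scale_out(needed: int, quotes: list[ProviderQuote]) -> dict[str, int]:
    """Allocate `needed` extra instances across providers, cheapest first."""
    plan: dict[str, int] = {}
    for quote in sorted(quotes, key=lambda q: q.price_per_instance_hour):
        if needed <= 0:
            break
        take = min(needed, quote.available_instances)
        if take:
            plan[quote.name] = take
            needed -= take
    if needed > 0:
        raise RuntimeError(f"short {needed} instances across all providers")
    return plan


if __name__ == "__main__":
    quotes = [
        ProviderQuote("amazon", 0.085, available_instances=40),
        ProviderQuote("google", 0.079, available_instances=25),
        ProviderQuote("azure", 0.091, available_instances=60),
    ]
    # Traffic spike: we need 50 more instances right now.
    print(scale_out(50, quotes))
    # -> {'google': 25, 'amazon': 25}
```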

Fantasy versus reality

A common API is a bit of a fantasy, but it’s hard to ignore the benefits that such a system would provide. I think we’ll see cloud infrastructure develop in much the same way as electricity. Eventually, it will be an ambient utility. Then, companies can focus on real innovation versus building clunky connections to cloud infrastructures that don’t suit their specific needs. That’s when the real technology revolution will heat up.
