The adoption of cloud services often proceeds through three levels. Typically, the further you have progressed, the more value you can produce. What is your current level of cloud computing?
- Level 1: The charm of ease — machines and data in the cloud
- Level 2: Searching for elasticity and reliability, orchestration, and containers
- Level 3: Forget infrastructure, focus on serverless architecture
Taking full advantage of the cloud is naturally easier for a startup with no legacy systems in the background. For established businesses, however, the transition to the cloud is beginning to look like a necessity rather than an option. This transformation also includes organizations that carry the burden of moving existing systems to the cloud. Although the mountain may look high from this side, the valley waiting beyond it is probably greener.
Cloud services help you innovate and respond to changing situations faster. Why, then, do companies spend only a fraction of their IT budget on cloud services? The costs and workload of a cloud transformation may seem daunting, but fortunately, there is a considerable number of examples of how to do it properly.
Often the questions we wrestle with concern how a company can use the cloud and its capabilities to full value versus how far the company is willing to go all-in on the cloud.
Perhaps you are still experimenting with a limited data set and getting your feet wet with cloud services. What is the right way forward, and which road signs should you follow? In this article, I’ll unlock some of these opportunities through the most prominent current trends, related to your level of cloud usage. I’ve simplified this model for the article, but I hope you can still evaluate your organization’s use of cloud computing against these trends.
Level 1: The charm of ease — machines and data in the cloud
For businesses that are not yet in the cloud or are just trying out cloud platforms, there are often concerns about reliability, availability, cost, and security. Each of these concerns is undoubtedly valid, but at the same time, cloud providers like Google, AWS, and Azure have spent billions of dollars solving these issues. Resources on that scale are generally not available for building your own computing infrastructure, unless infrastructure itself is your global business.
The most prominent cloud operators also benefit greatly from economies of scale. Even the last bastions that used to require on-premise computing, such as high-performance computing, are now available from the cloud.
Especially in the early stages of a cloud transition, the freedom and speed with which you can make changes in the cloud bring moments of joy to the development organization. Bringing up servers in minutes and tearing them down even faster is an eye-opener for organizations used to waiting hours, days, or even weeks for new servers. For example, a massive number of images can be processed per hour if you spread the work across a large number of machines: a thousand basic servers (m5.large) would cost slightly over a hundred euros an hour from AWS Cloud, though default quotas might not let you create that many machines in one go.
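As a back-of-envelope check, the arithmetic is simple. The per-instance rate below is an assumption for illustration; actual m5.large on-demand pricing varies by region and changes over time:

```python
# Back-of-envelope cost estimate for a large on-demand fleet.
# The hourly rate is an assumed figure, not an official AWS price.
HOURLY_RATE_EUR = 0.11   # assumed on-demand price per m5.large instance
INSTANCES = 1000


def fleet_cost_per_hour(instances: int, rate: float) -> float:
    """Total hourly cost of running `instances` machines at `rate` each."""
    return instances * rate


print(f"{fleet_cost_per_hour(INSTANCES, HOURLY_RATE_EUR):.2f} EUR/hour")
# 1000 instances at ~0.11 EUR each comes to roughly 110 EUR per hour,
# i.e. "slightly over a hundred euros an hour".
```

The point is not the exact figure but the shape of the trade-off: you pay for a burst of enormous capacity only for the minutes or hours you actually use it.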
Often, the first cloud experiments involve individual servers, data storage, or backups to the cloud. Similarly, through Office 365 or Salesforce, cloud computing paradigms are becoming familiar to a broader audience. Through these services, an organization gains experience in how to use cloud services and the possibilities they offer.
However, the journey is just starting, and you can only claim the additional benefits when you take the next step on the cloud journey. The next stage also requires a little faith, as costs may increase temporarily while both cloud and on-premise environments are in use at the same time.
Level 2: Searching for elasticity and reliability, orchestration, and containers
One particular benefit of the cloud is the resilience and scalability that can help you to react to rapidly changing situations.
There are multiple ways to build reliable services in the cloud: you can secure uptime by duplicating services across multiple availability zones, even when there is no need for actual dynamic scaling.
Often, the journey of migrating services to the cloud begins with containerization, which makes existing applications easier to run on a common platform. Applications can then be moved from one server to another, duplicated, and scaled much faster than with traditional virtualization. This model is particularly well suited to solutions based on microservices.
The logical continuation of this journey is container orchestration, where Google’s open-source Kubernetes is the most widely used solution. Orchestration reliably keeps the right number of containers running in the right places across your servers. Best of all, you can get Kubernetes cluster management as a service from the cloud: Azure has Azure Kubernetes Service (AKS), Amazon has Elastic Kubernetes Service (EKS), and Google has Google Kubernetes Engine (GKE).
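To make this concrete, a minimal Kubernetes Deployment manifest declares how many replicas of a service should run; the service name and container image below are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend            # hypothetical service name
spec:
  replicas: 3                   # desired state: three identical containers
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web-frontend
          image: example.com/web-frontend:1.0   # illustrative image reference
          ports:
            - containerPort: 8080
```

You hand this declaration to the cluster, and the orchestrator takes care of where and how those three containers actually run.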
Using Kubernetes, multiple copies of a critical service can run in a cluster, so the failure of a single server is usually not visible to users of the service.
The underlying idea of Kubernetes is that you define the desired state for the cluster’s services, and Kubernetes maintains that state. Maintaining service health often means moving containers from one server to another or starting new ones. Kubernetes also lets you respond to rising or falling usage volumes by scaling services up or down as needed.
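The desired-state idea can be sketched in a few lines of Python. This is a toy reconciler for illustration only, not how Kubernetes is actually implemented:

```python
# Toy reconciliation loop illustrating the desired-state model:
# compare the desired replica count with what is actually running,
# then start or stop containers until the two match.


def reconcile(desired: int, running: list) -> list:
    """Return an updated container list matching the desired replica count."""
    running = list(running)
    while len(running) < desired:            # too few replicas: start new ones
        running.append(f"container-{len(running)}")
    while len(running) > desired:            # too many replicas: tear down extras
        running.pop()
    return running


state = ["container-0"]       # only one replica survived a node failure
state = reconcile(3, state)   # the controller restores the desired count
print(state)
```

A real orchestrator runs this kind of loop continuously, which is why a crashed container or a lost node is corrected without anyone being paged.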
Level 3: Forget infrastructure, focus on serverless architecture
Servers and clusters answer a variety of needs, but the time spent maintaining a cluster is usually not value-adding work. As a result, many companies have already moved on to the next phase of this natural evolution, in which responsibility for managing infrastructure, servers, clusters, and the like shifts to the cloud provider.
This kind of architecture is called serverless architecture: the time spent building and maintaining services goes mostly into the services themselves, not the infrastructure.
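As an illustration of how little code remains once the infrastructure disappears, here is a minimal AWS Lambda-style handler in Python. The event shape and field names are assumptions for the sketch, not a specific API contract:

```python
import json

# Minimal function-as-a-service handler sketch. The cloud provider
# provisions, scales, and retires the underlying compute; you deploy
# only this function. The request/response shape below is assumed
# for illustration (roughly an API-gateway-style JSON event).


def handler(event, context=None):
    """Greet the caller named in the JSON request body."""
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deployed behind an API gateway, every incoming request invokes the function on demand; there is no server to patch and no cluster to scale.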
The recently published vision for cloud computing from UC Berkeley aligns well with the prediction that serverless architecture is how cloud computing will be done in the future:
> Serverless computing will become the default computing paradigm of the Cloud Era, largely replacing serverful computing and thereby bringing closure to the Client-Server Era.
As a result of this trend, servers and managing orchestration yourself will become mostly redundant when you migrate various services to a serverless model. These ideas resonate strongly with cloud-based businesses — focus on adding value, not infrastructure.
At best, such services scale almost indefinitely, and the peak traffic caused by Black Friday is not a problem. On a smaller scale, the benefits of serverless architecture are just as undeniable: costs are generally significantly lower, and the reliability of the underlying infrastructure no longer has to be your primary concern.
Cloud services even offer the ability to run a Kubernetes cluster on a serverless model, as recently introduced with Amazon EKS on AWS Fargate. This kind of model helps a company transition from managing its own Kubernetes clusters to a serverless model very smoothly.
A change of this scale can also be daunting, as the servers that have been at the center of many solutions disappear entirely from the map. Sure, servers are still running somewhere, but they are completely invisible from the cloud user’s perspective, and orchestration becomes someone else’s responsibility.
In our experience, a company’s typical journey through cloud adoption goes through these three phases. Naturally, in a green-field environment you can skip some of the steps and jump straight to a serverless model. Of course, serverless is not for everyone, but it suits a vast number of use cases. Give it a go if you haven’t already!