25-March-2021
[This article builds on the previous article discussing Cloud Computing Technologies. Click here to read that first]
In the initial days of cloud deployment, the approach was to ‘cloudify’ an application – a term to signify taking an application from a traditional computer system and making it work on the cloud. This approach is also known as lift and shift.
This approach didn’t allow full use of the features provided by the cloud, as the applications were inherently designed and built for a non-cloud environment.
Cloud native computing is an approach in software development that utilizes cloud computing to build and run scalable applications in modern, dynamic cloud environments.
Cloud Native refers less to where an application resides and more to how it is built and deployed.
A cloud native application consists of discrete, reusable components known as microservices that are designed to integrate into any cloud environment.
These microservices act as building blocks and are often packaged in containers.
Microservices work together as a whole to comprise an application, yet each can be independently scaled, continuously improved, and quickly iterated through automation and orchestration processes.
The flexibility of each microservice adds to the agility and continuous improvement of cloud-native applications.
I consider the following three as key philosophies for the development of a Cloud Native application:
Microservices based architecture
Immutable Infrastructure
DevOps
Technologies such as containers, orchestrators and serverless computing help achieve these objectives.
Microservices is an architectural style that structures an application as a collection of services that are
Highly maintainable and testable
Loosely coupled
Independently deployable
Organized around business capabilities
Owned by a small team
For example, an eShopping application can be broken into small services such as Item Search, Catalogue Display, Marketing, Recommendation, Ordering, Basket, Payment, etc. Each of these microservices can be developed and maintained independently of the others. The eShop service provider can decide how many instances of each microservice to run. For example, the traffic for the Payment microservice would be lower than the traffic for the Item Search or Catalogue microservices (not everybody searching for products actually goes on to purchase). Hence the number of instances of the Item Search microservice could be higher than the number of instances of the Payment microservice. This helps in optimal utilization of resources.
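The per-service scaling decision above can be sketched in a few lines of Python. The traffic figures and per-instance capacity below are made-up values for illustration, not real measurements:

```python
import math

# Hypothetical traffic (requests/sec) per microservice, and an assumed
# number of requests/sec that a single instance can serve.
TRAFFIC = {"item-search": 900, "catalogue": 600, "payment": 120}
CAPACITY_PER_INSTANCE = 200

def desired_instances(traffic, capacity):
    """Size each microservice independently of the others."""
    return {svc: max(1, math.ceil(rps / capacity)) for svc, rps in traffic.items()}

print(desired_instances(TRAFFIC, CAPACITY_PER_INSTANCE))
# Item Search ends up with more instances than Payment, matching its higher traffic.
```

Because each service is sized on its own, resources go where the load actually is instead of scaling the whole application as one block.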
Each of the above microservices can be developed and maintained by independent teams. When a team wants to release an updated version of its service, it creates a new container with the newer version of the microservice and replaces some active instances of that microservice with it. Once the new microservice is found to be working satisfactorily, all instances of the old one are replaced with the new one.
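The replace-some-then-replace-all release described above is essentially a rolling update. Here is a toy Python sketch of the idea; the `health_check` function is a hypothetical stand-in for real smoke tests against the new version:

```python
def health_check(version):
    # Stand-in for real validation (smoke tests, error-rate checks, etc.).
    return True

def rolling_update(instances, new_version, batch_size=1):
    """Replace running instances with the new version one batch at a time.

    `instances` is a list of version strings; in a real system each entry
    would be a running container. Illustrative sketch, not a real deployer.
    """
    updated = list(instances)
    for i in range(0, len(updated), batch_size):
        for j in range(i, min(i + batch_size, len(updated))):
            updated[j] = new_version      # start a container with the new image
        if not health_check(new_version): # validate before touching the next batch
            return instances              # abort: the old fleet keeps serving
    return updated

print(rolling_update(["v1", "v1", "v1"], "v2"))  # -> ['v2', 'v2', 'v2']
```

Because old and new instances run side by side during the update, the service stays available throughout the release.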
The microservice architecture enables the rapid, frequent and reliable delivery of large, complex applications. It also enables an organization to evolve its technology stack. Microservices are especially useful for larger companies since they allow teams to work on separate items with minimal coordination.
In a traditional software deployment, an application or service update requires that a component be changed in production while the complete service or application remains operational. Immutable Infrastructure is an approach to software deployment wherein, if a component needs to be changed, a whole new deployment with the changed component is provided rather than updating the component of the existing deployment.
With this approach, once the service or application is deployed, its components are set - thus, the service or application is immutable, unable to change. When a change is made to one or more components of a service or application, a new instance is assembled, tested, validated and made available for use. And the old instance is discontinued to free the computing resources within the environment for other tasks.
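This assemble-validate-switch-discard cycle can be sketched as follows. The `Router` class, its `deploy` method and the instance names are illustrative, not a real deployment tool:

```python
# Illustrative sketch of an immutable deployment: never patch the live
# instance; build a fresh one, validate it, then switch traffic over.

class Router:
    def __init__(self, live):
        self.live = live  # the instance currently serving traffic

    def deploy(self, new_instance, validate):
        if not validate(new_instance):
            return self.live          # new build rejected; old instance keeps serving
        old, self.live = self.live, new_instance
        # `old` is discarded, freeing its resources; it was never modified in place.
        return self.live

router = Router(live="app:1.0")
print(router.deploy("app:1.1", validate=lambda inst: True))           # -> app:1.1
print(router.deploy("app:1.2-broken", validate=lambda inst: False))   # -> app:1.1
```

Note that a failed validation leaves the known-good instance untouched, which is exactly why rollback is trivial under this model.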
Immutability helps in minimizing version mismatch among the plethora of application components, which is one of the biggest challenges of software deployment. It also restricts the potential for configuration drift, reducing the IT infrastructure's vulnerability to attack. Uptime is improved in unexpected events, because instances are redeployed instead of restored from multiple unique configurations and versions.
Immutable infrastructure benefits include lower IT complexity and failures, improved security and easier troubleshooting than on mutable infrastructure. It eliminates server patching and configuration changes, because each update to the service or application workload initiates a new, tested and up-to-date instance. There is no need to track changes. If the new instance does not meet expectations, it is simple to roll back to the prior known-good instance. Since you’re not working with individual components within the environment, there are far fewer chances for unpredictable behaviors or unintended consequences of code changes.
For an enterprise application, two key functions are the development of the application and its deployment (which is to make it available for use by targeted users). The deployment function (also called Operations) is typically handled by the IT team that manages the IT infrastructure on which the application is deployed. The development function wants new features and updates to be delivered quickly. On the other hand, the operations function wants stability, which means not changing the production systems too often. As a result, development and operations often work in silos.
DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). It aims to shorten the systems development life cycle and provide continuous delivery with high software quality. DevOps is complementary to Agile software development – several DevOps aspects came from the Agile methodology.
The fundamental idea behind DevOps is Continuous Integration and Continuous Delivery (CI/CD), which basically means that any change in software goes through the entire testing cycle so that any issue can be detected at the earliest. The testing cycle can be initiated multiple times a day, and successful releases even get deployed several times a day without users noticing. For example, we don’t know which versions of Facebook or Gmail we are using. The updates keep happening without coming to our attention.
This CI/CD approach requires establishing a ‘pipeline’ of software delivery. Both Dev and Ops teams need to focus on automation without which the model of continuous development, continuous testing and continuous deployment cannot work.
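Such a pipeline can be modelled as an ordered list of stages that every change must pass, with a failure stopping the run immediately so problems surface early. The stage names and checks below are hypothetical:

```python
# Toy CI/CD pipeline: run each stage in order, stop at the first failure.

def run_pipeline(change, stages):
    for name, stage in stages:
        if not stage(change):
            return f"failed at {name}"
    return "deployed"

stages = [
    ("build", lambda c: True),                     # compile / package the change
    ("unit-tests", lambda c: "bug" not in c),      # fast checks run first
    ("integration-tests", lambda c: True),         # slower, whole-system checks
    ("deploy", lambda c: True),                    # automated release to production
]

print(run_pipeline("feature-x", stages))           # -> deployed
print(run_pipeline("change-with-bug", stages))     # -> failed at unit-tests
```

Every stage here is automated; that is what lets the same sequence run on every change, many times a day.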
What are containers?
Containers are an executable unit of software in which application code is packaged, along with its libraries and dependencies, so that it can be run anywhere, whether on the desktop, traditional IT, or the cloud. This approach is analogous to the shipping industry, where standardized physical containers are used to isolate different cargo.
Containers are built on the concept of Operating System (OS) virtualization. OS virtualization essentially involves isolating processes and controlling the amount of CPU, memory, and disk that those processes have access to. They are small, fast, and portable as they do not need to include a guest OS in every instance and can, instead, simply leverage the features and resources of the host OS.
Docker is currently the most popular container technology – so much so that it has become synonymous with containers, because it has been the most successful at popularizing them. But container technology is not new: it has been built into Linux, in the form of LXC, for over 10 years.
Containerization is the process of software being designed and packaged in order to take advantage of containers. The process includes packaging an application with its relevant environment variables, configuration files, libraries, and software dependencies. The result is a container image that can then be run on a container platform.
At a high level, VMs virtualize the underlying hardware so that multiple operating system (OS) instances can run on it. Each VM runs an OS and has access to virtualized resources representing the underlying hardware. The flip side is that each VM contains an OS image, libraries, applications and more, and therefore can become quite large.
A container virtualizes the underlying OS and causes the containerized app to perceive that it has the OS—including CPU, memory, file storage and network connection - all to itself. This abstraction of OS and infrastructure enables containers to be much more efficient and lightweight. Containerized applications can start in seconds and many more instances of the application can fit onto the machine as compared to a VM scenario. The shared OS approach has the added benefit of reduced overhead when it comes to maintenance, such as patching and updates.
As containers are lightweight, they are easy to replace in production and are well suited to the needs of immutable infrastructure for application packaging and deployment. They form a critical building block of Cloud Native applications.
What is container orchestration?
Container orchestration is the automated deployment, scaling, and management of containers across IT infrastructure.
A typical application has multiple containers, such as a database, messaging/middleware services, or other back-end services. If the number of users increases and the application needs to be scaled up, containers need to be added. Similarly, the deployment needs to scale down when the load decreases. This whole process of automatically deploying and managing containers is known as Container Orchestration.
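One step of such an orchestrator's control loop can be sketched like this; the load and capacity figures are assumed values for illustration:

```python
import math

def reconcile(current_replicas, load, capacity_per_replica, min_replicas=1):
    """One orchestration step: match the replica count to the current load."""
    desired = max(min_replicas, math.ceil(load / capacity_per_replica))
    if desired > current_replicas:
        return desired, "scale up"     # add containers to absorb the load
    if desired < current_replicas:
        return desired, "scale down"   # free resources when load drops
    return desired, "no change"

print(reconcile(current_replicas=2, load=950, capacity_per_replica=200))  # -> (5, 'scale up')
print(reconcile(current_replicas=5, load=150, capacity_per_replica=200))  # -> (1, 'scale down')
```

A real orchestrator runs a loop like this continuously, comparing the desired state against the observed state and correcting any drift.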
Kubernetes (also known as k8s or "kube") is an open source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications. Due to its reach and popularity, it has almost become synonymous with Container Orchestration.
Kubernetes orchestrates the operation of multiple containers in harmony together. It manages areas like the use of underlying infrastructure resources for containerized applications such as the amount of compute, network, and storage resources required. Orchestration tools like Kubernetes make it easier to automate and scale container-based workloads for live production environments.
Kubernetes can span hosts across on-premises, public, private, or hybrid clouds. For this reason, Kubernetes is an ideal platform for hosting cloud-native applications that require rapid scaling.
Kubernetes was originally developed and designed by engineers at Google. Google was one of the early contributors to Linux container technology and has talked publicly about how everything at Google runs in containers. This is the technology behind Google’s cloud services. Google donated the Kubernetes project to the newly formed Cloud Native Computing Foundation (CNCF) in 2015.
Serverless is a cloud-native development model that allows developers to build and run applications without having to manage servers.
With Serverless computing, the application developers can focus on development and scaling of Business Logic while hosting and scaling of utility functions is handled by Cloud Service provider.
There are still servers in serverless, but they are abstracted away from app development. A cloud provider handles the routine work of provisioning, maintaining, and scaling the server infrastructure. Developers can simply package their code in containers for deployment. Once deployed, serverless apps respond to demand and automatically scale up and down as needed.
More commonly, when developers refer to serverless, they’re talking about a FaaS (Function As A Service) model. Under FaaS, developers still write custom server-side logic, but it’s run in containers fully managed by a cloud services provider.
With serverless architecture, apps are launched only as needed. When an event triggers app code to run, the public cloud provider dynamically allocates resources for that code. The user stops paying when the code finishes executing. In addition to the cost and efficiency benefits, serverless frees developers from routine and menial tasks associated with app scaling and server provisioning.
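The trigger-allocate-execute-bill cycle can be sketched with a toy runtime. The class and method names here are illustrative, not any cloud provider's actual API:

```python
import time

# Toy FaaS runtime: functions are registered by name, run only when an
# event arrives, and are billed per unit of execution time.

class FaaS:
    def __init__(self, price_per_second=0.0001):
        self.functions = {}
        self.price = price_per_second
        self.bill = 0.0

    def register(self, name, fn):
        self.functions[name] = fn                # no server provisioned up front

    def trigger(self, name, event):
        start = time.perf_counter()
        result = self.functions[name](event)     # resources allocated on demand
        self.bill += (time.perf_counter() - start) * self.price
        return result                            # billing stops when the code returns

faas = FaaS()
faas.register("thumbnail", lambda ev: f"thumbnail({ev['image']})")
print(faas.trigger("thumbnail", {"image": "cat.png"}))  # -> thumbnail(cat.png)
```

Nothing runs (and nothing is billed) between events, which is what makes this model attractive for infrequent, bursty workloads.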
With serverless, routine tasks such as managing the operating system and file system, security patches, load balancing, capacity management, scaling, logging, and monitoring are all offloaded to a cloud services provider. It’s possible to build an entirely serverless app, or an app composed of partially serverless and partially traditional microservices components.
Serverless architecture is a good fit for use cases that see infrequent, unpredictable surges in demand. For example, an application that does batch processing of incoming image files, which might run infrequently but also must be ready when a large batch of images arrives all at once. Or a task like watching for incoming changes to a database and then applying a series of functions, such as checking the changes against quality standards, or automatically translating them. Serverless apps are also a good fit for use cases that involve incoming data streams, chat bots, scheduled tasks, or business logic.
Although microservices enable an iterative approach to application improvement, they also create the necessity of managing more elements. Rather than one large application, it becomes necessary to manage far more small, discrete services.
Cloud native apps demand additional toolsets to manage the DevOps pipeline, replace traditional monitoring structures, and control microservices architecture.
Cloud native applications allow for rapid development and deployment, but they also demand a business culture that can cope with the pace of that innovation.
The Cloud Native approach is going to bring in the next wave of transformation in software development. It is still evolving, and more tools and methodologies are expected to emerge based on how cloud services are delivered to users.
Want to know how to build skills in Cloud Computing Technologies? Please read our other post.