Kubernetes: An Overview
Kubernetes is a container orchestration platform that helps organizations run and maintain microservices and distributed systems.
WHY Kubernetes
To understand why we need Kubernetes, it is important to know the problems that led to its development.
As we know, containers are ephemeral in nature, that is, they can die and be recreated at any time. So if an application is deployed with multiple containers, think about a situation where some containers are not getting sufficient resources: if a container that starts first consumes most of the resources, the containers that start later are left with very few resources or none at all.
Due to this lack of resources a container may die, or in many cases it will not start at all, and the kernel may even kill the process to reclaim resources.
PROBLEMS WITH DOCKER
Single Host :- Docker by itself runs on a single host, so a container can be impacted when another container on the same host over-utilizes resources.
Killed Container :- A container that is killed due to an internal or external factor is not started again automatically; we need to restart it manually. In technical terms, Docker does not support auto healing.
Auto scaling :- As the load on our application increases or decreases, we need to scale up by creating more containers (so we can handle the load without the system going down) or scale down to save resources and time. In Docker we have to do this manually; plain Docker has no built-in autoscaling feature, although extra services have since been added around Docker to achieve this in a better way.
How Kubernetes solves these problems
To understand this, we need to look at the architecture of Kubernetes, or how it is installed on a system.
By default Kubernetes is installed as a cluster; a cluster is a group of nodes, unlike Docker, which is installed on a single system.
Kubernetes follows a master node architecture: like Jenkins, we can create one master node and many worker nodes. Although Kubernetes can also be installed on a single system, that is not the preferred way; in a production environment it is always installed as a cluster.
When it is installed as a cluster, Kubernetes can isolate a faulty workload: if a container or node misbehaves, the affected pods can be rescheduled onto a separate, healthy node, away from all the other containers.
So it is clear now that Kubernetes is installed as a cluster of nodes, and on each node pods will be running (a pod is the smallest deployable unit in Kubernetes and wraps one or more containers).
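For a concrete picture, here is a minimal pod sketch (the pod name and the nginx image are just placeholder assumptions, not from any real project); the Kubernetes scheduler decides which worker node it ends up running on:

    # Minimal pod sketch: one pod wrapping a single nginx container.
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod
    spec:
      containers:
        - name: demo
          image: nginx:1.25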
Solution to Autoscaling
Kubernetes has the ReplicationController, or more commonly the ReplicaSet, described in a manifest file popularly known as replicaSet.yml, where we can increase or decrease the number of pods just by changing one value in this file.
And thus it addresses one of the problems we saw with Docker, namely scaling; a sketch of such a manifest follows below.
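Here is a minimal ReplicaSet sketch (the names, labels, and nginx image are placeholder assumptions); changing replicas from 3 to, say, 5 tells Kubernetes to keep five copies of the pod running:

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: demo-replicaset
    spec:
      replicas: 3              # change this value to scale the pods up or down
      selector:
        matchLabels:
          app: demo
      template:
        metadata:
          labels:
            app: demo
        spec:
          containers:
            - name: demo
              image: nginx:1.25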
If we don't want to change the value manually, there is one more option called the Horizontal Pod Autoscaler (HPA). We can give the HPA an instruction such as "if CPU utilization crosses 80%, create a new pod", and the load will then be distributed across the pods.
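A rough sketch of such an HPA (again, the names and replica limits are placeholder assumptions, and it targets a hypothetical Deployment called demo-deployment):

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: demo-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: demo-deployment
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 80    # add pods when average CPU crosses 80%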
Solution to Auto Healing
Kubernetes has a built-in feature to detect and repair such damage.
In Docker we had to run the docker ps command, check which container was going down, and then create or restart that container accordingly. Kubernetes, on the other hand, has an API server that automates this process: whenever the API server gets the signal that a container is going down, it automatically rolls out a new container or pod, and this process is so smooth that the user will not even notice one pod going down and another coming up.
And thus Kubernetes automates auto healing and solves another problem we faced while using Docker.
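As a rough sketch of how this is usually expressed (the Deployment name, image, and probe path are placeholder assumptions): a Deployment keeps the desired number of pods alive, and a liveness probe lets Kubernetes restart a container that stops responding:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-app
    spec:
      replicas: 2                       # Kubernetes keeps two pods running at all times
      selector:
        matchLabels:
          app: demo-app
      template:
        metadata:
          labels:
            app: demo-app
        spec:
          containers:
            - name: demo-app
              image: nginx:1.25
              livenessProbe:            # if this check fails, the container is restarted
                httpGet:
                  path: /
                  port: 80
                initialDelaySeconds: 5
                periodSeconds: 10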
Although Docker has started providing an enterprise-level, production-grade orchestration solution with the help of Docker Swarm, it is generally good practice to use Kubernetes.
This blog gives a basic overview of Kubernetes; in the next blog I will cover the Kubernetes (K8s) architecture.