Kubernetes

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It helps us manage application containers across multiple hosts.

Kubernetes provides a lot of management features for container-oriented applications, such as auto-scaling, rolling updates, compute resource management, and volume management.

Kubernetes addresses most of the operational needs of application containers (see the sketch after this list):
1) Persistent storage
2) Container health monitoring
3) Compute resource management
4) Auto-Scaling
5) Load Balancing
6) Replication of components
7) Service Discovery
8) Authentication
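
To make a few of these items concrete, here is a minimal sketch using the official Kubernetes Python client (pip install kubernetes). It creates a Deployment that combines replication (replicas), compute resource management (requests and limits), and container health monitoring (a liveness probe). It assumes a reachable cluster and a local kubeconfig; the names (web-demo, the nginx image) are only illustrative.

from kubernetes import client, config

# Assumes a reachable cluster and local kubeconfig; all names are illustrative.
config.load_kube_config()
apps = client.AppsV1Api()

container = client.V1Container(
    name="web",
    image="nginx:1.25",
    ports=[client.V1ContainerPort(container_port=80)],
    # Compute resource management: requests/limits honoured by scheduler and kubelet.
    resources=client.V1ResourceRequirements(
        requests={"cpu": "100m", "memory": "128Mi"},
        limits={"cpu": "250m", "memory": "256Mi"},
    ),
    # Container health monitoring: the container is restarted if this probe fails.
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/", port=80),
        initial_delay_seconds=5,
        period_seconds=10,
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web-demo"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # replication of components
        selector=client.V1LabelSelector(match_labels={"app": "web-demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web-demo"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)

Persistent storage, load balancing, and service discovery are usually layered on top of this through PersistentVolumeClaims and Services; a Service example appears in the kube-proxy section below.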

Kubernetes Architecture


Master Node
API server
The API server is the entry point for all the REST commands used to control the cluster. It processes REST requests, validates them, and executes the bound business logic. The resulting state has to be persisted somewhere, which brings us to the next component of the master node.
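
Every client, whether kubectl, the dashboard, or our own tooling, talks to this same REST API. As a rough illustration, the sketch below (again assuming the Python client and a local kubeconfig) lists every pod in the cluster through the API server.

from kubernetes import client, config

config.load_kube_config()   # API server address and credentials come from kubeconfig
v1 = client.CoreV1Api()

# Issues GET /api/v1/pods against the API server, across all namespaces.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)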
etcd storage
etcd is a simple, distributed, consistent key-value store. It is mainly used for shared configuration and service discovery. It provides a REST API for CRUD operations as well as an interface to register watchers on specific keys, which enables a reliable way to notify the rest of the cluster about configuration changes. Examples of data stored by Kubernetes in etcd include jobs being scheduled, created, and deployed; pod and service details and state; namespaces; and replication information.
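
Clients do not talk to etcd directly; they use the watch mechanism it backs, exposed through the API server. Below is a small sketch of that pattern with the Python client; the default namespace and 60-second window are just example values.

from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

# Stream change notifications (ADDED/MODIFIED/DELETED) for pods in "default".
# The API server serves these events from the state it keeps in etcd.
w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="default", timeout_seconds=60):
    print(event["type"], event["object"].metadata.name)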
scheduler
The deployment of configured pods and services onto the nodes happens thanks to the scheduler component. The scheduler has information about the resources available on the members of the cluster, as well as those required for the configured service to run, and is therefore able to decide where to deploy a specific service.
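
The scheduler's placement decision is driven mainly by what the pod declares: its resource requests and any constraints such as node selectors. A minimal sketch of a pod carrying those hints follows; the disktype=ssd node label is hypothetical.

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="scheduling-demo"),
    spec=client.V1PodSpec(
        # The scheduler only considers nodes carrying this (hypothetical) label.
        node_selector={"disktype": "ssd"},
        containers=[client.V1Container(
            name="app",
            image="nginx:1.25",
            # Requests are what the scheduler sums up per node when choosing a fit.
            resources=client.V1ResourceRequirements(
                requests={"cpu": "500m", "memory": "256Mi"},
            ),
        )],
    ),
)

v1.create_namespaced_pod(namespace="default", body=pod)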
controller-manager
Optionally, you can run different kinds of controllers inside the master node; the controller-manager is a daemon that embeds them. A controller uses the API server to watch the shared state of the cluster and makes corrective changes to move the current state toward the desired one. An example of such a controller is the Replication controller, which takes care of the number of pods in the system. The replication factor is configured by the user, and it is the controller's responsibility to recreate a failed pod or remove an extra-scheduled one.
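
To see this reconciliation at work, it is enough to change the desired replica count and let the controller-manager converge the cluster on it. A sketch against the hypothetical web-demo Deployment created earlier:

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Declare a new desired state: 5 replicas. The controller-manager notices the
# difference between desired and current state and creates/removes pods to match.
apps.patch_namespaced_deployment_scale(
    name="web-demo",
    namespace="default",
    body={"spec": {"replicas": 5}},
)

# Killing one of the pods afterwards has the same effect in reverse:
# the controller recreates it to get back to 5.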

Worker node
The pods are run here, so the worker node contains all the necessary services to manage the networking between the containers, communicate with the master node, and assign resources to the scheduled containers.
kubelet
The kubelet gets the configuration of a pod from the API server and ensures that the described containers are up and running. It is the worker-node service responsible for communicating with the master node. It also communicates with etcd to get information about services and to write the details of newly created ones.
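
The kubelet's work is visible in the status it reports back through the API server: whether each container is ready, how often it has been restarted, and so on. A small sketch reading that status for the hypothetical web-demo pods:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Container state and restart counts are reported by the kubelet on each worker node.
for pod in v1.list_namespaced_pod(namespace="default", label_selector="app=web-demo").items:
    for cs in pod.status.container_statuses or []:
        print(pod.metadata.name, cs.name, "ready:", cs.ready, "restarts:", cs.restart_count)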
kube-proxy
kube-proxy acts as a network proxy and a load balancer for a service on a single worker node. It takes care of the network routing for TCP and UDP packets.
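
Here is a rough sketch of what kube-proxy ends up implementing: a ClusterIP Service gives pods a stable virtual IP and port, and kube-proxy on every worker node programs the routing that spreads that traffic across the matching pods. The app=web-demo selector is the hypothetical label used in the earlier examples.

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-demo"),
    spec=client.V1ServiceSpec(
        type="ClusterIP",                    # stable virtual IP inside the cluster
        selector={"app": "web-demo"},        # pods that receive the traffic
        ports=[client.V1ServicePort(port=80, target_port=80, protocol="TCP")],
    ),
)

# kube-proxy on each worker node watches Services and Endpoints and installs the
# TCP/UDP forwarding rules that make the ClusterIP reachable.
v1.create_namespaced_service(namespace="default", body=service)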

                               