GCP: Kubernetes Networking

Part 2: Master Components

In this part, we will focus on the components that make up the master and the nodes. For a detailed explanation of each component, please see the official Kubernetes documentation. Our focus here is on showing what each of these components looks like on GCP.

More information about the Kubernetes components can be found here: https://kubernetes.io/docs/concepts/overview/components/

We will be making reference to the diagram below, which represents the cluster we created in Part 1. Our aim is to explore the cluster components via a mixture of kubectl commands and the GCP Cloud Console GUI.

Master Components

On GCP, the master is deployed and managed by Google Kubernetes Engine in the back end. This means that the master is not a compute instance consumed in your GCP environment. A quick look at the GCE instances in the Cloud Console shows only the 3 nodes that will run our application.
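We can confirm this from the gcloud CLI as well. The output below is a sketch; the instance names, zone, and machine type will reflect the cluster name and zone you chose in Part 1, and some columns are omitted for brevity.

salawu@k8s-tutorial-192815:~$ gcloud compute instances list
NAME                                      ZONE        MACHINE_TYPE   STATUS
gke-cluster-1-default-pool-xxxxxxxx-xxx1  us-west1-a  n1-standard-1  RUNNING
gke-cluster-1-default-pool-xxxxxxxx-xxx2  us-west1-a  n1-standard-1  RUNNING
gke-cluster-1-default-pool-xxxxxxxx-xxx3  us-west1-a  n1-standard-1  RUNNING

Only the 3 worker nodes are listed; there is no instance for the master.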

We can see information about the Kubernetes master in the image below.

Some of the components in the master cannot be observed from the dashboard above. We will cover all of the components next.

kube-apiserver

The kube-apiserver is the front end of the Kubernetes control plane. It resides on the master and exposes the Kubernetes API. That means all control plane communication with the cluster – both internal (within the cluster) and external (outside the cluster) – must pass through the kube-apiserver.

When we submit kubectl commands, we are interacting with the kube-apiserver. Similarly, when the kubelet (the primary node agent) on a node contacts the master, the communication is received via the kube-apiserver.

From the output below, we can see that the Kubernetes master (and with it the kube-apiserver) is running at the URL https://35.197.245.149 in this example.

salawu@k8s-tutorial-192815:~$ kubectl cluster-info
Kubernetes master is running at https://35.197.245.149
GLBCDefaultBackend is running at https://35.197.245.149/api/v1/namespaces/kube-system/services/default-http-backend/proxy
Heapster is running at https://35.197.245.149/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://35.197.245.149/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at https://35.197.245.149/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
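Since kubectl is simply a client of this API, we can also talk to the kube-apiserver directly. As a quick sketch, kubectl can send a raw request to the API server's health endpoint over the same authenticated channel it uses for all other commands:

salawu@k8s-tutorial-192815:~$ kubectl get --raw /healthz
ok

The /healthz endpoint is served by the kube-apiserver itself, so this is a handy way to verify that the control plane front end is reachable.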

etcd

etcd is a distributed key-value store used as Kubernetes’ backing store for all cluster data. In the output below, we can see two etcd instances in a healthy state.

salawu@k8s-tutorial-192815:~$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
salawu@k8s-tutorial-192815:~$
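Each of these entries is a ComponentStatus object that can also be inspected individually. As a quick sketch (this jsonpath expression assumes the standard ComponentStatus schema, where the health message lives in the first condition):

salawu@k8s-tutorial-192815:~$ kubectl get componentstatuses etcd-0 -o jsonpath='{.conditions[0].message}'
{"health": "true"}

Note that on GKE, etcd lives inside the managed master, so the API is the only window we have into it.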

kube-scheduler

The kube-scheduler assigns newly created pods to available nodes in the cluster. In the output above, we can see that the kube-scheduler status is healthy.
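We can also watch the scheduler at work. In the sketch below (the nginx image is just an example, and the generated pod and node names will differ on your cluster), we create a simple deployment and then check which nodes the pods were assigned to:

salawu@k8s-tutorial-192815:~$ kubectl run nginx --image=nginx --replicas=3
deployment "nginx" created
salawu@k8s-tutorial-192815:~$ kubectl get pods -o wide
NAME                     READY     STATUS    NODE
nginx-xxxxxxxxxx-aaaaa   1/1       Running   gke-cluster-1-default-pool-xxxxxxxx-xxx1
nginx-xxxxxxxxxx-bbbbb   1/1       Running   gke-cluster-1-default-pool-xxxxxxxx-xxx2
nginx-xxxxxxxxxx-ccccc   1/1       Running   gke-cluster-1-default-pool-xxxxxxxx-xxx3

(Some columns are omitted for brevity.) Running kubectl describe pod on any of these also shows a Scheduled event recording the scheduler's assignment.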

kube-controller-manager

The kube-controller-manager component runs all of the controllers on the master. The various controllers are:

●  Node Controller: monitors node status.
●  Replication Controller: ensures the system maintains the correct number of pods specified by each replication controller object (see the example after this list).
●  Endpoints Controller: maintains the Endpoints objects that map the pods belonging to each service.
●  Service Account & Token Controllers: create default accounts and API access tokens for new namespaces.
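To see this reconciliation in action, we can delete one of the pods from the nginx deployment created earlier and watch a replacement appear. The pod names below are illustrative, and strictly speaking a deployment's pods are maintained by its ReplicaSet, but the control loop is the same idea:

salawu@k8s-tutorial-192815:~$ kubectl delete pod nginx-xxxxxxxxxx-aaaaa
pod "nginx-xxxxxxxxxx-aaaaa" deleted
salawu@k8s-tutorial-192815:~$ kubectl get pods
NAME                     READY     STATUS              RESTARTS   AGE
nginx-xxxxxxxxxx-bbbbb   1/1       Running             0          5m
nginx-xxxxxxxxxx-ccccc   1/1       Running             0          5m
nginx-xxxxxxxxxx-ddddd   0/1       ContainerCreating   0          2s

The controller notices that the pod count has dropped below the desired 3 replicas and immediately creates a replacement.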

cloud-controller-manager

The cloud-controller-manager runs controllers that interact with the underlying cloud provider. This allows cloud-provider-specific code to be decoupled from the core Kubernetes code, so that each can evolve independently. Since we are running our Kubernetes cluster on GCP, the GCP cloud-controller-manager is used in our scenario. The various controllers are:

●  Node Controller: monitors and responds to node status on the cloud provider's infrastructure.
●  Route Controller: creates network routes in the underlying cloud provider's infrastructure. We will cover the routes in Part 5.
●  Service Controller: creates, updates, and deletes cloud provider load balancers. We will cover services and load balancers in Part 7; a quick preview follows this list.
●  Volume Controller: creates, attaches, and mounts volumes on the cloud provider's infrastructure.
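As that quick preview of the Service Controller: exposing the nginx deployment we created earlier as a LoadBalancer service causes a GCP load balancer to be provisioned behind the scenes. The IPs and ports below are illustrative:

salawu@k8s-tutorial-192815:~$ kubectl expose deployment nginx --port=80 --type=LoadBalancer
service "nginx" exposed
salawu@k8s-tutorial-192815:~$ kubectl get service nginx
NAME    TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx   LoadBalancer   10.51.240.10   <pending>     80:30080/TCP   5s

EXTERNAL-IP shows <pending> while the Service Controller asks GCP to create the load balancer; after a minute or so it changes to a public IP.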

Go to the next page to view Part 3: Node Components
