Tuesday, January 15, 2019

Flexible server Part VIII: Building a Kubernetes cluster


In the previous post we built a Kubernetes master node on one of the guest VMs we cloned. In this post we'll complete the setup of the Kubernetes cluster to pave the way for your own pod deployments.

Perform an initial kubeadm init and create your user's kubeconfig directory and config file copy, per the master configuration steps. You'll then go back and reset the installation before joining the node to the cluster as a worker.

root@goat-lin002:~> kubeadm init --service-cidr "10.1.0.0/16"
root@goat-lin002:~> mkdir -p $HOME/.kube
root@goat-lin002:~> cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@goat-lin002:~> chown $(id -u):$(id -g) $HOME/.kube/config

root@goat-lin004:/home/gsw> kubeadm reset
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
Now refer to the join command printed at the end of the master's kubeadm init output to join the current node to the master we created in the previous post.
root@goat-lin002:/home/gsw> kubeadm join 192.168.1.101:6443 --token c1bzuk.j5lgevurg1zv77kr --discovery-token-ca-cert-hash sha256:d5c3839745caaa7df8df34c77d416585bfa00d19c64206099b74f1de0d0159c7
[preflight] running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 17.12.1-ce. Latest validated version: 18.06
[discovery] Trying to connect to API Server "192.168.1.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.101:6443"
[discovery] Requesting info from "https://192.168.1.101:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.101:6443"
[discovery] Successfully established connection with API Server "192.168.1.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "goat-lin004" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
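
If you no longer have that output handy, you should be able to regenerate a fresh join command on the master rather than digging through old terminal scrollback; a quick sketch:

# Run on the master: mints a new token and prints the full kubeadm join command
kubeadm token create --print-join-command
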
Now get the updated config for your local account:
root@goat-lin002:/home/gsw> cp -i /etc/kubernetes/kubelet.conf $HOME/.kube/config
root@goat-lin002:/home/gsw> chown $(id -u):$(id -g) $HOME/.kube/config
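
One hedged aside: kubelet.conf holds node-scoped credentials, so depending on your version and RBAC setup some admin operations may be refused when run from a worker. If you want full admin access from a worker, an alternative is to copy the master's admin.conf across instead (hostnames as used in this series; admin.conf is readable only by root on the master, so you may need root access for the copy):

# Run on the worker: pull the cluster-admin kubeconfig from the master
scp root@goat-lin001:/etc/kubernetes/admin.conf $HOME/.kube/config
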
Now try running some admin commands, starting by listing the nodes in the cluster (I added 4):
root@goat-lin002:/home/gsw> kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
goat-lin001   Ready    master   37m     v1.12.0
goat-lin002   Ready    <none>   24m     v1.12.0
goat-lin003   Ready    <none>   9m12s   v1.12.0
goat-lin004   Ready    <none>   4m13s   v1.12.0
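
For a little more detail in the same listing, the wide output format adds each node's internal IP, OS image and container runtime:

# Same node listing with extra columns
kubectl get nodes -o wide
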
There are some really useful commands that can help you understand how the cluster has been configured, so try experimenting with them. Below, the kubectl describe nodes command shows key details about the current node:
root@goat-lin002:/home/gsw> kubectl describe nodes goat-lin002
Name:               goat-lin002
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=goat-lin002
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 28 Sep 2018 19:35:40 +1000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Fri, 28 Sep 2018 19:47:34 +1000   Fri, 28 Sep 2018 19:47:34 +1000   WeaveIsUp                    Weave pod has set this
  OutOfDisk            False   Fri, 28 Sep 2018 20:03:02 +1000   Fri, 28 Sep 2018 19:47:19 +1000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure       False   Fri, 28 Sep 2018 20:03:02 +1000   Fri, 28 Sep 2018 19:47:19 +1000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Fri, 28 Sep 2018 20:03:02 +1000   Fri, 28 Sep 2018 19:47:19 +1000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Fri, 28 Sep 2018 20:03:02 +1000   Fri, 28 Sep 2018 19:35:40 +1000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Fri, 28 Sep 2018 20:03:02 +1000   Fri, 28 Sep 2018 19:47:39 +1000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.1.102
  Hostname:    goat-lin002
Capacity:
 cpu:                2
 ephemeral-storage:  4062912Ki
 memory:             4089508Ki
 pods:               110
Allocatable:
 cpu:                2
 ephemeral-storage:  3744379694
 memory:             3987108Ki
 pods:               110
System Info:
 Machine ID:                 b3ffe8f6ceb44f348e01da5c0f27b9cb
 System UUID:                b3ffe8f6ceb44f348e01da5c0f27b9cb
 Boot ID:                    99dccf81-2531-4302-9586-51b47bf7d737
 Kernel Version:             4.15.0-34-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.12.1-ce
 Kubelet Version:            v1.12.0
 Kube-Proxy Version:         v1.12.0
Non-terminated Pods:         (2 in total)
  Namespace                  Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                ------------  ----------  ---------------  -------------
  kube-system                kube-proxy-2h295    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                weave-net-mggnp     20m (1%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       20m (1%)  0 (0%)
  memory    0 (0%)    0 (0%)
Events:     <none>
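
A few other inspection commands worth experimenting with; a hedged sampler rather than an exhaustive list:

# Where the API server and DNS endpoints live
kubectl cluster-info
# Every resource type this cluster's API server knows about
kubectl api-resources
# Recent events across the cluster, handy when something is stuck
kubectl get events --all-namespaces
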
Listing the pods across all namespaces from the master shows the control-plane components plus the per-node kube-proxy and Weave Net pods:
root@goat-lin001:/home/gsw> kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-bjqh4              1/1     Running   0          42m
kube-system   coredns-576cbf47c7-wkf2t              1/1     Running   0          42m
kube-system   etcd-goat-lin001                      1/1     Running   0          41m
kube-system   kube-apiserver-goat-lin001            1/1     Running   0          41m
kube-system   kube-controller-manager-goat-lin001   1/1     Running   0          41m
kube-system   kube-proxy-2h295                      1/1     Running   0          29m
kube-system   kube-proxy-77lb9                      1/1     Running   0          14m
kube-system   kube-proxy-fjbb6                      1/1     Running   0          9m12s
kube-system   kube-proxy-m6fv5                      1/1     Running   0          42m
kube-system   kube-scheduler-goat-lin001            1/1     Running   0          41m
kube-system   weave-net-b27kr                       2/2     Running   0          9m12s
kube-system   weave-net-gj2s6                       2/2     Running   0          14m
kube-system   weave-net-mggnp                       2/2     Running   1          29m
kube-system   weave-net-rs9h8                       2/2     Running   0          38m

Running a test service

Here's how to get an initial pod up and running on the Kubernetes cluster. We're choosing nginx as it's pre-built and you'll probably end up using it on your system eventually. In the output below you can see nginx has been created in the default namespace, alongside some broken kube-flannel pods (this capture appears to come from a run that used flannel rather than Weave for the pod network). nginx shows as Pending because it hasn't been scheduled yet; with the network pods themselves unhealthy, the node isn't ready to run it. Debugging services can be tricky until you wrap your head around the various utilities and logs that help you locate things; the Kubernetes documentation on debugging pods and replication controllers is a good place to start.
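
If a pod stays stuck like this, a few stock kubectl commands make a reasonable first pass at debugging (the pod name is the one from the capture below; substitute your own):

# Scheduling detail and events for the stuck pod
kubectl describe pod nginx-app-85c7f8ddf-ngnfq
# Container logs, once the container has actually started
kubectl logs nginx-app-85c7f8ddf-ngnfq
# Recent events in the default namespace
kubectl get events
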

root@goat-lin001:/home/gsw> kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
root@goat-lin001:/home/gsw> kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY   STATUS              RESTARTS   AGE
default       nginx-app-85c7f8ddf-ngnfq             0/1     Pending             0          43s
kube-system   coredns-576cbf47c7-k5zqn              0/1     ContainerCreating   0          49m
kube-system   coredns-576cbf47c7-mhmz6              0/1     ContainerCreating   0          49m
kube-system   etcd-goat-lin001                      1/1     Running             0          49m
kube-system   kube-apiserver-goat-lin001            1/1     Running             0          48m
kube-system   kube-controller-manager-goat-lin001   1/1     Running             0          49m
kube-system   kube-flannel-ds-amd64-kprml           0/1     Pending             0          9m9s
kube-system   kube-flannel-ds-amd64-mcz55           0/1     Pending             0          8m31s
kube-system   kube-flannel-ds-amd64-sc4b9           0/1     Pending             0          8m47s
kube-system   kube-proxy-h7xxh                      1/1     Running             0          8m47s
kube-system   kube-proxy-ng8c4                      1/1     Running             0          9m9s
kube-system   kube-proxy-qmpfw                      1/1     Running             0          8m32s
kube-system   kube-proxy-rncb6                      1/1     Running             0          49m
kube-system   kube-scheduler-goat-lin001            1/1     Running             0          49m

root@goat-lin001:/home/gsw> sudo kubectl expose deployment nginx-app --port=80 --name=nginx-http
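
Once the deployment is exposed you should be able to see the new service and, assuming the pod eventually schedules and starts, fetch the default nginx page from its cluster IP; a hedged sketch:

# Show the ClusterIP assigned to the nginx-http service
kubectl get svc nginx-http
# Fetch the nginx welcome page from a cluster node
curl http://$(kubectl get svc nginx-http -o jsonpath='{.spec.clusterIP}')
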

Labeling nodes

It can be helpful to designate nodes with a role; the role then shows up in the ROLES column of kubectl get nodes.
root@goat-lin001:/home/gsw> kubectl label node goat-lin002 node-role.kubernetes.io/worker=worker
node/goat-lin002 labeled
root@goat-lin001:/home/gsw> kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
goat-lin001   Ready    master   46m   v1.12.0
goat-lin002   Ready    worker   33m   v1.12.0
goat-lin003   Ready    <none>   18m   v1.12.0
goat-lin004   Ready    <none>   13m   v1.12.0
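
The remaining workers can be labeled the same way; a small loop keeps it terse (node names as used throughout this series):

# Tag the other workers with the same role label
for n in goat-lin003 goat-lin004; do
  kubectl label node "$n" node-role.kubernetes.io/worker=worker
done
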
Removing a label is also possible. In this example 'custom' is the label being removed; the trailing hyphen is what tells kubectl to drop it:
gsw@goat-lin001:~$ kubectl label node goat-lin002 node-role.kubernetes.io/custom-
node/goat-lin002 labeled
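
To confirm what's applied after adding or removing labels, the --show-labels flag lists the full label set on each node:

# List nodes with all their labels
kubectl get nodes --show-labels
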

Summary

We've now got a simple Kubernetes cluster running on the VMs we've built, and we're ready to jump into the world of Kubernetes configuration and deployments.
