Tuesday, January 15, 2019

Flexible server Part VIII: Building a Kubernetes cluster


In the previous post we built a Kubernetes master node on one of many guest VMs we cloned. We'll complete the setup of the Kubernetes cluster to pave the way for your own pod deployments.

Perform an initial kubeadm init on the worker VM and create your user's .kube directory and config file, just as in the master configuration steps. You'll then reset the installation before joining the node to the cluster as a worker.

root@goat-lin002:~> kubeadm init --service-cidr "10.1.0.0/16"
root@goat-lin002:~> mkdir -p $HOME/.kube
root@goat-lin002:~> cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@goat-lin002:~> chown $(id -u):$(id -g) $HOME/.kube/config

root@goat-lin004:/home/gsw> kubeadm reset
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
Now use the kubeadm join command printed at the end of the master creation step to join the current node to the master we created in the previous post.
root@goat-lin002:/home/gsw> kubeadm join 192.168.1.101:6443 --token c1bzuk.j5lgevurg1zv77kr --discovery-token-ca-cert-hash sha256:d5c3839745caaa7df8df34c77d416585bfa00d19c64206099b74f1de0d0159c7
[preflight] running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 17.12.1-ce. Latest validated version: 18.06
[discovery] Trying to connect to API Server "192.168.1.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.101:6443"
[discovery] Requesting info from "https://192.168.1.101:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.101:6443"
[discovery] Successfully established connection with API Server "192.168.1.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "goat-lin004" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
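
If you've lost the original join string or the token has expired, a replacement can be generated on the master. This wasn't part of the original session, but it's standard kubeadm behaviour and prints a complete join command with a fresh token (placeholders below are left as placeholders):

root@goat-lin001:/home/gsw> kubeadm token create --print-join-command
kubeadm join 192.168.1.101:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<ca-cert-hash>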
Get the updated config for your local account.
root@goat-lin002:/home/gsw> cp -i /etc/kubernetes/kubelet.conf $HOME/.kube/config
root@goat-lin002:/home/gsw> chown $(id -u):$(id -g) $HOME/.kube/config
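
Copying kubelet.conf over the top of your kubeconfig works, but as an alternative (not something the original steps used) kubectl can simply be pointed at the file through the KUBECONFIG environment variable:

root@goat-lin002:/home/gsw> export KUBECONFIG=/etc/kubernetes/kubelet.conf
root@goat-lin002:/home/gsw> kubectl get nodes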
Now try running some admin commands, starting with listing the nodes in the cluster (four in this case: the master plus three workers):
root@goat-lin002:/home/gsw> kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
goat-lin001   Ready    master   37m     v1.12.0
goat-lin002   Ready    <none>   24m     v1.12.0
goat-lin003   Ready    <none>   9m12s   v1.12.0
goat-lin004   Ready    <none>   4m13s   v1.12.0
There are some really useful commands that can help you understand how the cluster has been configured, so try experimenting with them. Below, kubectl describe nodes shows key details about the current node:
root@goat-lin002:/home/gsw> kubectl describe nodes goat-lin002
Name:               goat-lin002
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=goat-lin002
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 28 Sep 2018 19:35:40 +1000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Fri, 28 Sep 2018 19:47:34 +1000   Fri, 28 Sep 2018 19:47:34 +1000   WeaveIsUp                    Weave pod has set this
  OutOfDisk            False   Fri, 28 Sep 2018 20:03:02 +1000   Fri, 28 Sep 2018 19:47:19 +1000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure       False   Fri, 28 Sep 2018 20:03:02 +1000   Fri, 28 Sep 2018 19:47:19 +1000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Fri, 28 Sep 2018 20:03:02 +1000   Fri, 28 Sep 2018 19:47:19 +1000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Fri, 28 Sep 2018 20:03:02 +1000   Fri, 28 Sep 2018 19:35:40 +1000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Fri, 28 Sep 2018 20:03:02 +1000   Fri, 28 Sep 2018 19:47:39 +1000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.1.102
  Hostname:    goat-lin002
Capacity:
 cpu:                2
 ephemeral-storage:  4062912Ki
 memory:             4089508Ki
 pods:               110
Allocatable:
 cpu:                2
 ephemeral-storage:  3744379694
 memory:             3987108Ki
 pods:               110
System Info:
 Machine ID:                 b3ffe8f6ceb44f348e01da5c0f27b9cb
 System UUID:                b3ffe8f6ceb44f348e01da5c0f27b9cb
 Boot ID:                    99dccf81-2531-4302-9586-51b47bf7d737
 Kernel Version:             4.15.0-34-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.12.1-ce
 Kubelet Version:            v1.12.0
 Kube-Proxy Version:         v1.12.0
Non-terminated Pods:         (2 in total)
  Namespace                  Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                ------------  ----------  ---------------  -------------
  kube-system                kube-proxy-2h295    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                weave-net-mggnp     20m (1%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       20m (1%)  0 (0%)
  memory    0 (0%)    0 (0%)
Events:     <none>
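
Individual fields from that output can also be pulled out directly with jsonpath queries. These weren't run in the original session, but the field paths follow the standard Node API:

root@goat-lin002:/home/gsw> kubectl get node goat-lin002 -o jsonpath='{.status.nodeInfo.kubeletVersion}'
v1.12.0
root@goat-lin002:/home/gsw> kubectl get node goat-lin002 -o jsonpath='{.status.capacity.cpu}'
2
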
Listing every pod across all namespaces is another useful check; it shows the control plane components on the master plus the per-node kube-proxy and weave-net pods:
root@goat-lin001:/home/gsw> kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-bjqh4              1/1     Running   0          42m
kube-system   coredns-576cbf47c7-wkf2t              1/1     Running   0          42m
kube-system   etcd-goat-lin001                      1/1     Running   0          41m
kube-system   kube-apiserver-goat-lin001            1/1     Running   0          41m
kube-system   kube-controller-manager-goat-lin001   1/1     Running   0          41m
kube-system   kube-proxy-2h295                      1/1     Running   0          29m
kube-system   kube-proxy-77lb9                      1/1     Running   0          14m
kube-system   kube-proxy-fjbb6                      1/1     Running   0          9m12s
kube-system   kube-proxy-m6fv5                      1/1     Running   0          42m
kube-system   kube-scheduler-goat-lin001            1/1     Running   0          41m
kube-system   weave-net-b27kr                       2/2     Running   0          9m12s
kube-system   weave-net-gj2s6                       2/2     Running   0          14m
kube-system   weave-net-mggnp                       2/2     Running   1          29m
kube-system   weave-net-rs9h8                       2/2     Running   0          38m
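
To see which node each of those system pods actually landed on, add -o wide, which appends the pod IP and node columns. This is a generic kubectl flag rather than something captured in the original post:

root@goat-lin001:/home/gsw> kubectl get pods -n kube-system -o wide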

Running a test service

Here's how to get an initial pod up and running on the Kubernetes cluster. We're choosing nginx here as it's pre-built and you'll probably end up using it on your system anyway. You can see that nginx has been created in the default namespace, alongside some broken kube-flannel pods from the earlier flannel attempt. nginx is showing as Pending here because the pod network wasn't healthy when this was captured; exposing port 80 as a service (the last command below) is what makes it reachable once it's running. Debugging services can be tricky until you wrap your head around the various utilities and logs that can help you locate things; the Kubernetes documentation on debugging pods and replication controllers is a good starting point.

root@goat-lin001:/home/gsw> kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
root@goat-lin001:/home/gsw> kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY   STATUS              RESTARTS   AGE
default       nginx-app-85c7f8ddf-ngnfq             0/1     Pending             0          43s
kube-system   coredns-576cbf47c7-k5zqn              0/1     ContainerCreating   0          49m
kube-system   coredns-576cbf47c7-mhmz6              0/1     ContainerCreating   0          49m
kube-system   etcd-goat-lin001                      1/1     Running             0          49m
kube-system   kube-apiserver-goat-lin001            1/1     Running             0          48m
kube-system   kube-controller-manager-goat-lin001   1/1     Running             0          49m
kube-system   kube-flannel-ds-amd64-kprml           0/1     Pending             0          9m9s
kube-system   kube-flannel-ds-amd64-mcz55           0/1     Pending             0          8m31s
kube-system   kube-flannel-ds-amd64-sc4b9           0/1     Pending             0          8m47s
kube-system   kube-proxy-h7xxh                      1/1     Running             0          8m47s
kube-system   kube-proxy-ng8c4                      1/1     Running             0          9m9s
kube-system   kube-proxy-qmpfw                      1/1     Running             0          8m32s
kube-system   kube-proxy-rncb6                      1/1     Running             0          49m
kube-system   kube-scheduler-goat-lin001            1/1     Running             0          49m

root@goat-lin001:/home/gsw> sudo kubectl expose deployment nginx-app --port=80 --name=nginx-http
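
Once the deployment is exposed you'd normally confirm the service and poke at the pod. None of the commands below come from the original session; substitute the CLUSTER-IP that kubectl get svc actually reports, and note that the pod name is the one from the listing above:

root@goat-lin001:/home/gsw> kubectl get svc nginx-http
root@goat-lin001:/home/gsw> kubectl describe pod nginx-app-85c7f8ddf-ngnfq
root@goat-lin001:/home/gsw> curl http://<CLUSTER-IP>:80

The Events section of the describe output is usually the quickest way to find out why a pod is stuck in Pending.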

Labeling nodes

It can be helpful to designate nodes with a role label, so that kubectl get nodes shows something more descriptive than <none>:
gsw@goat-lin001:~$ kubectl label node goat-lin002 node-role.kubernetes.io/worker=worker
root@goat-lin001:/home/gsw> kubectl label node goat-lin002 node-role.kubernetes.io/worker=worker
node/goat-lin002 labeled
root@goat-lin001:/home/gsw> kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
goat-lin001   Ready    master   46m   v1.12.0
goat-lin002   Ready    worker   33m   v1.12.0
goat-lin003   Ready    <none>   18m   v1.12.0
goat-lin004   Ready    <none>   13m   v1.12.0
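
Role labels aren't purely cosmetic: any node label can be used to steer scheduling. As a hedged sketch (the pod and file names here are invented for illustration, not taken from the original posts), a pod can be pinned to labelled workers with a nodeSelector:

root@goat-lin001:/home/gsw> cat nginx-on-worker.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-worker
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: worker
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
root@goat-lin001:/home/gsw> kubectl apply -f nginx-on-worker.yaml
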
Removing a label is also possible; in this example the label key is node-role.kubernetes.io/custom, and the trailing dash removes it:
gsw@goat-lin001:~$ kubectl label node goat-lin002 node-role.kubernetes.io/custom-
node/goat-lin002 labeled
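
To confirm which labels a node actually carries after adding or removing them, --show-labels is handy (this check isn't in the original transcript):

gsw@goat-lin001:~$ kubectl get node goat-lin002 --show-labels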

Summary

We've now got a simple Kubernetes cluster running on the VMs we built, and we're ready to jump into the world of Kubernetes configuration and deployments.

References


Flexible server Part VII: Building a Kubernetes master


In the previous post we looked at building up a server from bare metal to being able to create lots of VMs by cloning an image we'd built with common packages and integrations like LDAP. I created ten guest servers with a view to installing various apps on them. In this post, we'll build a Kubernetes master as the first step towards a cluster.

We start by installing a number of packages onto our chosen VM. First, add the Kubernetes repository to our apt sources:


gsw@goat-lin001:~$ sudo vi /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
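
If you'd rather not open an editor, the same line can be written non-interactively with tee; this is just an alternative to the vi step above, not what the original post did:

gsw@goat-lin001:~$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list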
gsw@goat-lin001:~$ sudo apt-get update
gsw@goat-lin001:~$ sudo apt install -y apt-transport-https docker.io curl gnupg2
gsw@goat-lin001:~$ sudo systemctl start docker
gsw@goat-lin001:~$ sudo systemctl enable docker
gsw@goat-lin001:~$ sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
gsw@goat-lin001:~$ sudo apt-get update
gsw@goat-lin001:~$ sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni
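
Two housekeeping steps are commonly recommended at this point, although they aren't in the original transcript: hold the Kubernetes packages so an unattended upgrade doesn't move the cluster to a new version, and disable swap, since kubeadm's preflight checks fail when swap is enabled:

gsw@goat-lin001:~$ sudo apt-mark hold kubelet kubeadm kubectl
gsw@goat-lin001:~$ sudo swapoff -a
gsw@goat-lin001:~$ sudo sed -i '/ swap / s/^/#/' /etc/fstab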
The kubeadm command is used to initialise the master; it's run below as root.
root@goat-lin001:/home/gsw$ kubeadm init --service-cidr "10.1.0.0/16"
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 17.12.1-ce. Latest validated version: 18.06
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [goat-lin001 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [goat-lin001 localhost] and IPs [192.168.1.101 127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [goat-lin001 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.1.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 25.504183 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node goat-lin001 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node goat-lin001 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "goat-lin001" as an annotation
[bootstraptoken] using token: c1bzuk.j5lgevurg1zv77kr
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.1.101:6443 --token c1bzak.j5lgevuwr1zv47kr --discovery-token-ca-cert-hash sha256:d5c3839745caaa7dfddf34c77d416485bfa00d29c64206098b74f1de0d0129c7
You should now go ahead and create the directory and copy the files as indicated above under the appropriate account; kubectl commands won't work without this in place. Also take note of the last command above, which is what allows additional nodes to be joined to the cluster. At this point the master is running but there is no pod network installed. I tried to get flannel up and running but it kept entering CrashLoopBackOff, so I switched to Weave instead.
export kubever=$(kubectl version | base64 | tr -d '\n')
gsw@goat-lin001:~$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created
Finally, check that the master has been set up and is showing a STATUS of Ready:
root@goat-lin001:/home/gsw> kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
goat-lin001   Ready    master   5m    v1.12.0

root@goat-lin001:/home/gsw> kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-bjqh4              1/1     Running   0          8m50s
kube-system   coredns-576cbf47c7-wkf2t              1/1     Running   0          8m50s
kube-system   etcd-goat-lin001                      1/1     Running   0          7m51s
kube-system   kube-apiserver-goat-lin001            1/1     Running   0          7m49s
kube-system   kube-controller-manager-goat-lin001   1/1     Running   0          7m56s
kube-system   kube-proxy-m6fv5                      1/1     Running   0          8m50s
kube-system   kube-scheduler-goat-lin001            1/1     Running   0          7m54s
kube-system   weave-net-rs9h8                       2/2     Running   0          4m42s
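
Two more quick health checks worth knowing about (not run in the original session) are the control plane component status and the cluster endpoints:

root@goat-lin001:/home/gsw> kubectl get componentstatuses
root@goat-lin001:/home/gsw> kubectl cluster-info
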
Next: Flexible server Part VIII: Building a Kubernetes cluster