In the previous post we looked at building up a server from bare metal to the point where we could create lots of VMs by cloning an image built with some common packages and integrations such as LDAP. I created ten guest servers with a view to installing various apps on them. In this post, we'll build a Kubernetes master as the first step towards a cluster.
We start by installing a number of packages onto our chosen VM. First, the Kubernetes repository needs to be added to our apt sources.
gsw@goat-lin001:~$ sudo vi /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
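As an aside, if you'd rather not open an editor for a one-line file, the same entry can be written straight from the shell. A sketch of the equivalent:

echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list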
gsw@goat-lin001:~$ sudo apt-get update
gsw@goat-lin001:~$ sudo apt install -y apt-transport-https docker.io curl gnupg2
gsw@goat-lin001:~$ sudo systemctl start docker
gsw@goat-lin001:~$ sudo systemctl enable docker
gsw@goat-lin001:~$ sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
gsw@goat-lin001:~$ sudo apt-get update
gsw@goat-lin001:~$ sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni

With the packages in place, kubeadm can be used to initialise the master, which is run below as root.
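Before running the init, the control plane images can optionally be pulled ahead of time — kubeadm's own preflight output below mentions this — so the init itself spends less time downloading. This wasn't part of the original run, but as root it would simply be:

kubeadm config images pull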
root@goat-lin001:/home/gsw$ kubeadm init --service-cidr "10.1.0.0/16"
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 17.12.1-ce. Latest validated version: 18.06
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [goat-lin001 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [goat-lin001 localhost] and IPs [192.168.1.101 127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [goat-lin001 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.1.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 25.504183 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node goat-lin001 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node goat-lin001 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "goat-lin001" as an annotation
[bootstraptoken] using token: c1bzuk.j5lgevurg1zv77kr
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.1.101:6443 --token c1bzak.j5lgevuwr1zv47kr --discovery-token-ca-cert-hash sha256:d5c3839745caaa7dfddf34c77d416485bfa00d29c64206098b74f1de0d0129c7

You should now go ahead and create the directory and copy the files as indicated above, under the appropriate account; kubectl commands won't work until this is in place. Also take note of the final kubeadm join command above, which is what will allow additional nodes to be added to the cluster. At this point the master is running but there is no pod network installed. I tried to get flannel up and running, but it kept entering CrashLoopBackOff, so I switched to Weave Net instead.
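Weave serves its installer manifest keyed to the cluster's Kubernetes version, which is why the commands below first capture the version string (base64-encoded) and pass it as a query parameter on the manifest URL. If you'd like to see exactly what is about to be applied, the manifest can be downloaded and reviewed first — a sketch using the same URL, not something done in the original session:

kubever=$(kubectl version | base64 | tr -d '\n')
curl -sL "https://cloud.weave.works/k8s/net?k8s-version=$kubever" -o weave-net.yaml
less weave-net.yaml   # review, then: kubectl apply -f weave-net.yaml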
export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"

gsw@goat-lin001:~$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created

Finally, check that the master has been set up and is showing Status=Ready.
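The weave pods can take a minute or two to pull and start, so the node may not go Ready straight away. One way to watch things settle (an aside, not part of the original session):

kubectl get pods -n kube-system -w   # Ctrl-C once everything shows Running

Once everything has come up, the node and the kube-system pods should look like this: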
root@goat-lin001:/home/gsw> kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
goat-lin001   Ready    master   5m    v1.12.0

root@goat-lin001:/home/gsw> kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-bjqh4              1/1     Running   0          8m50s
kube-system   coredns-576cbf47c7-wkf2t              1/1     Running   0          8m50s
kube-system   etcd-goat-lin001                      1/1     Running   0          7m51s
kube-system   kube-apiserver-goat-lin001            1/1     Running   0          7m49s
kube-system   kube-controller-manager-goat-lin001   1/1     Running   0          7m56s
kube-system   kube-proxy-m6fv5                      1/1     Running   0          8m50s
kube-system   kube-scheduler-goat-lin001            1/1     Running   0          7m54s
kube-system   weave-net-rs9h8                       2/2     Running   0          4m42s

Next: Flexible server Part VIII: Building a Kubernetes cluster