Kubernetes the Hard Way Part 02


Bootstrapping the Kubernetes Control Plane:

Installing Kubernetes Control Plane Binaries:

The first step in bootstrapping a new Kubernetes control plane is to install the necessary binaries on the controller servers.

You can install the control plane binaries on each control node like this:

$sudo mkdir -p /etc/kubernetes/config

$wget -q --show-progress --https-only --timestamping \
  "https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-apiserver" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-controller-manager" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-scheduler" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubectl"

$chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl

$sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/

Setting up the Kubernetes API Server:

The Kubernetes API server provides the primary interface for the Kubernetes control plane and the cluster as a whole. When you interact with Kubernetes, you are nearly always doing it through the Kubernetes API server.

Configure the Kubernetes API server. First, make a directory to hold its data files:

$sudo mkdir -p /var/lib/kubernetes/

$sudo cp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
  service-account-key.pem service-account.pem \
  encryption-config.yaml /var/lib/kubernetes/

Set some environment variables that will be used to create the systemd unit file. Make sure you replace the placeholders with their actual values:

$INTERNAL_IP=<private ip of this controller>
$CONTROLLER0_IP=<private ip of controller 0>
$CONTROLLER1_IP=<private ip of controller 1>

Generate the kube-apiserver unit file for systemd:

cat << EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --enable-swagger-ui=true \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://$CONTROLLER0_IP:2379,https://$CONTROLLER1_IP:2379 \\
  --event-ttl=1h \\
  --experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --kubelet-https=true \\
  --runtime-config=api/all \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
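A note on the heredoc syntax used to generate these unit files: because the EOF delimiter is unquoted, the shell expands variables like ${INTERNAL_IP} at the moment the file is written, and each doubled backslash collapses to the single backslash systemd expects for line continuation. A minimal sketch (with a made-up example IP) showing both behaviors:

```shell
# Demo of unquoted-heredoc behavior (the IP here is a hypothetical example)
INTERNAL_IP=10.240.0.11

cat << EOF > /tmp/heredoc-demo.txt
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
EOF

# The written file contains the expanded IP and single backslashes
cat /tmp/heredoc-demo.txt
```

If you quote the delimiter (`<< 'EOF'`), no expansion happens at all, which is why these commands deliberately leave it unquoted.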

Setting up the Kubernetes Controller Manager:

Now that we have set up kube-apiserver, we are ready to configure kube-controller-manager.

You can configure the Kubernetes Controller Manager like so:

$sudo cp kube-controller-manager.kubeconfig /var/lib/kubernetes/

Generate the kube-controller-manager systemd unit file:

cat << EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF


Setting up the Kubernetes Scheduler:

Copy kube-scheduler.kubeconfig into the proper location:

$sudo cp kube-scheduler.kubeconfig /var/lib/kubernetes/

Generate the kube-scheduler yaml config file:

cat << EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: componentconfig/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF

Create the kube-scheduler systemd unit file:

cat << EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF


Start and enable all of the services:

$sudo systemctl daemon-reload
$sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
$sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler

Make sure all the services are active:

$sudo systemctl status kube-apiserver kube-controller-manager kube-scheduler

Use kubectl to check componentstatuses:

$kubectl get componentstatuses --kubeconfig admin.kubeconfig

Enable HTTP Health Checks:

Set up a basic nginx proxy for the healthz endpoint by first installing nginx:

$sudo apt-get install -y nginx

Create an nginx configuration for the health check proxy:

cat > kubernetes.default.svc.cluster.local << EOF
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF

Set up the proxy configuration so that it is loaded by nginx:

$sudo mv kubernetes.default.svc.cluster.local /etc/nginx/sites-available/kubernetes.default.svc.cluster.local

$sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
$sudo systemctl restart nginx
$sudo systemctl enable nginx

You can verify that everything is working like so:

$curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz

Set up RBAC for Kubelet Authorization:

One of the necessary steps in setting up a new Kubernetes cluster from scratch is to assign permissions that allow the Kubernetes API to access various functionality within the worker kubelets.

Create a role with the necessary permissions:

cat << EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF

Bind the role to the kubernetes user:

cat << EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

Setting up a Kube API Frontend Load Balancer:

In order to achieve redundancy for your Kubernetes cluster, you will need to load balance usage of the Kubernetes API across multiple control nodes.

Run the following commands on the server designated as your load balancer:

$sudo apt-get install -y nginx
$sudo systemctl enable nginx
$sudo mkdir -p /etc/nginx/tcpconf.d
$sudo vi /etc/nginx/nginx.conf

Add the following to the end of nginx.conf:

include /etc/nginx/tcpconf.d/*;
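The stream module configuration cannot live inside the existing http block, which is why it goes in a separate include at the top level of nginx.conf. The resulting layout looks roughly like this (abridged sketch):

```
# /etc/nginx/nginx.conf (abridged)
events {
    # ...
}

http {
    # ...existing http configuration...
}

include /etc/nginx/tcpconf.d/*;
```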

Set up some environment variables for the load balancer config file:

$CONTROLLER0_IP=<controller 0 private ip>
$CONTROLLER1_IP=<controller 1 private ip>

Create the load balancer nginx config file:

cat << EOF | sudo tee /etc/nginx/tcpconf.d/kubernetes.conf
stream {
    upstream kubernetes {
        server $CONTROLLER0_IP:6443;
        server $CONTROLLER1_IP:6443;
    }

    server {
        listen 6443;
        listen 443;
        proxy_pass kubernetes;
    }
}
EOF

Reload the nginx configuration:

$sudo nginx -s reload

You can verify that the load balancer is working like so:

$curl -k https://localhost:6443/version

Bootstrapping the Kubernetes Worker Nodes:

Installing Worker Node Binaries:

We are now ready to begin the process of setting up our worker nodes. The first step is to download and install the binaries which we will later use to configure our worker node services.

You can install the worker binaries like so. Run these commands on both worker nodes:

$sudo apt-get -y install socat conntrack ipset

$wget -q --show-progress --https-only --timestamping \
  https://github.com/kubernetes-incubator/cri-tools/releases/download/v1.0.0-beta.0/crictl-v1.0.0-beta.0-linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-the-hard-way/runsc \
  https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \
  https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
  https://github.com/containerd/containerd/releases/download/v1.1.0/containerd-1.1.0.linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubectl \
  https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-proxy \
  https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubelet
$sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes

$chmod +x kubectl kube-proxy kubelet runc.amd64 runsc
$sudo mv runc.amd64 runc
$sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
$sudo tar -xvf crictl-v1.0.0-beta.0-linux-amd64.tar.gz -C /usr/local/bin/
$sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
$sudo tar -xvf containerd-1.1.0.linux-amd64.tar.gz -C /

Configuring Containerd:

Containerd is the container runtime used to run containers managed by Kubernetes

You can configure the containerd service like so. Run these commands on both worker nodes:

$sudo mkdir -p /etc/containerd/

Create the containerd config.toml:

cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runc"
      runtime_root = ""
    [plugins.cri.containerd.untrusted_workload_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runsc"
      runtime_root = "/run/containerd/runsc"
EOF

Create the containerd unit file:

cat << EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF


Configuring Kubelet:

Kubelet is the Kubernetes agent which runs on each worker node. Acting as a middleman between the Kubernetes control plane and the underlying container runtime, it coordinates the running of containers on the worker node.

Set a HOSTNAME environment variable that will be used to generate your config files. Make sure you set the HOSTNAME appropriately for each worker node:

$HOSTNAME=<worker node hostname>
$sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/

$sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig

$sudo mv ca.pem /var/lib/kubernetes/

Create the kubelet config file:

cat << EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF

Create the kubelet unit file:

cat << EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2 \\
  --hostname-override=${HOSTNAME}
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF


Configuring Kube-Proxy:

Kube-proxy is an important component of each Kubernetes worker node. It is responsible for providing network routing to support Kubernetes networking components.
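In the iptables mode configured below, kube-proxy watches the API server for services and endpoints, then programs NAT rules so that traffic sent to a service's cluster IP is rewritten to one of the backing pod IPs. A simplified sketch of the kind of rule it manages (the addresses here are hypothetical, and kube-proxy maintains these rules itself -- you never add them by hand):

```
# Hypothetical sketch: traffic to service cluster IP 10.32.0.100:80
# gets DNAT'ed to a backing pod at 10.200.1.5:80
# iptables -t nat -A KUBE-SERVICES -d 10.32.0.100/32 -p tcp --dport 80 \
#     -j DNAT --to-destination 10.200.1.5:80
```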

You can configure the kube-proxy service like so. Run these commands on both worker nodes:

$sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig

Create the kube-proxy config file:

cat << EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF

Create the kube-proxy unit file:

cat << EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF


Now you are ready to start up the worker node services! Run these commands:

$sudo systemctl daemon-reload

$sudo systemctl enable containerd kubelet kube-proxy

$sudo systemctl start containerd kubelet kube-proxy

Check the status of each service to make sure they are all active (running) on both worker nodes:

$sudo systemctl status containerd kubelet kube-proxy

Finally, verify that both workers have registered themselves with the cluster. Log in to one of your control nodes and run this:

$kubectl get nodes

You should see the hostnames for both worker nodes listed.

Configuring Kubectl for Remote Access:

In a separate shell, open up an ssh tunnel to port 6443 on your Kubernetes API load balancer:

$ssh -L 6443:localhost:6443 user@<your load balancer cloud server public IP>

You can configure your local kubectl in your main shell like so. Since the ssh tunnel forwards local port 6443 to the load balancer, kubectl can reach the cluster through https://localhost:6443:

$cd ~/kthw

$kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://localhost:6443

$kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem

$kubectl config set-context kubernetes-the-hard-way \
  --cluster=kubernetes-the-hard-way \
  --user=admin

$kubectl config use-context kubernetes-the-hard-way

Verify that everything is working with:

$kubectl get pods

$kubectl get nodes

$kubectl version

Installing Weave Net:

We are now ready to set up networking in our Kubernetes cluster.

First, log in to both worker nodes and enable IP forwarding:

$sudo sysctl net.ipv4.conf.all.forwarding=1

$echo "net.ipv4.conf.all.forwarding=1" | sudo tee -a /etc/sysctl.conf

The remaining commands can be done using kubectl. To connect with kubectl, you can either log in to one of the control nodes and run kubectl there or open an SSH tunnel for port 6443 to the load balancer server and use kubectl locally.

You can open the SSH tunnel by running this in a separate terminal. Leave the session open while you are working to keep the tunnel active:

$ssh -L 6443:localhost:6443 user@<your load balancer cloud server public IP>

Install Weave Net like this:

$kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=10.200.0.0/16"
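The base64/tr pipeline inside that URL exists because kubectl version prints multi-line text, which cannot be embedded in a query string directly; encoding it and stripping newlines yields a single token that Weave's server can decode. A small sketch with stand-in version text:

```shell
# Stand-in for 'kubectl version' output (hypothetical example text)
version_info='Client Version: v1.10.2
Server Version: v1.10.2'

# Encode and strip newlines so the result fits in one URL query parameter
encoded=$(printf '%s' "$version_info" | base64 | tr -d '\n')
echo "https://cloud.weave.works/k8s/net?k8s-version=$encoded"
```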

Now Weave Net is installed, but we need to test our network to make sure everything is working.

First, make sure the Weave Net pods are up and running:

$kubectl get pods -n kube-system

Next, we want to test that pods can connect to each other and that they can connect to services. We will set up two Nginx pods and a service for those two pods. Then, we will create a busybox pod and use it to test connectivity to both Nginx pods and the service.

First, create an Nginx deployment with 2 replicas:

cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

Next, create a service for that deployment so that we can test connectivity to services as well:

$kubectl expose deployment/nginx

Now let’s start up another pod. We will use this pod to test our networking. We will test whether we can connect to the other pods and services from this pod.

$kubectl run busybox --image=radial/busyboxplus:curl --command -- sleep 3600

$POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
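The jsonpath expression {.items[0].metadata.name} walks kubectl's JSON response: take the first entry of items, then its metadata.name. As an illustration only, here is that extraction performed on a small canned response (the pod name is made up; the real command queries the live cluster):

```shell
# Canned stand-in for 'kubectl get pods -o json' output (hypothetical name)
response='{"items":[{"metadata":{"name":"busybox-7bdf994d94-x2m5k"}}]}'

# Rough shell equivalent of jsonpath {.items[0].metadata.name} for this
# simple single-pod shape: grab the value of the "name" key
POD_NAME=$(echo "$response" | sed -n 's/.*"name":"\([^"]*\)".*/\1/p')
echo "$POD_NAME"
```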

Now let’s get the IP addresses of our two Nginx pods:

$kubectl get ep nginx

There should be two IP addresses listed under ENDPOINTS.

Now let’s make sure the busybox pod can connect to the Nginx pods on both of those IP addresses.

$kubectl exec $POD_NAME -- curl <first nginx pod IP address>

$kubectl exec $POD_NAME -- curl <second nginx pod IP address>

Both commands should return some HTML with the title "Welcome to nginx!" This means that we can successfully connect to other pods.

Now let’s verify that we can connect to services.

$kubectl get svc

This should display the IP address for our Nginx service.

Let’s see if we can access the service from the busybox pod!

$kubectl exec $POD_NAME -- curl <nginx service IP address>

This should also return HTML with the title "Welcome to nginx!"

This means that we have successfully reached the Nginx service from inside a pod and that our networking configuration is working!

Deploying Kube-dns to the Cluster:

Kube-dns is an easy-to-use solution for providing DNS service in a Kubernetes cluster.

To install and test kube-dns, you will need to use kubectl. To connect with kubectl, you can either log in to one of the control nodes and run kubectl there, or open an SSH tunnel for port 6443 to the load balancer server and use kubectl locally.

You can open the SSH tunnel by running this in a separate terminal. Leave the session open while you are working to keep the tunnel active:

$ssh -L 6443:localhost:6443 user@<your load balancer cloud server public IP>

Install kube-dns:

$kubectl create -f https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml

Verify that the kube-dns pod starts up correctly:

$kubectl get pods -l k8s-app=kube-dns -n kube-system

Make sure that 3/3 containers are ready, and that the pod has a status of Running. It may take a moment for the pod to be fully up and running, so if READY is not 3/3 at first, check again after a few moments.

Now let’s test our kube-dns installation by doing a DNS lookup from within a pod. First, we need to start up a pod that we can use for testing:

$kubectl run busybox --image=busybox:1.28 --command -- sleep 3600

$POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")

Next, run an nslookup from inside the busybox container:

$kubectl exec -ti $POD_NAME -- nslookup kubernetes