ARCHITECTURE DIAGRAM OF KUBERNETES
KUBE-APISERVER:- Front end of the control plane. Exposes the REST API. Consumes JSON/YAML (via manifest files).
CLUSTER STORE:- Persistent storage for all cluster state and configuration. Uses etcd as the cluster's key-value store.
KUBE-CONTROLLER-MANAGER:- Maintains the desired state. Runs controllers (e.g. the node controller) that watch for changes and help maintain the desired state.
KUBE-SCHEDULER:- Watches the apiserver for new pods (created or deleted) and assigns them to nodes.
KUBELET:- The main Kubernetes node agent. Registers the node with the cluster, watches the apiserver, instantiates pods, and reports back to the master. Exposes an endpoint on port 10255 (read-only).
CONTAINER ENGINE:- Handles container management: pulling images, starting and stopping containers.
KUBE-PROXY:- Handles pod and Kubernetes networking. Every pod gets one IP address, and all containers in a pod share that single IP. Load balances traffic across all pods in a service.
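Since the apiserver consumes manifest files, a minimal pod manifest of the kind it accepts might look like the sketch below (the name and image are illustrative, not from the course lab):
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: web
    image: nginx
Submitting it with kubectl create -f example-pod.yaml sends it to the apiserver; the scheduler then assigns the pod to a node, and the kubelet on that node starts the container.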
HOW TO INSTALL DOCKER AND KUBERNETES
We set up the cluster in a CentOS 7 environment. Create three servers: one master (2 virtual CPUs, 4 GB memory) and two nodes (1 virtual CPU, 2 GB memory each).
Turn off swap on all servers.
sudo swapoff -a
sudo vi /etc/fstab
Look for the line in /etc/fstab that says /root/swap and add a # at the start of that line so it reads #/root/swap. Then save the file.
Install and configure Docker.
sudo yum -y install docker
sudo systemctl enable docker
sudo systemctl start docker
Add the Kubernetes repo.
cat << EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Turn off selinux.
sudo setenforce 0
sudo vi /etc/selinux/config
Change the line that says SELINUX=enforcing to SELINUX=permissive and save the file.
Install Kubernetes Components.
sudo yum install -y kubelet kubeadm kubectl
sudo systemctl enable kubelet
sudo systemctl start kubelet
Configure sysctl
cat << EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
Initialize the Kube Master. Do this only on the master node.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install flannel networking.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
The kubeadm init command that you ran on the master should output a kubeadm join command containing a token and hash (you can also generate a fresh one later with "kubeadm token create --print-join-command"). Copy that command from the master and run it on all worker nodes with sudo.
sudo kubeadm join $controller_private_ip:6443 --token $token --discovery-token-ca-cert-hash $hash
Now you are ready to verify that the cluster is up and running. On the Kube Master server, check the list of nodes.
kubectl get nodes
Make sure that all of your nodes are listed and that all have a STATUS of Ready.
API Primitives
Kubernetes objects are persistent entities in the Kubernetes system.
Kubernetes uses these to represent the state of the cluster.
Describe:
• What applications are running.
• Which nodes those applications are running on.
• Policies around those applications.
Kubernetes objects are "records of intent".
Object Spec:
• Provided to Kubernetes.
• Describes desired state of objects.
Object Status:
• Provided by Kubernetes.
• Describes the actual state of the object.
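As a quick illustration (assuming a deployment named nginx already exists), you can view both in a single call:
kubectl get deployment nginx -o yaml
The .spec section shows the desired state you provided; the .status section shows the actual state Kubernetes reports.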
COMMON KUBERNETES OBJECTS.
• Nodes
• Pods
• Deployments
• Services
• ConfigMaps
Names
• All objects have a unique name.
• Client provided.
• Can be reused.
• Maximum length of 253 characters.
• Lower case alphanumeric characters, - and . allowed
UIDs
• All objects have a unique UID.
• Generated by Kubernetes.
• Spatially and temporally unique.
NAMESPACES
• Multiple virtual clusters backed by the same physical cluster.
• Generally for large deployments.
• Provide scope for names.
• Easy way to divide cluster resources.
• Allows for multiple teams of users.
• Allows for resource quotas.
• Special “kube-system” namespace.
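For example, to create a namespace and then look at the pods in the special "kube-system" namespace (the namespace name "dev" is just an illustration):
kubectl create namespace dev
kubectl get namespaces
kubectl get pods -n kube-system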
NODES
• Might be a VM or physical machine.
• Services necessary to run pods.
• Managed by the master.
• Services necessary: 1) Container runtime 2) Kubelet 3) Kube-proxy
• Not inherently created by Kubernetes, but by the Cloud Provider.
• Kubernetes checks the node for validity.
CLOUD CONTROLLER MANAGERS
• Route controller (gce clusters only)
• Service Controller
• PersistentVolumeLabels controller
NODE CONTROLLER
• Assigns CIDR block to a newly registered node.
• Keeps track of the nodes.
• Monitors the node health.
• Evicts pods from unhealthy nodes.
• Can taint nodes based on current conditions in more recent versions.
KUBERNETES SERVICES & NETWORK PRIMITIVES
KUBERNETES SERVICES
• Underlying architecture
• Pod – Simplest kubernetes object, represents one or more containers running on a single node.
• Ephemeral, disposable & replaceable – Stateless
• Cattle vs Pets.
• Usually Managed via Deployments
• Deployment specifications
• Image
• Number of replicas
• Particular port or IP address
• Running the application pods.
• How you set up a service depends on your networking configuration and how you will handle load balancing and port forwarding.
• If you use the pod network IP address method, then the service in front of a deployment gets assigned a single IP address – even if there are multiple replicas of that pod.
• The Kubernetes service (using kube-proxy on the node) redirects traffic to the individual pods.
Imperative :
kubectl run nginx --image=nginx
Declarative :
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
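To have kube-proxy load balance across the replicas, the deployment is typically fronted by a service. A minimal sketch (NodePort is just one of the possible service types):
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get services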
DESIGNING A KUBERNETES CLUSTER
MINIKUBE:
• Minikube is the recommended method for creating a single node kubernetes deployment on your local workstation.
• Its installation is automated and doesn’t require a cloud provider, and it works pretty well.
• I played with minikube a lot while I wrote this course and can definitely recommend it on Linux, Mac OS X, and Windows.
KUBEADM
• You can use Kubeadm to deploy multi-node locally if you like, but it’s a little more challenging.
• You'll need to select your own CNI (Container Network Interface) plugin if you go this route.
• You’ll recall we used Flannel in our lab, and I can say it’s pretty stable and good for learning.
ADD-ON SOLUTIONS
• Vendors also offer a wide variety of add-ons to k8s. One of the most important ones to consider is the CNI – Container Networking Interface.
• Some CNIs include:
- Calico is a secure L3 networking and network policy provider.
- Canal unites Flannel and Calico, providing networking and network policy.
- Cilium is a L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported.
- CNI-Genie enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
CONTIV
• Contiv provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework.
• The Contiv project is fully open source.
• The installer provides both kubeadm and non-kubeadm based installation options.
FLANNEL AND MULTUS SOLUTIONS
• Flannel is an overlay network provider that can be used with Kubernetes, and is the one we’re using in our Linux Academy servers
• Multus is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes.
HARDWARE AND INFRASTRUCTURE
• The range of hardware and infrastructure that k8s will run on is truly staggering.
• Many developers run full, 6-node Kubernetes and Docker clusters on Raspberry Pis.
• I'll be honest – I'd like to try that myself after I finish this course and find some spare time.
• Nodes, including the master, can be physical or virtual machines running kubernetes components and a container manager such as docker or rocket.
• The nodes should be connected by a common network, though the Internet will work as long as port 443 and whatever your pod networking system uses are unblocked.
• A pod networking application such as Flannel is needed to allow the pods to communicate with one another; it makes use of an overlay network (by default, VXLANs) to provide that service.
SECURING CLUSTER COMMUNICATIONS
• Cluster communications cover communication to the API server, control-plane communications inside the cluster, and can even include pod-to-pod communications.
• This is an in-depth topic with a fair amount of detail, so let's start by discussing how to secure communications to the Kubernetes API server. Everything in Kubernetes goes through the API server, so controlling and limiting who has access to the cluster and what they are allowed to do is arguably the most important task in securing the cluster.
• The default encryption communication in K8s is TLS.
• Most of the installation methods will handle the certificate creation
• Kubeadm created certificates during the Linux Academy Cloud Server Kubernetes Cluster lab.
• No matter how you've installed Kubernetes, some components and installation methods may enable local ports over HTTP, so you should double-check the settings of these components to identify potentially unsecured traffic and address those issues.
• Anything that connects to the API, including nodes, proxies, the scheduler, volume plugins in addition to users, should be authenticated.
• Again, most installation methods create certificates for those infrastructure pieces, but if you’ve chosen to install manually, you might need to do this yourself.
• Once authenticated, every API call should pass an authorization check.
ROLE-BASED ACCESS CONTROL
• Kubernetes has an integrated Role-Based Access Control (RBAC) component
• Certain roles perform specific actions in the cluster
• Kubernetes has several well thought out, pre-created roles
• Simple roles might be fine for smaller clusters
• If a user doesn’t have rights to perform an action but they do have access to perform a composite action that includes it, the user WILL be able to indirectly create objects.
• Carefully consider what you want users to be allowed to do prior to making changes to the existing roles.
SECURING THE KUBELET
• Secure the kubelet on each node.
• The Kubelets expose HTTPS endpoints which give access to both data and actions on the nodes. By default, these are open.
• To secure those endpoints, you can enable Kubelet Authentication and Authorization by starting it with an --anonymous-auth=false flag and assigning it an appropriate x509 client certificate in its configuration.
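A rough sketch of what that looks like in a kubelet configuration file (the field names follow the kubelet.config.k8s.io/v1beta1 KubeletConfiguration type; the CA path is an assumed example):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false            # same effect as --anonymous-auth=false
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt   # assumed path to the cluster CA
authorization:
  mode: Webhook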
SECURING THE NETWORK
• Network Policies restrict access to the network for a particular namespace.
• This allows developers to restrict which pods in other namespaces can access pods and ports within the current namespace.
• The pod networking CNI must respect these policies which, fortunately, most of them do.
• Users can also be assigned quotas or limit ranges.
• Use plug-ins for more advanced functionality.
• That should secure all the communications in a cluster.
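As a sketch of such a policy (the namespace and labels are hypothetical), the following NetworkPolicy allows only pods labeled role=frontend to reach pods labeled app=db on port 5432:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 5432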
VULNERABILITIES
• Kubernetes makes extensive use of etcd for storing configuration and secrets. It acts as the key/value store for the entire cluster.
• Gaining write access to etcd is very much like gaining root on the whole cluster, and even read access can be used by attackers to cause some serious damage.
• Strong credentials on your etcd server or cluster are a must.
• Isolate those behind a firewall that only allows requests from the API servers.
• Audit logging is also critical
• Audit logging records actions taken by the API server for later analysis in the event of a compromise.
• Enable audit logging and archive the audit file on a secure server
• Rotate your infrastructure credentials frequently
• Smaller lifetime windows for secrets and credentials create bigger problems for attackers attempting to use them.
• You can even set these up to have short lifetimes and automate their rotation.
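A minimal sketch of enabling audit logging (the file paths are assumptions); the policy below logs request metadata for every request:
# /etc/kubernetes/audit-policy.yaml (assumed path)
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
The kube-apiserver is then started with flags such as --audit-policy-file=/etc/kubernetes/audit-policy.yaml and --audit-log-path=/var/log/kubernetes/audit.log, and the log file is archived to a secure server.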
THIRD PARTY INTEGRATIONS
• Always review third party integrations before enabling them.
• Integrations to Kubernetes can change how secure your cluster is. Add-ons might be nothing more than just more pods in the cluster, but those can be powerful.
• Don’t allow them into the kube-system namespace.
MAKING KUBERNETES HIGHLY AVAILABLE
• The challenges associated with making any platform “H.A.” are many.
• While the exam isn’t going to make you deploy a full HA kubernetes cluster, we should take a look at the process from a high level to gain an understanding of how it should work.
THE PROCESS
• Create the reliable nodes that will form our cluster.
• Set up a redundant and reliable storage service with a multinode deployment of etcd.
• Start replicated and load balanced Kubernetes API servers.
• Set up a master-elected kubernetes scheduler and controller manager daemons.
STEP ONE
• Make the Master node reliable.
• Ensure that the services automatically restart if they fail
• Kubelet already does this, so it’s a convenient piece to use
• If the kubelet goes down, though, we need something to restart it
• Monit on Debian systems or systemctl on systemd-based systems.
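On a systemd-based system, a minimal sketch of keeping the kubelet running is a unit drop-in like the following (the drop-in path is an assumed example; kubeadm packages normally ship something similar):
# /etc/systemd/system/kubelet.service.d/99-restart.conf (assumed path)
[Service]
Restart=always
RestartSec=10
Apply it with sudo systemctl daemon-reload && sudo systemctl restart kubelet.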
STEP TWO – STORAGE LAYER
• If we’re going to make it H.A., then we’ve got to make sure that persistent storage is rock solid.
• Protect the data!
• If you have the data, you can rebuild almost anything else.
• Once the data is gone, the cluster is gone too.
• Clustered etcd already replicates the storage to all master instances in your cluster.
• To lose data, all three nodes would need to have their disks fail at the same time.
• The probability that this occurs is relatively low, so just running a replicated etcd cluster is reliable enough.
• Add additional reliability by increasing the size of the cluster from three to five nodes
• A multinode cluster of etcd
• If you use a cloud provider, then they usually provide this for you. ○ Example: Persistent Disk on the Google Cloud Platform.
• If you are running on physical machines, you can also use network attached redundant storage using an iSCSI or NFS interface. Alternatively, you can run a clustered file system like Gluster or Ceph.
• RAID array on each physical machine
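A rough sketch of the flags for one member of a three-node etcd cluster (the member names and IPs are purely illustrative, and the TLS certificate flags are omitted for brevity):
etcd --name etcd-1 \
  --listen-peer-urls https://10.0.0.1:2380 \
  --listen-client-urls https://10.0.0.1:2379 \
  --advertise-client-urls https://10.0.0.1:2379 \
  --initial-advertise-peer-urls https://10.0.0.1:2380 \
  --initial-cluster etcd-1=https://10.0.0.1:2380,etcd-2=https://10.0.0.2:2380,etcd-3=https://10.0.0.3:2380 \
  --initial-cluster-state new
The other two members run the same command with their own --name and URLs.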
STEP THREE – REPLICATED API SERVICES
• Create the initial log file so that Docker will mount a file instead of a directory: ○ touch /var/log/kube-apiserver.log
• Next, we create a /srv/kubernetes/ directory on each node which should include:
• basic_auth.csv - basic auth user and password
• ca.crt - Certificate Authority cert
• known_tokens.csv - tokens that entities (e.g. the kubelet) can use to talk to the apiserver
• kubecfg.crt - Client certificate, public key
• kubecfg.key - Client certificate, private key
• server.cert - Server certificate, public key
• server.key - Server certificate, private key
• Either create this manually or copy it from a master node on a working cluster.
• Once these files exist, copy the kube-apiserver.yaml into /etc/kubernetes/manifests/ on each of our master nodes.
• The kubelet monitors this directory, and will automatically create an instance of the kube-apiserver container using the pod definition specified in the file
• If a network load balancer is set up, then access the cluster using the VIP and see traffic balancing between the apiserver instances.
• Setting up a load balancer will depend on the specifics of your platform.
• For external users of the API (e.g., the kubectl command line interface, continuous build pipelines, or other clients) remember to configure them to talk to the external load balancer’s IP address
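For example (the cluster name and VIP below are assumptions), an external client's kubeconfig can be pointed at the load balancer with:
kubectl config set-cluster my-ha-cluster --server=https://<load-balancer-vip>:6443 --certificate-authority=/srv/kubernetes/ca.crt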
STEP FOUR – CONTROLLER/SCHEDULER DAEMONS
• Now we need to allow our state to change.
• Controller managers and scheduler.
• These processes must not modify the cluster's state simultaneously, so they use a lease lock.
• Each scheduler and controller manager can be launched with a --leader-elect flag
• The scheduler and controller-manager can be configured to talk to the API server that is on the same node (i.e. 127.0.0.1). ○ It can also be configured to communicate using the load balanced IP address of the API servers.
• The scheduler and controller-manager will complete the leader election process mentioned before when using the --leader-elect flag.
• In case of a failure accessing the API server, the elected leader will not be able to renew the lease, causing a new leader to be elected.
• This is especially relevant when configuring the scheduler and controller-manager to access the API server via 127.0.0.1, and the API server on the same node is unavailable.
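A minimal sketch of what this looks like in the static pod manifests copied into /etc/kubernetes/manifests/ (only the relevant command-line argument is shown):
# in kube-scheduler.yaml and kube-controller-manager.yaml, under the container's command/args:
- --leader-elect=true
With this flag set on every master, only the elected leader actively schedules or reconciles; the others stand by until the lease lapses.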
INSTALLING CONFIGURATION FILES
• Create empty log files on each node, so that Docker will mount the files and not make new directories: ○ touch /var/log/kube-scheduler.log ○ touch /var/log/kube-controller-manager.log
• Set up the descriptions of the scheduler and controller manager pods on each node by copying kube-scheduler.yaml and kube-controller-manager.yaml into the /etc/kubernetes/manifests/ directory.
• And once that’s all done, we’ve now made our cluster highly available! Remember, if a worker node goes down, kubernetes will rapidly detect that and spin up replacement pods elsewhere in the cluster
END-TO-END TESTING AND VALIDATION
• Provide a mechanism to test end-to-end behavior of the system
• Last signal to ensure end user operations match specifications
• Primarily a developer tool
• Difficult to run against just “any” deployment – many specific tests for cloud providers
• Ubuntu has its own Juju-deployed tests
• GCE Has its own
• Many tests offered
KUBETEST SUITE
• Ideal for GCE or AWS users
• Build
• Stage
• Extract
• Bring up the cluster
• Test
• Dump logs
• Tear down
DEPLOYMENTS, ROLLING UPDATES AND ROLLBACKS
Sample nginx-deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
kubectl create -f nginx-deployment.yaml --> create the deployment from the file
kubectl get deployments
kubectl describe deployment
kubectl get deployment nginx-deployment -o yaml --> view the deployment in YAML format
Deployment Rolling Updates
Rollouts – updating the image without downtime or service interruption.
kubectl set image deployment/nginx-deployment nginx=nginx:1.8
kubectl rollout status deployment/nginx-deployment
kubectl describe deployment
Now change the image version in the YAML file as well:
vi nginx-deployment.yaml
kubectl apply -f nginx-deployment.yaml
kubectl rollout status deployment/nginx-deployment
kubectl get deployments
kubectl rollout history deployment/nginx-deployment --revision=3
(shows the details of that revision; without --revision it lists the full change history)
kubectl rollout undo deployment/nginx-deployment --to-revision=2 —> roll the deployment back to a previous revision
HOW KUBERNETES CONFIGURES APPLICATIONS
kubectl create configmap my-map --from-literal=school=LinuxAcademy —> to create a ConfigMap
kubectl get configmaps
kubectl describe configmaps my-map
kubectl get configmap my-map -o yaml
Create a pod-config.yaml file
apiVersion: v1
kind: Pod
metadata:
  name: config-test-pod
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: WHAT_SCHOOL
      valueFrom:
        configMapKeyRef:
          name: my-map
          key: school
  restartPolicy: Never
kubectl create -f pod-config.yaml
kubectl get pods
kubectl logs config-test-pod
SCALING APPLICATIONS:- Inside Kubernetes clusters
kubectl get deployments —> gives the info about deployment files
kubectl describe deployment nginx-deployment —> gives the full info about deployment file
(Scaling up means adding replicas.)
kubectl scale deployment/nginx-deployment --replicas=3
kubectl get pods
Scale down
kubectl scale deployment/nginx-deployment --replicas=1
(This process is manual. For autoscaling, install the metrics API and use the Horizontal Pod Autoscaler, as sketched below.)
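Once a metrics backend is available, a sketch of autoscaling the same deployment (the thresholds are arbitrary examples):
kubectl autoscale deployment nginx-deployment --min=2 --max=5 --cpu-percent=80
kubectl get hpa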
SELF-HEALING APPLICATIONS IN KUBERNETES
All applications are self healing
kubectl delete pod <pod-name> —> to delete a pod
* If one pod is deleted, another pod is automatically brought up; this is called self-healing.
If we stop a node, by default Kubernetes does not evict that node's pods for 300 seconds. That grace period comes from the pod's default tolerations.
LABELS & SELECTORS
kubectl get pods -l app=<value> —> gives the info about pods with a specific label
kubectl label pod <pod-name> <key>=<value> --overwrite —> to overwrite an existing label on a pod
kubectl label pods -l app=nginx tier=frontend —> to add the label tier=frontend to all pods that already have app=nginx
kubectl delete pods -l <key>=<value> --> to delete all pods carrying a given label
DAEMON SETS
kubectl get daemonsets -n kube-system —> get the DaemonSets running in kube-system (kube-proxy, flannel, etc.)
kubectl describe daemonset kube-flannel-ds -n kube-system —> gives the full info about the DaemonSet.
(A DaemonSet is a specialized placement of pods: it runs one pod on every node.)
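A minimal DaemonSet sketch (the name and image are illustrative), which schedules one copy of the pod on each node:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-logger
spec:
  selector:
    matchLabels:
      app: node-logger
  template:
    metadata:
      labels:
        app: node-logger
    spec:
      containers:
      - name: logger
        image: busybox
        command: ["/bin/sh", "-c", "sleep 3600"]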
RESOURCE LIMITS & POD SCHEDULING:-
A taint has a key, a value, and an effect, represented as <key>=<value>:<effect>. The key 'node-role.kubernetes.io/master' has a null value.
We can remove taints by key, key-value, or key-effect. The master taint is removed by using the key with a trailing minus:
kubectl taint nodes <master-node-name> node-role.kubernetes.io/master-
The null value can't be reapplied, but the taint 'node-role.kubernetes.io=master:NoSchedule' has the same effect.
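As a sketch of the mechanism (the key, value, and node name are illustrative), you can taint a node and then tolerate that taint in a pod spec:
kubectl taint nodes <node-name> dedicated=infra:NoSchedule
Then, in the pod's spec:
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "infra"
  effect: "NoSchedule"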
MANUALLY SCHEDULING PODS
You have to add a label to the node; then we can schedule the pod onto it (see the nodeSelector sketch below).
kubectl label node <node-name> net=gigabit —> to give the label to the node
Now set the same label as a nodeSelector in the pod's .yaml file.
Now create it (kubectl create -f <pod.yaml>).
kubectl describe node <node-name> ----> it will show the node's labels
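The relevant part of such a pod spec would look like this sketch (the pod name and image are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: fast-nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    net: gigabit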
MONITORING CLUSTER & APPLICATION COMPONENTS
MONITORING
• Must monitor nodes, containers, pods, services, and the entire cluster.
• Provide end users with resource usage information.
• Heapster – a cluster-wide aggregator of monitoring and event data. Heapster runs as a pod on a single node and communicates with the kubelet/cAdvisor on every node in the cluster (many nodes make up the cluster), then writes the aggregated data to a storage backend.
• cAdvisor is an open source container resource usage and performance analysis agent.
• Auto-discovers all containers on a node and collects CPU, memory, file system, and network usage statistics.
• Provides the overall machine usage by analyzing the ‘root’ container on the machine.
• Exposes a simple UI for local containers on port 4194.
• The Kubelet acts as a bridge between the Kubernetes master and the nodes.
• Manages the pods and containers running on a node.
• Translates each pod into the containers making it up.
• Obtains usage statistics from cAdvisor.
• Exposes the aggregated pod resource usage statistics via a REST API.
• Grafana with InfluxDB.
• Heapster is setup to use this storage backend by default on most Kubernetes clusters.
• InfluxDB and Grafana run in Pods.
• The pod exposes itself as a Kubernetes service which is how Heapster then discovers it.
• The Grafana container serves Grafana’s UI which provides a dashboard.
• The default dashboard for Kubernetes contains an example dashboard that monitors resource usage of the cluster and the pods inside of it. This dashboard can, of course, be fully customized and expanded.
• Google Cloud Monitoring is a hosted monitoring service that allows you to visualize and alert important metrics in your application.
• Heapster can be set up to automatically push all collected metrics to Google Cloud Monitoring.
• These metrics are then available in the Cloud Monitoring Console.
• This storage backend is the easiest to setup and maintain.
• The monitoring console allows you to easily create and customize dashboards using the exported data.
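Once a metrics pipeline like the one above is running, a quick way to check resource usage from the command line (these commands require a working metrics backend):
kubectl top nodes
kubectl top pods --all-namespaces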
MANAGING LOGS
kubectl logs counter —> Gives total log details
kubectl get pods --all-namespaces —> Gives the info about all pods
cd /var/log/containers —> We can find the containers logs
sudo cat <container-log-file> —> gives the log file of that container
cd /var/log —> We can find all logs files
UPGRADING KUBERNETES (WITHOUT TAKING DOWN THE CLUSTER)
kubectl get nodes —> Gives the info about node & version details
sudo apt upgrade kubeadm —> To upgrade the kubeadm
kubeadm version —> to check version
sudo kubeadm upgrade plan —> To check which versions are available to upgrade to
sudo kubeadm upgrade apply <version> —> To apply the upgrade
kubectl drain <node-name> --ignore-daemonsets —> To drain a node before upgrading it
sudo apt upgrade kubelet —> To upgrade the kubelet
systemctl status kubelet —> To check the status of kubelet
kubectl get nodes —> The drained node's status is shown as SchedulingDisabled
kubectl uncordon <node-name> —> To make the node schedulable again once its upgrade is done
First we upgrade the master, then go to node 1 and node 2 and follow the same steps, upgrading kubeadm, kubelet, etc.
Now the new version will be shown for each node.
UPGRADING THE UNDERLYING OPERATING SYSTEM(S)
How to upgrade the OS on the cluster nodes:
First drain the node (ignoring DaemonSets):
kubectl get nodes
kubectl drain <node-name> --ignore-daemonsets
(While a node is drained, we can make any changes to it without affecting the cluster.)
- Stop the server.
- Delete the node (kubectl delete node <node-name>).
- sudo kubeadm token list —> shows the existing tokens
sudo kubeadm token generate --> create a token
sudo kubeadm token create --ttl 3h --print-join-command
On the node, run that join command.
With this method we take the node out of service, upgrade it, and then rejoin it to the cluster.
This upgrades the entire node.
NODE NETWORKING CONFIGURATION
Master Nodes:
• TCP 6443 – Kubernetes API Server
• TCP 2379-2380 – etcd server client API
• TCP 10250 – Kubelet API
• TCP 10251 – kube-scheduler
• TCP 10252 – kube-controller-manager
• TCP 10255 – Read-only Kubelet API
Worker Nodes:
• TCP 10250 – Kubelet API
• TCP 10255 – Read-only Kubelet API
• TCP 30000-32767 – NodePort Services
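On the CentOS 7 servers used in this course, a sketch of opening the master's ports with firewalld would look like the following (adjust the list per node role; the lab servers may not have firewalld enabled):
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250-10252/tcp
sudo firewall-cmd --reload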
SERVICE NETWORKING
kubectl get pods -o wide —> Gives the pod names with their IP addresses and nodes
kubectl get deployments —> See the pods which are deployed
kubectl expose deployment <deployment-name> --type=NodePort --port 80
(Exposes the deployment's pods outside the cluster on a node port.)
kubectl get services —> Gives the services information, including the assigned node port
curl localhost:<node-port> —> we can see the application respond
kubectl get pods -o wide —> Gives complete info about pod
INGRESS
Ingress is an API object that manages external access to the services in a cluster, usually HTTP. It can provide load balancing, SSL termination and name-based virtual hosting. For our purposes, an Edge router is a router that enforces the firewall policy for your cluster.
This could be a gateway managed by a cloud provider or a physical piece of hardware. Our Cluster network is a set of links, either logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model. Examples of a Cluster network include overlays such as Flannel, like we’re using in our Linux Academy Cloud server cluster, or SDNs such as OpenVSwitch.
A Service is a Kubernetes Service that identifies a set of pods using label selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network
What is Ingress?
Services and pods have IPs only routable by the cluster network. An Ingress, then, is a collection of rules that allow inbound connections to reach cluster services. It can be configured to give services externally reachable URLs, load balance traffic, terminate SSL, offer name-based virtual hosting, and the like. Users request ingress by POSTing the Ingress resource to the API server. An Ingress controller is responsible for fulfilling the Ingress, usually by way of a load balancer, though it may also configure the edge router or additional front ends to help handle the traffic in a highly available manner.
Ingress is a relatively new resource and is not available in any Kubernetes release prior to 1.1. You need an Ingress controller to satisfy an Ingress object; most cloud providers deploy an ingress controller on the master. Each ingress pod must be annotated with the appropriate class.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /path
        backend:
          serviceName: test
          servicePort: 80
kubectl create -f <filename.yaml>
kubectl get ing
NAME RULE BACKEND ADDRESS
test-ingress - testsvc:80 107.178.254.228
Where 107.178.254.228 is the IP allocated by the Ingress controller to satisfy this Ingress. The RULE column shows that all traffic sent to the IP is directed to the Kubernetes Service listed under BACKEND.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: some.example.com
    http:
      paths:
      - path: /service1
        backend:
          serviceName: s1
          servicePort: 80
      - path: /service2
        backend:
          serviceName: s2
          servicePort: 80
kubectl get ing
How to secure an Ingress:
• Specify secret.
- TLS private key
- Certificate
• Port 443 (Assumes TLS Termination).
• Multiple hosts are multiplexed on the same port by hostnames specified through the SNI TLS extension.
• The TLS secret must contain keys named tls.crt and tls.key that contain the certificate and private key to use for TLS
apiVersion: v1
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
kind: Secret
metadata:
  name: supersecret
  namespace: default
type: Opaque
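A quicker way to produce an equivalent secret (the certificate and key file names are assumed examples) is:
kubectl create secret tls supersecret --cert=tls.crt --key=tls.key
(kubectl creates it with type kubernetes.io/tls rather than Opaque, but the Ingress consumes it the same way.) The Ingress then references that secret by name: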
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  - secretName: supersecret
  backend:
    serviceName: s1
    servicePort: 80
• An Ingress controller is bootstrapped with a load balancing policy that it applies to all Ingress objects (e.g. load balancing algorithm, backend weight scheme, etc.).
• Persistent sessions and dynamic weights are not yet exposed; the service load balancer may provide some of this.
• Health checks are not exposed directly through the Ingress; readiness probes allow for similar functionality.
kubectl get ing
kubectl edit ing test
spec:
  rules:
  - host: services.example.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
        path: /s1
  - host: newhost.example.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
        path: /s2
kubectl get ing
• Alternatively, kubectl replace -f on a modified Ingress yaml file. This command updates a running Kubernetes object.
• Ingress is a relatively new concept in Kubernetes. There are other ways to expose a service that don't directly involve the Ingress resource:
- Use Service.Type=LoadBalancer
- Use Service.Type=NodePort
- Use a Port Proxy
DEPLOYING A LOAD BALANCER
Ingress handles internal load balancing only.
For external load balancing we use a Service of type LoadBalancer.
Sample load-balancer.yaml file:
kind: Service
apiVersion: v1
metadata:
  name: la-lb-service
spec:
  selector:
    app: la-lb
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  clusterIP: 10.0.171.223
  loadBalancerIP: 78.12.23.17
  type: LoadBalancer
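After creating it (kubectl create -f load-balancer.yaml, assuming that file name), the cloud provider provisions the external load balancer, and kubectl get service la-lb-service will show its external IP in the EXTERNAL-IP column once it is ready.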