## Aviator

A demo application to learn about containers and orchestration engines

Just a silly application to iterate with
## Navigation

- `?` is for Help
- `Escape` is for overview
- Arrows at the bottom right tell you where to go next
## Pre-requisites

We need some software available. There are several options.
## Launch a VM

All batteries included

```bash
$ openstack server create --image UUID \
    --flavor m2.medium --key-name YOURKEY YOURID-handson
$ ssh root@YOURID-handson
```
## Manual Install

Docker

```bash
$ yum install -y yum-utils device-mapper-persistent-data lvm2
$ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ yum install -y docker-ce
$ systemctl start docker
```

Kubernetes

```bash
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
$ mv kubectl ~/docker
$ chmod 755 ~/docker/kubectl
```

Helm

```bash
$ wget https://kubernetes-helm.storage.googleapis.com/helm-v2.7.2-linux-amd64.tar.gz
$ tar zxvf helm-v2.7.2-linux-amd64.tar.gz
$ mv linux-amd64/helm ~/docker
```
## Clone the Repository

Fork the repository into your personal gitlab

https://gitlab.cern.ch/cloud-infrastructure/aviator

Clone the repository locally

```bash
$ git clone https://:@gitlab.cern.ch:8443/YOURID/aviator.git
```
## Topics

- A Bit of Theory
- Docker Basics
- OpenStack Magnum
- Kubernetes
- Application Lifecycle
- Auto DevOps
## Container Internals
Container History 
cgroups

> Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes
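As a quick illustration (not part of the original demo): every process already belongs to cgroups, and a small Go snippet can show the membership of the current process by reading `/proc/self/cgroup`:

```go
package main

import (
	"fmt"
	"os"
)

// cgroupMembership returns the raw contents of /proc/self/cgroup,
// which lists the cgroup(s) the current process belongs to
// (one line per hierarchy on cgroup v1, a single "0::/..." line on v2).
func cgroupMembership() (string, error) {
	data, err := os.ReadFile("/proc/self/cgroup")
	return string(data), err
}

func main() {
	cg, err := cgroupMembership()
	if err != nil {
		panic(err)
	}
	fmt.Print(cg)
}
```

Running the same snippet inside a container would show the container's own cgroup path, since the runtime places each container in a dedicated cgroup to enforce its resource limits.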
Namespaces

> Feature of the Linux kernel that partitions kernel resources such that one set of processes sees one set of resources while another set of processes sees a different set of resources
Namespaces

* Mount (mnt)
* Process ID (pid)
* Network (net)
* Interprocess Communication (ipc)
* UTS
* User ID (user)
* Proposed (time, syslog)
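The namespaces a process lives in are visible under `/proc/<pid>/ns` — a sketch (an addition for illustration, not from the original deck) that prints the current process's namespace identifiers:

```go
package main

import (
	"fmt"
	"os"
)

// namespaceID returns the identifier of one of the current process's
// namespaces, e.g. "pid:[4026531836]". Two processes share a namespace
// exactly when they report the same inode number here.
func namespaceID(name string) (string, error) {
	return os.Readlink("/proc/self/ns/" + name)
}

func main() {
	for _, ns := range []string{"mnt", "pid", "net", "ipc", "uts", "user"} {
		id, err := namespaceID(ns)
		if err != nil {
			continue // namespace type not supported by this kernel
		}
		fmt.Println(id)
	}
}
```

Run it once on the host and once inside a container: the inode numbers differ, because the container runtime created fresh namespaces for the containerized process.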
Useful Links

- https://github.com/lizrice/containers-from-scratch (https://github.com/strigazi/containers-from-scratch)
- https://ericchiang.github.io/post/containers-from-scratch/
## Docker Basics

It all starts with a run

```bash
$ docker run -it alpine sh
/ #
```

We're now in a different process world

```bash
/ # ps auxw
PID   USER     TIME   COMMAND
    1 root       0:00 sh
    6 root       0:00 ps auxw
```

And filesystem, even networking

```bash
/ # cat /etc/passwd
/ # ip addr
```

Though we do share a kernel

```bash
/ # uname -a
Linux 56b4c4d6d6d8 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64 Linux
```
When we exit, it's gone... right?

```bash
/ # exit
$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
```

Maybe not

```bash
$ docker ps --all
CONTAINER ID   IMAGE    COMMAND   CREATED          STATUS                      PORTS   NAMES
29c24c5847f9   alpine   "sh"      23 seconds ago   Exited (0) 22 seconds ago           clever_chandrasekhar
$ docker rm -f 29c24c5847f9
```
A service would look similar

```bash
$ docker run --name mynginx -p 1234:80 -d nginx
```

```bash
$ docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS                  NAMES
298dc0089df9   nginx   "nginx -g 'daemon ..."   14 seconds ago   Up 13 seconds   0.0.0.0:1234->80/tcp   mynginx
```

Should be really there

```bash
$ curl http://localhost:1234
```

And we can kill it

```bash
$ docker rm -f mynginx
```
Host networking can be of use, sometimes

```bash
$ docker run --net=host --name mynginx -d nginx
```

No network namespace, listening on the host

```bash
$ curl http://localhost
```

And cleanup as usual

```bash
$ docker rm -f mynginx
```
## DockerHub

https://hub.docker.com/explore/
## Docker Images

Application units for sharing and deployment

- Dockerfile
- Layered
- Hosted locally or in shared online repositories
Architecture 
## Dockerfile

One command == One layer

```docker
FROM golang:1.9.2
WORKDIR /
ADD main.go index.html aviator.png /
RUN go get -d -v .
RUN go build .
CMD ["/aviator"]
```

[Dockerfile](../Dockerfile)
## Build

golang was our base image, we build a new one on top

```bash
$ docker build -t aviator .
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
aviator      latest   22b01c5e71cb   48 seconds ago   740MB
golang       1.9.2    1a34fad76b34   4 weeks ago      733MB
```
## Inspect

Where are my layers?

```bash
$ docker images --all
REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
<none>       <none>   85e6e677bb7f   13 seconds ago   740MB
aviator      latest   256b23989de5   13 seconds ago   740MB
<none>       <none>   b71e4c2bbf47   22 seconds ago   733MB
<none>       <none>   f8205be1377c   23 seconds ago   733MB
<none>       <none>   b54be6539976   23 seconds ago   733MB
golang       1.9.2    1a34fad76b34   4 weeks ago      733MB
```
## Tagging

'latest' is just the default when nothing is given

Same build? Same image id, no rebuild

```bash
$ docker build -t aviator:mytag .
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
aviator      latest   22b01c5e71cb   2 minutes ago   740MB
aviator      mytag    22b01c5e71cb   2 minutes ago   740MB
golang       1.9.2    1a34fad76b34   4 weeks ago     733MB
```
## Multi-stage Builds

Reuse previous builds (COPY --from=builder)

```docker
FROM golang:1.9.2 AS builder
WORKDIR /
ADD main.go /
RUN go get -d -v .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o aviator .

FROM alpine:latest
WORKDIR /
RUN apk --no-cache add ca-certificates
ADD index.html aviator.png /
COPY --from=builder /aviator /
CMD ["/aviator"]
```

[Dockerfile](../Dockerfile.multi)
## Small Runtime Images

740MB vs 11MB for the runtime image (!!)

```bash
$ docker build -t aviator:small -f Dockerfile.multi .
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
aviator      small    3caadbeed7b5   2 minutes ago   11.3MB
<none>       <none>   00c4fd9c5705   2 minutes ago   740MB
aviator      latest   256b23989de5   2 minutes ago   740MB
aviator      mytag    256b23989de5   2 minutes ago   740MB
alpine       latest   e21c333399e0   9 days ago      4.14MB
golang       1.9.2    1a34fad76b34   5 weeks ago     733MB
```
## Run

Run locally, interactively

```bash
$ docker run --rm --name aviator -p 80:80 aviator:small
```

Or in the background (-d), and give it a name

```bash
$ docker run -d --rm --name aviator -p 80:80 aviator:small
a85efd4fa0eeef45a89051ab2426e143ac54ad3cbb1505552e08402ff4d37771
$ docker ps
CONTAINER ID   IMAGE           COMMAND      CREATED         STATUS         PORTS                NAMES
a85efd4fa0ee   aviator:small   "/aviator"   4 seconds ago   Up 4 seconds   0.0.0.0:80->80/tcp   aviator
```

Check it at http://localhost
## Access

Run a command in a running container, even a shell

```bash
$ docker exec aviator ps auxw
$ docker exec aviator ip addr
$ docker exec -it aviator sh
/ #
```
## Logging

If the application writes to stdout/err, it's as easy as

```bash
$ docker logs -f aviator
2017/12/08 14:04:13 next /root/next
2017/12/08 14:04:14 next /root/next
2017/12/08 14:04:15 next /root/next
2017/12/08 14:04:16 next /root/next
```
## Destroy

-f will also stop it if it was running

```bash
$ docker rm -f aviator
$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
```
## Pushing

Share your images on public repositories

If not given, docker.io (dockerhub) is assumed

```bash
$ docker build -t gitlab-registry.cern.ch/YOURID/aviator:latest .
$ docker login gitlab-registry.cern.ch
$ docker push gitlab-registry.cern.ch/YOURID/aviator:latest
```

Check it in gitlab

https://gitlab.cern.ch/YOURID/aviator
## Cleanup

Prune gets rid of stopped containers, build caches, ...

```bash
$ docker system prune
```

Option -a to prune images as well (not right now)
## Exercise 1.1

Run a second aviator instance on port 9999

http://localhost:9999

## Exercise 1.2

Check the stats of running containers

TIP: docker --help

## Exercise 1.3

Check the history of an existing image

TIP: docker --help

## Exercise 1.4

Copy a file to or from a running container

TIP: docker --help
## OpenStack Magnum

Container Orchestration Engine

```bash
$ . Personal\ rbritoda-openrc.sh
$ openstack coe cluster template list
| kubernetes-preview  |
| kubernetes-1.13.3-2 |
...
```
## Cluster Templates

```bash
$ openstack coe cluster template show kubernetes-1.13.3-2
| master_flavor_id | m2.small                       |
| flavor_id        | m2.small                       |
| labels           | {u'kube_tag': u'v1.13.3', ...} |
```
## Create

Let's create a kubernetes cluster

```bash
$ openstack coe cluster create --cluster-template kubernetes-1.13.3 \
    --node-count 2 --keypair YOURKEYPAIR YOURID-handson-kub
Request to create cluster 615aa816-0ec5-4be9-a205-a9cdcccc0554 accepted
$ openstack coe cluster list
+--------------------------------------+--------------------+-------------+------------+--------------+--------------------+
| uuid                                 | name               | keypair     | node_count | master_count | status             |
+--------------------------------------+--------------------+-------------+------------+--------------+--------------------+
| 615aa816-0ec5-4be9-a205-a9cdcccc0554 | YOURID-handson-kub | YOURKEYPAIR | 2          | 1            | CREATE_IN_PROGRESS |
+--------------------------------------+--------------------+-------------+------------+--------------+--------------------+
```
## Access

Config to fetch credentials

```bash
$ mkdir -p ~/clusters/YOURID-handson-kub
$ cd ~/clusters/YOURID-handson-kub
$ openstack coe cluster config YOURID-handson-kub > env.sh
```

Access using native clients

```bash
$ . env.sh
$ kubectl get node
NAME                                       STATUS   ROLES    AGE   VERSION
YOURID-handson-kub-6dc43wjvbdcv-minion-0   Ready    <none>   1m    v1.13.3
```
## Scale

Up or down, change the node_count

```bash
$ openstack coe cluster update YOURID-handson-kub replace node_count=3
Request to update cluster YOURID-handson-kub has been accepted.
$ openstack coe cluster list
| b34d0745-ade8-4d0a-94ba-96febf9e30fd | YOURID-handson-kub | rocha-cern | 3 | 1 | UPDATE_IN_PROGRESS |
```
## Labels

Enable and disable features, override defaults

[Available Labels](http://clouddocs.web.cern.ch/clouddocs/containers/quickstart.html#custom-clusters)

```bash
$ openstack coe cluster create --labels kube_tag=v1.13.3 ...
```
## Exercise 2.1

Check label differences between kubernetes and kubernetes-alpha

## Exercise 2.2

How would you get a swarm cluster using binpack as the scheduling strategy?
## Kubernetes
## Resources

- Pod, Service, Volume, Namespace
- ReplicaSet, Deployment, StatefulSet, DaemonSet
- Secret, Job
## Deployment

Define the application containers and metadata

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: aviator
  labels:
    app: aviator
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: aviator
    spec:
      containers:
      - name: aviator
        image: gitlab-registry.cern.ch/cloud-infrastructure/aviator:small
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
```

[deployment.yaml](kubernetes/deployment.yaml)
## Service

An internal, load balanced entrypoint to an application

```yaml
apiVersion: v1
kind: Service
metadata:
  name: aviator
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: aviator
  selector:
    app: aviator
```

[deployment.yaml](kubernetes/deployment.yaml)
## Ingress

External load balancing for the whole cluster

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: aviator
spec:
  rules:
  - host: "*.cern.ch"
    http:
      paths:
      - path: /aviator
        backend:
          serviceName: aviator
          servicePort: 80
```

[deployment.yaml](kubernetes/deployment.yaml)
## Create

A single create command for all manifests

```bash
$ kubectl create -f kubernetes/deployment.yaml
```

Recording allows later rollback

```bash
$ kubectl create --record -f kubernetes/deployment.yaml
```
## Check

Overview of our deployment

```bash
$ kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
aviator   1         1         1            0           20s
```

Or details of a specific resource

```bash
$ kubectl describe deployment/aviator
```
Check all resources

```bash
$ kubectl get all
...
```

Check system resources

```bash
$ kubectl -n kube-system get all
...
```
Access

```bash
$ kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
aviator      NodePort    10.254.137.51   <none>        80:30220/TCP   1m
kubernetes   ClusterIP   10.254.0.1      <none>        443/TCP        1h
```

We defined the Service as a NodePort

```bash
$ kubectl get node
NAME                                       STATUS   ROLES    AGE   VERSION
YOURID-handson-kub-xysqt32qbxap-minion-0   Ready    <none>   1h    v1.13.3
```

http://YOURID-handson-kub-xysqt32qbxap-minion-0:30220
## Scale

Deployments are backed by Pods and Replica Sets

```bash
$ kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
aviator   1         1         1            1           7m
```

Which makes it very easy to scale

```bash
$ kubectl scale --replicas=3 deployment/aviator
```
## Isolation

With namespaces, run multiple instances of one app

```bash
$ kubectl create namespace canary
$ kubectl -n canary create -f kubernetes/deployment.yaml
$ kubectl -n canary get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
aviator   1         1         1            1           20s
$ kubectl get deployment --all-namespaces
```
## Live Changes

Useful, even if it's better to track them in manifests

```bash
$ kubectl edit deployment aviator
```
## Advanced Scheduling

[Selectors](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/), [Taints and Tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/)

And so much more...
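For a flavor of what a selector looks like, a deployment can be pinned to labelled nodes with a `nodeSelector` — the `disk: ssd` label below is hypothetical, not part of the aviator manifests:

```yaml
# Fragment of a Deployment spec: schedule pods only on nodes
# carrying the (hypothetical) label disk=ssd,
# e.g. after: kubectl label node <node> disk=ssd
spec:
  template:
    spec:
      nodeSelector:
        disk: ssd
```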
## Monitoring

Comes built-in at CERN, just proxy first

```bash
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
```

```bash
$ kubectl -n kube-system get secret | grep kubernetes-dashboard-token
$ kubectl describe secret/kubernetes-dashboard-token-mz88d | grep token
```

[Kubernetes Dashboard](http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/)
## Load Balancing

Built-in Ingress controller, accessed the same way

```bash
$ kubectl get node
NAME                                       STATUS   ROLES    AGE   VERSION
YOURID-handson-kub-xysqt32qbxap-minion-0   Ready    <none>   1h    v1.13.3
$ kubectl label node YOURID-handson-kub-xysqt32qbxap-minion-0 role=ingress
```

http://YOURID-handson-kub-xysqt32qbxap-minion-0/
## Exercise 4.1

Change the number of replicas of aviator to 10

TIP: look for both 'scale' and 'edit' commands

## Exercise 4.2

Check logs from our aviator

TIP: not very different from the docker command

## Exercise 4.3

Delete a pod, see that a new one is launched

TIP: Check the age of all pods to see there's a new one

## Exercise 4.4

Try to get an interactive shell in one of the pods

TIP: kubectl --help

## Exercise 4.5

Check if the cluster has an internal DNS

TIP: Ping 'aviator' from inside one of the pods
## Application Lifecycle

Helm to improve our flying skills

Manages applications on a kubernetes cluster

[QuickStart](https://docs.helm.sh/using_helm/#quickstart-guide)
## Initialize

Reusing the previous Kubernetes cluster

```bash
$ kubectl create -f helm/tiller-rbac.yaml
$ helm init --service-account tiller
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!
```
## Charts

Go templates for Kubernetes manifests

```yaml
spec:
  containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    ports:
    - containerPort: {{ .Values.service.port }}
```

[deployment.yaml](../helm/aviator/templates/deployment.yaml)
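To see what the templating itself does, independently of Helm, here is a minimal sketch (an illustration added here, not how Helm is actually invoked) rendering the `image:` line above with Go's standard `text/template` package:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderImage expands the chart's image line against a values map,
// mimicking (in miniature) what `helm template` does with values.yaml.
func renderImage(repository, tag string) (string, error) {
	tmpl, err := template.New("image").Parse(
		`image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"`)
	if err != nil {
		return "", err
	}
	data := map[string]any{
		"Values": map[string]any{
			"image": map[string]any{"repository": repository, "tag": tag},
		},
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderImage("gitlab-registry.cern.ch/cloud-infrastructure/aviator/base", "small")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Helm does the same expansion for every file under `templates/`, feeding it the merged values, release, and chart metadata.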
## Values

Data for deployment configurations (~ hiera)

```yaml
replicaCount: 1
image:
  repository: gitlab-registry.cern.ch/cloud-infrastructure/aviator/base
  tag: small
  pullPolicy: IfNotPresent
```

[values.yaml](../helm/aviator/values.yaml)
## Deployment

One per application, with versions and history

```bash
$ helm install helm/aviator --name=aviator-production --namespace=aviator-production
$ helm ls
NAME                 REVISION   UPDATED                    STATUS     CHART           NAMESPACE
aviator-production   1          Wed Mar 21 20:31:21 2018   DEPLOYED   aviator-0.1.0   aviator-production
```
## Loops

Make our aviator loop

```bash
$ vim main.go
```

```go
var LOOPS = true
```

Rebuild with a version tag

```bash
$ docker build -t gitlab-registry.cern.ch/YOURID/aviator:v2 .
$ docker push gitlab-registry.cern.ch/YOURID/aviator:v2
```
## Validate

Check the feature works in a separate instance

```bash
$ vim myvalues.yaml
image:
  repository: gitlab-registry.cern.ch/YOURID/aviator
  tag: v2
$ helm install -f myvalues.yaml --name=aviator-v2 --namespace=aviator-v2 helm/aviator
$ helm ls
NAME                 REVISION   UPDATED                    STATUS     CHART           NAMESPACE
aviator-production   1          Wed Mar 21 20:31:21 2018   DEPLOYED   aviator-0.1.0   aviator-production
aviator-v2           1          Wed Mar 21 20:40:09 2018   DEPLOYED   aviator-0.1.0   aviator-v2
```

```bash
$ kubectl get node
NAME                                       STATUS   ROLES    AGE   VERSION
YOURID-handson-kub-xysqt32qbxap-minion-0   Ready    <none>   1h    v1.13.3
```

http://YOURID-handson-kub-xysqt32qbxap-minion-0.cern.ch/aviator-v2-aviator/
## Rollout

Reviewed and checked, rollout to production

```bash
$ helm upgrade -f myvalues.yaml aviator-production helm/aviator/
$ helm ls
NAME                 REVISION   UPDATED                    STATUS     CHART           NAMESPACE
aviator-production   2          Wed Mar 21 20:45:32 2018   DEPLOYED   aviator-0.1.0   aviator-production
aviator-v2           1          Wed Mar 21 20:42:12 2018   DEPLOYED   aviator-0.1.0   aviator-v2
$ helm history aviator-production
REVISION   UPDATED                    STATUS       CHART           DESCRIPTION
1          Wed Mar 21 20:31:21 2018   SUPERSEDED   aviator-0.1.0   Install complete
2          Wed Mar 21 20:45:32 2018   DEPLOYED     aviator-0.1.0   Upgrade complete
```

http://YOURID-handson-kub-xysqt32qbxap-minion-0.cern.ch/aviator-production-aviator/
## Rollback

Loops make me dizzy...

```bash
$ helm rollback aviator-production 1
$ helm history aviator-production
REVISION   UPDATED                    STATUS       CHART           DESCRIPTION
1          Wed Mar 21 20:31:21 2018   SUPERSEDED   aviator-0.1.0   Install complete
2          Wed Mar 21 20:45:32 2018   SUPERSEDED   aviator-0.1.0   Upgrade complete
3          Wed Mar 21 20:47:18 2018   DEPLOYED     aviator-0.1.0   Rollback to 1
```

http://YOURID-handson-kub-xysqt32qbxap-minion-0.cern.ch/aviator-production-aviator/
## Exercise 5.1

Check the deployments, pods, ... that were created

TIP: use the kubernetes client

## Exercise 5.2

Delete the 'feature' validation deployment from above

TIP: use helm, never kubectl, for helm deployments

## Exercise 5.3

Update the number of replicas of our production app

## Exercise 5.4

Check the status information of our production app

TIP: helm --help

## Exercise 5.5

Download the current kubernetes manifests of our app

TIP: helm --help
## Auto DevOps

Work in Progress