
Slides are attached: Kubernetes.pptx

Kubernetes

You can either run all of this locally on your machine or on k8s-shell. If you ssh, please set up port forwarding. Please use a random port, since others might be trying to use the same port. The port has to be greater than 1024 and less than 65535. Just pick something random and remember it.

Code Block
ssh -L <port>:localhost:<port> username@k8s-shell
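One way to pick a random port in the allowed range is sketched below (this assumes GNU coreutils' shuf is available; any value between 1024 and 65535 works):

```shell
# pick a random unprivileged port between 1024 and 65535
PORT=$(shuf -i 1024-65535 -n 1)
# print the ssh command to run with the chosen port
echo "ssh -L ${PORT}:localhost:${PORT} username@k8s-shell"
```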

...

alias   command
k       kubectl
ka      kubectl apply
kg      kubectl get
kd      kubectl delete
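These shortcuts can be defined in your shell profile, e.g. ~/.bashrc (a sketch; the names match the table above):

```shell
# kubectl shorthand aliases used throughout this page
alias k='kubectl'
alias ka='kubectl apply'
alias kg='kubectl get'
alias kd='kubectl delete'
```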

Nodes (no)

website: https://kubernetes.io/docs/concepts/architecture/nodes/

Time to see how many nodes we have. How do we get the list of the nodes?

Code Block
languagebash
titlelist nodes
collapsetrue
> kubectl get nodes
NAME                        STATUS   ROLES               AGE    VERSION
k8s-class-controlplane-01   Ready    controlplane,etcd   128m   v1.24.9
k8s-class-controlplane-02   Ready    controlplane,etcd   89m    v1.24.9
k8s-class-controlplane-03   Ready    controlplane,etcd   86m    v1.24.9
k8s-class-worker-01         Ready    worker              126m   v1.24.9
k8s-class-worker-02         Ready    worker              126m   v1.24.9
k8s-class-worker-03         Ready    worker              90m    v1.24.9

Namespace (ns)

website: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/

List all namespaces

Code Block
languagebash
titlelist namespaces
collapsetrue
> kubectl get namespace
NAME                          STATUS   AGE
cattle-fleet-system           Active   128m
cattle-impersonation-system   Active   129m
cattle-system                 Active   129m
cinder-csi                    Active   125m
default                       Active   130m
kube-node-lease               Active   130m
kube-public                   Active   130m
kube-system                   Active   130m
local                         Active   128m
metallb-system                Active   122m
nfs-taiga                     Active   122m
traefik                       Active   122m
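If the namespace you want to work in (kooper is the example user on this page) does not exist yet, it can be created from a minimal manifest like this sketch (kubectl create namespace <name> works as well):

```yaml
# namespace.yaml -- minimal Namespace object; replace "kooper" with your own name
apiVersion: v1
kind: Namespace
metadata:
  name: kooper
```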

...

Code Block
languagebash
titleuse namespace
> kubectl config set-context --current --namespace=kooper
Context "k8s-class" modified.

Pods (po)

website: https://kubernetes.io/docs/concepts/workloads/pods/

Simple yaml file for pods (make sure you remove the namespace: kooper line, or change it to your own namespace):

Code Block
languageyml
titlemy first pod
apiVersion: v1
kind: Pod
metadata:
  name: cowsay
  namespace: kooper
spec:
  restartPolicy: Never
  containers:
  - name: cowsay
    image: rancher/cowsay

...

Storage

Storage Classes (sc)

website: https://kubernetes.io/docs/concepts/storage/storage-classes/

List the storage classes available in this cluster

Code Block
languagebash
titlelist storage class
collapsetrue
> kubectl get sc
NAME                   PROVISIONER                                               RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
csi-cinder-sc-delete   cinder.csi.openstack.org                                  Delete          Immediate           true                   18h
csi-cinder-sc-retain   cinder.csi.openstack.org                                  Retain          Immediate           true                   18h
nfs-taiga (default)    cluster.local/nfs-taiga-nfs-subdir-external-provisioner   Retain          Immediate           true                   17h

Persistent Volume (pv)

website: https://kubernetes.io/docs/concepts/storage/volumes/

List all volumes

Code Block
languagebash
titlelist pv
collapsetrue
> kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pvc-4ff2a331-466b-41c4-9c4a-18734c70badd   128Mi      RWO            Retain           Bound    traefik/traefik   nfs-taiga               18h

Persistent Volume Claims (pvc)

website: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims

Create a PVC

Code Block
languageyml
titlePVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kooper-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
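Note that storageClassName: manual only binds if a matching PV was pre-created; to use the cluster default instead (nfs-taiga in the listing above), the field can simply be left out. A sketch:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kooper-pvc
spec:
  # no storageClassName: the cluster's default storage class is used
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```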

...

Mount the volume in a container (make sure you replace the claimName with your own claim)

Code Block
languageyml
titlePod with PVC
collapsetrue
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-pvc
spec:
  restartPolicy: Never
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: kooper-pvc
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data

...

On k8s-shell you can go to /taiga/<folder> and see the same data

Pods (cont'd)

website: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions

A simple webserver using the previously created PVC, with liveness and readiness probes

Code Block
languageyml
titlenginx pod
collapsetrue
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: kooper-pvc
  initContainers:
  - name: create-index-html
    image: busybox:1.28
    command:
      - sh
      - -c
      - echo "<html><body>Created $(date)</body></html>">/data/index.html
    volumeMounts:
    - name: data
      mountPath: /data
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        port: 80
    readinessProbe:
      httpGet:
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html

Forward a port so we can check it out

Code Block
languagebash
titleport forward
collapsetrue
> kubectl port-forward pod/nginx 8888:80
Forwarding from 127.0.0.1:8888 -> 80
Forwarding from [::1]:8888 -> 80

Deployment (deploy)

website: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

Take the previous pod and convert to a deployment

Code Block
languageyml
titlenginx deployment
collapsetrue
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    # copied from before
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: kooper-pvc
      initContainers:
      - name: create-index-html
        image: busybox:1.28
        command:
          - sh
          - -c
          - echo "<html><body>Created $(date)</body></html>">/data/index.html
        volumeMounts:
        - name: data
          mountPath: /data
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            port: 80
        readinessProbe:
          httpGet:
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html

Scaling the deployment

Code Block
languagebash
titlescale deployment
collapsetrue
> kubectl scale deployment/nginx --replicas=0
deployment.apps/nginx scaled

Service (svc)

website: https://kubernetes.io/docs/concepts/services-networking/service/

Exposing pod(s) with a service. This will act as a load balancer

Code Block
languageyml
titlenginx service
collapsetrue
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

We can check if this works by port forwarding to the service, e.g. kubectl port-forward service/nginx 8888:80

Ingress (ing)

website: https://kubernetes.io/docs/concepts/services-networking/ingress/

The final step is to map the hostname to the ingress, which routes to the service, which routes to the pods.

Code Block
languageyml
titlenginx ingress
collapsetrue
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - host: kooper.k8s-class.ncsa.cloud
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: nginx
            port:
              number: 80

We can now visit all of our work at https://kooper.k8s-class.ncsa.cloud/

Helm

website: https://helm.sh

Create Chart

Create the helm chart

Code Block
languagebash
titlecreate helm chart
collapsetrue
> helm create nginx
Creating nginx 

What is the version of the helm chart?

Code Block
languageyml
titleversion in chart.yaml
collapsetrue
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"

Install Chart

Customize the helm chart using some values

Code Block
languageyml
titlecustomization
collapsetrue
replicaCount: 3

ingress:
  enabled: true
  hosts:
    - host: kooper.k8s-class.ncsa.cloud
      paths:
        - path: /
          pathType: ImplementationSpecific

Check helm template

Code Block
languagebash
titlehelm template
collapsetrue
> helm template . -f custom.yaml
---
# Source: nginx/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
...
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['release-name-nginx:80']
  restartPolicy: Never

Installing helm chart

Code Block
languagebash
titleinstall helm chart
collapsetrue
> helm install nginx . -f custom.yaml
NAME: nginx
LAST DEPLOYED: Sun Jun 25 22:45:54 2023
NAMESPACE: kooper
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  http://kooper.k8s-class.ncsa.cloud/

Listing helm chart

Code Block
languagebash
titlelist helm chart
collapsetrue
> helm ls
NAME 	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART      	APP VERSION
nginx	kooper   	1       	2023-06-25 22:45:54.045250697 -0500 CDT	deployed	nginx-0.1.0	1.16.0

Storage

Add storage to the helm chart (two files need to be modified: the new templates/pvc.yaml below and values.yaml)

Code Block
languageyml
titlepvc.yaml
collapsetrue
{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ template "nginx.fullname" . }}-data
  labels:
    {{ include "nginx.labels" . | nindent 4 }}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: {{ .Values.persistence.size | quote }}
  {{- if .Values.persistence.storageClass }}
  storageClassName: {{ .Values.persistence.storageClass }}
  {{- end -}}
{{- end }}
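The second file is values.yaml; the template above references .Values.persistence.*, so it assumes matching defaults along these lines (the values shown are illustrative):

```yaml
persistence:
  ## enabled: should a PVC be created for this chart
  enabled: true
  ## existingClaim: use an already created PVC instead of creating one
  existingClaim: ""
  ## storageClass: leave empty to use the cluster default storage class
  storageClass: ""
  ## size: the requested volume size
  size: 1Gi
```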

Extend Chart (database)

We will have to do a few steps:

  • update the chart to add a dependency
  • update values.yaml with some defaults
  • update the deployment to wait for the database

Update chart.yaml

Code Block
languageyml
titleChart.yaml
collapsetrue
# Dependencies for chart. Some of the dependencies are only installed if they
# are enabled.
dependencies:
- name: postgresql
  version: ~8.9
  repository:  https://raw.githubusercontent.com/bitnami/charts/pre-2022/bitnami
  condition: postgresql.enabled

Update values.yaml with defaults:

Code Block
languageyml
titlevalues.yaml
collapsetrue
# start with the name of the chart (or alias)
postgresql:
  ## enabled: should the postgresql chart be installed as well. If not, make
  ## sure that the other values are provided to get access to the database.
  enabled: true

  ## postgresqlUsername is the initial user added to the system. 
  # postgresqlUsername: postgres

  ## postgresqlPassword the password for postgresqlUsername
  postgresqlPassword: supersecret

  # need more space
  persistence:
    size: 20Gi

And make sure we have a simple check in the deployment to see if the database is ready

(thanks Sara Lambert for this trick, see https://github.com/ncsa/checks)

Code Block
languageyml
titledeployment.yaml
collapsetrue
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
  ...
  template:
    ...
    spec:
      ...
      initContainers:
        - name: check-postgresql
          image: "ncsa/checks:1.0.1"
          env:
            - name: PGHOST
              value: {{ .Release.Name }}-postgresql
            - name: PGPORT
              value: "5432"
            - name: PGUSER
              value: {{ .Values.postgresql.postgresqlUsername | default "postgres" | quote }}
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{ .Release.Name }}-postgresql
                  key: postgresql-password

