Slides are attached: Kubernetes.pptx
Kubernetes
You can either run all of this locally on your machine or on k8s-shell. If you ssh, please set up port forwarding. Please use a random port, since others might be trying to use the same port as well. The port has to be greater than 1024 and less than 65535. Just pick something random and remember it.
ssh -L <port>:localhost:<port> username@k8s-shell
If you see bind [127.0.0.1]:<port>: Address already in use
this means something is already running on that port on your local machine, and you will need to change it to something else.
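If you want a quick way to pick a random port in that range, a one-liner like this works on most Linux machines (a sketch using shuf from coreutils):

# pick a random port between 1025 and 65534 and remember it
port=$(shuf -i 1025-65534 -n 1); echo $port
ssh -L $port:localhost:$port username@k8s-shell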
Kubernetes Access
Access to the cluster is controlled using SUSE Rancher. You can log in with your NCSA credentials. You should see a list of all the clusters you have access to; download the config for k8s-class.
export KUBECONFIG=~/k8s-class.yaml
kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.24.15
Kustomize Version: v4.5.4
Server Version: v1.24.9
We can see that we have access to the cluster (running Kubernetes 1.24.9).
Let's also check what other commands we can use.
Commands we will use are:
command | description |
---|---|
apply | Apply a configuration to a resource by file name or stdin |
create | Create a resource from a file or from stdin |
delete | Delete resources by file names, stdin, resources and names, or by resources and label selector |
exec | Execute a command in a container |
get | Display one or many resources |
logs | Print the logs for a container in a pod |
port-forward | Forward one or more local ports to a pod |
api-resources | Print the supported API resources on the server (also shows abbreviations) |
describe | Show details of a specific resource or group of resources |
version | Print the client and server version information |
config | Modify kubeconfig files |
On k8s-shell I added some aliases to minimize the typing:
alias | command |
---|---|
k | kubectl |
ka | kubectl apply |
kg | kubectl get |
kd | kubectl delete |
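If you want the same shortcuts in your own shell, the table maps to plain bash aliases (a sketch for your ~/.bashrc):

# shortcuts for the most common kubectl commands
alias k='kubectl'
alias ka='kubectl apply'
alias kg='kubectl get'
alias kd='kubectl delete'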
Nodes (no)
website: https://kubernetes.io/docs/concepts/architecture/nodes/
Time to see how many nodes we have. How do we get the list of the nodes?
> kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-class-controlplane-01 Ready controlplane,etcd 128m v1.24.9
k8s-class-controlplane-02 Ready controlplane,etcd 89m v1.24.9
k8s-class-controlplane-03 Ready controlplane,etcd 86m v1.24.9
k8s-class-worker-01 Ready worker 126m v1.24.9
k8s-class-worker-02 Ready worker 126m v1.24.9
k8s-class-worker-03 Ready worker 90m v1.24.9
Namespace (ns)
website: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
List all namespaces
> kubectl get namespace
NAME STATUS AGE
cattle-fleet-system Active 128m
cattle-impersonation-system Active 129m
cattle-system Active 129m
cinder-csi Active 125m
default Active 130m
kube-node-lease Active 130m
kube-public Active 130m
kube-system Active 130m
local Active 128m
metallb-system Active 122m
nfs-taiga Active 122m
traefik Active 122m
Create a namespace
> kubectl create namespace kooper
namespace/kooper created
Show the namespace object as YAML:
> kubectl get namespace kooper -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    cattle.io/status: '{"Conditions":[{"Type":"ResourceQuotaInit","Status":"True","Message":"","LastUpdateTime":"2023-06-25T13:35:36Z"},{"Type":"InitialRolesPopulated","Status":"True","Message":"","LastUpdateTime":"2023-06-25T13:35:36Z"}]}'
    lifecycle.cattle.io/create.namespace-auth: "true"
  creationTimestamp: "2023-06-25T13:35:35Z"
  finalizers:
  - controller.cattle.io/namespace-auth
  labels:
    kubernetes.io/metadata.name: kooper
  name: kooper
  resourceVersion: "296036"
  uid: 6c4092ca-4024-4814-a2c0-5739777ea3aa
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
Let's simplify this and remove some cruft:
apiVersion: v1
kind: Namespace
metadata:
  name: kooper
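If you save this as a file, you can create the namespace from it as well (namespace.yaml is an assumed file name):

> kubectl apply -f namespace.yaml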
Let's make our life easier and use this namespace by default:
> kubectl config set-context --current --namespace=kooper
Context "k8s-class" modified.
Pods (po)
website: https://kubernetes.io/docs/concepts/workloads/pods/
Simple YAML file for a pod (make sure you delete the namespace: kooper line, or replace kooper with your own namespace):
apiVersion: v1
kind: Pod
metadata:
  name: cowsay
  namespace: kooper
spec:
  restartPolicy: Never
  containers:
  - name: cowsay
    image: rancher/cowsay
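Save this to a file and apply it (cowsay.yaml is an assumed file name):

> kubectl apply -f cowsay.yaml
pod/cowsay created
> kubectl get pods

After a few seconds the pod should show status Completed, since the container runs once and exits (restartPolicy: Never).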
Check the output (log) of the pod:
> kubectl logs pod/cowsay
 _
< >
 -
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
Before we can change anything, we first need to delete that pod
> kubectl delete pod/cowsay
pod "cowsay" deleted
Make sure the cow goes Mooooo
apiVersion: v1
kind: Pod
metadata:
  name: cowsay
  namespace: kooper
spec:
  restartPolicy: Never
  containers:
  - name: cowsay
    image: rancher/cowsay
    command:
    - cowsay
    - Mooooo
A pod can contain multiple containers:
apiVersion: v1
kind: Pod
metadata:
  name: cowsay
  namespace: kooper
spec:
  restartPolicy: Never
  containers:
  - name: cowsay
    image: rancher/cowsay
    command:
    - cowsay
    - Mooooo
  - name: whalesay
    image: docker/whalesay
    command:
    - cowsay
    - Howdy
Listing the output of the containers
> kubectl logs cowsay
Defaulted container "cowsay" out of: cowsay, whalesay
 ________
< Mooooo >
 --------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
> kubectl logs cowsay -c whalesay
 _______
< Howdy >
 -------
    \
     \
      \
                    ##        .
              ## ## ##       ==
           ## ## ## ##      ===
       /""""""""""""""""\___/ ===
  ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
       \______ o          __/
        \    \        __/
          \____\______/
Debugging if something goes wrong:
> kubectl describe pod cowsay
Name: cowsay
Namespace: kooper
Priority: 0
Node: k8s-class-worker-01/192.168.2.42
Start Time: Sun, 25 Jun 2023 09:52:44 -0500
Labels: <none>
Annotations: <none>
Status: Failed
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5s default-scheduler Successfully assigned kooper/cowsay to k8s-class-worker-01
Normal Pulling 5s kubelet Pulling image "rancher/cowsay"
Normal Pulled 5s kubelet Successfully pulled image "rancher/cowsay" in 321.252759ms
Normal Created 5s kubelet Created container cowsay
Warning Failed 5s kubelet Error: failed to start container "cowsay": Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "Mooooo": executable file not found in $PATH: unknown
Storage
Storage Classes (sc)
website: https://kubernetes.io/docs/concepts/storage/storage-classes/
List the storage classes available in this cluster
> kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
csi-cinder-sc-delete cinder.csi.openstack.org Delete Immediate true 18h
csi-cinder-sc-retain cinder.csi.openstack.org Retain Immediate true 18h
nfs-taiga (default) cluster.local/nfs-taiga-nfs-subdir-external-provisioner Retain Immediate true 17h
Persistent Volume (pv)
website: https://kubernetes.io/docs/concepts/storage/volumes/
List all volumes
> kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-4ff2a331-466b-41c4-9c4a-18734c70badd 128Mi RWO Retain Bound traefik/traefik nfs-taiga 18h
Persistent Volume Claims (pvc)
website: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
Create a PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kooper-pvc
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Apply the PVC and check the volumes. Why is there no new volume?
> kubectl apply -f pvc1.yaml
persistentvolumeclaim/kooper-pvc created
> kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-4ff2a331-466b-41c4-9c4a-18734c70badd 128Mi RWO Retain Bound traefik/traefik nfs-taiga 18h
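The reason: the claim asks for storageClassName: manual, and no storage class with that name (and hence no provisioner) exists in this cluster, so nothing ever creates a matching PV. You can see this on the claim itself:

> kubectl get pvc kooper-pvc

The STATUS column will show Pending until a matching volume appears.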
> kubectl delete -f pvc1.yaml
persistentvolumeclaim "kooper-pvc" deleted
Delete the volume claim and modify it to create a volume on nfs-taiga:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kooper-pvc
spec:
  storageClassName: nfs-taiga
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
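Apply the updated claim; this time the nfs-taiga provisioner creates a backing volume automatically:

> kubectl apply -f pvc1.yaml
> kubectl get pv,pvc

kubectl get pv should now list a new volume bound to kooper-pvc.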
Mount the volume in a container (make sure you replace the claimName with your own claim):
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-pvc
spec:
  restartPolicy: Never
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: kooper-pvc
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
and edit data in the volume:
> kubectl exec -ti pod-with-pvc -- bash
root@pod-with-pvc:/#
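Anything written under /data ends up on the volume; for example (the file name is just an example):

root@pod-with-pvc:/# echo "hello from kubernetes" > /data/test.txt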
On k8s-shell you can go to /taiga/<folder> and see the same data
Pods (continued)
website: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions
A simple webserver using the previously created PVC, with liveness and readiness probes:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: kooper-pvc
  initContainers:
  - name: create-index-html
    image: busybox:1.28
    command:
    - sh
    - -c
    - echo "<html><body>Created $(date)</body></html>">/data/index.html
    volumeMounts:
    - name: data
      mountPath: /data
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        port: 80
    readinessProbe:
      httpGet:
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
Forward a port so we can check it out
> kubectl port-forward pod/nginx 8888:80
Forwarding from 127.0.0.1:8888 -> 80
Forwarding from [::1]:8888 -> 80
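In another terminal (or your browser at http://localhost:8888/) you should get back the page the init container wrote; the date will differ:

> curl http://localhost:8888/
<html><body>Created ...</body></html>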
Deployment (deploy)
website: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
Take the previous pod and convert it to a deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    # copied from before
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: kooper-pvc
      initContainers:
      - name: create-index-html
        image: busybox:1.28
        command:
        - sh
        - -c
        - echo "<html><body>Created $(date)</body></html>">/data/index.html
        volumeMounts:
        - name: data
          mountPath: /data
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            port: 80
        readinessProbe:
          httpGet:
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
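Apply the deployment (deployment.yaml is an assumed file name) and list its pods via the app=nginx label; you should see three replicas:

> kubectl apply -f deployment.yaml
> kubectl get pods -l app=nginx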
Scaling the deployment
> kubectl scale deployment/nginx --replicas=0
deployment.apps/nginx scaled
Service (svc)
website: https://kubernetes.io/docs/concepts/services-networking/service/
Exposing pod(s) with a service. This will act as a load balancer
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
We can check if this works by port forwarding, as sketched below.
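Forwarding to a service works just like forwarding to a pod; kubectl picks one of the pods behind the service (8888 again stands in for your chosen port):

> kubectl port-forward service/nginx 8888:80
Forwarding from 127.0.0.1:8888 -> 80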
Ingress (ing)
website: https://kubernetes.io/docs/concepts/services-networking/ingress/
The final step is to map the hostname to the ingress, which routes to the service and on to the pods.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - host: kooper.k8s-class.ncsa.cloud
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: nginx
            port:
              number: 80
We can now visit all of our work at https://kooper.k8s-class.ncsa.cloud/
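Or from the command line:

> curl https://kooper.k8s-class.ncsa.cloud/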
Helm
website: https://helm.sh
Create Chart
Create the helm chart
> helm create nginx
Creating nginx
What is the version of the helm chart?
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
Install Chart
Customize the helm chart using some values
replicaCount: 3

ingress:
  enabled: true
  hosts:
  - host: kooper.k8s-class.ncsa.cloud
    paths:
    - path: /
      pathType: ImplementationSpecific
Check the helm template output:
> helm template . -f custom.yaml
---
# Source: nginx/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
...
spec:
  containers:
  - name: wget
    image: busybox
    command: ['wget']
    args: ['release-name-nginx:80']
  restartPolicy: Never
Installing the helm chart
> helm install nginx . -f custom.yaml
NAME: nginx
LAST DEPLOYED: Sun Jun 25 22:45:54 2023
NAMESPACE: kooper
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
http://kooper.k8s-class.ncsa.cloud/
Listing the installed helm charts
> helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
nginx kooper 1 2023-06-25 22:45:54.045250697 -0500 CDT deployed nginx-0.1.0 1.16.0
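When you change the chart or your values, roll the release forward with an upgrade; the REVISION column in helm ls will increase:

> helm upgrade nginx . -f custom.yaml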
Storage
Add storage to the helm chart (we need to modify two files: the PVC template below, and values.yaml as sketched after it)
{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ template "nginx.fullname" . }}-data
  labels:
    {{- include "nginx.labels" . | nindent 4 }}
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: {{ .Values.persistence.size | quote }}
  {{- if .Values.persistence.storageClass }}
  storageClassName: {{ .Values.persistence.storageClass }}
  {{- end -}}
{{- end }}
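The template reads several .Values.persistence.* settings, so values.yaml needs a matching block; a minimal sketch (the defaults shown here are assumptions):

persistence:
  enabled: true
  ## existingClaim: set this to reuse an existing claim instead of creating one
  # existingClaim: my-claim
  ## storageClass: leave empty to use the cluster default storage class
  storageClass: ""
  size: 1Gi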
Extend Chart (database)
We will have to do a few steps:
- update the chart to add a dependency
- update values.yaml with some defaults
- update the deployment to wait for the database
Update Chart.yaml:
# Dependencies for chart. Some of the dependencies are only installed if they
# are enabled.
dependencies:
- name: postgresql
  version: ~8.9
  repository: https://raw.githubusercontent.com/bitnami/charts/pre-2022/bitnami
  condition: postgresql.enabled
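After adding the dependency, helm has to download it into the charts/ directory before install or upgrade will work:

> helm dependency update .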
Update values.yaml with defaults:
# start with the name of the chart (or alias)
postgresql:
  ## enabled: should the postgresql chart be installed as well? If not, make
  ## sure that the other values are provided to get access to the database.
  enabled: true
  ## postgresqlUsername is the initial user added to the system.
  # postgresqlUsername: postgres
  ## postgresqlPassword is the password for postgresqlUsername
  postgresqlPassword: supersecret
  # need more space
  persistence:
    size: 20Gi
And make sure we have a simple check in the deployment to see if the database is ready
(thanks Sara Lambert for this trick, see https://github.com/ncsa/checks)
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      ...
      initContainers:
      - name: check-postgresql
        image: "ncsa/checks:1.0.1"
        env:
        - name: PGHOST
          value: {{ .Release.Name }}-postgresql
        - name: PGPORT
          value: "5432"
        - name: PGUSER
          value: {{ .Values.postgresql.postgresqlUsername | default "postgres" | quote }}
        - name: PGPASSWORD
          valueFrom:
            secretKeyRef:
              name: {{ .Release.Name }}-postgresql
              key: postgresql-password