Kubernetes
Kubernetes Access
Access to the cluster is controlled using SUSE Rancher. You can log in with your NCSA credentials. You should see a list of all the clusters you have access to; download the kubeconfig for the k8s-class cluster.
export KUBECONFIG=~/k8s-class.yaml
kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.24.15
Kustomize Version: v4.5.4
Server Version: v1.24.9
We can see that we have access to the cluster (running Kubernetes v1.24.9).
Let's also check what other commands we can run.
Commands we will use are:
| command | description |
| --- | --- |
| apply | Apply a configuration to a resource by file name or stdin |
| create | Create a resource from a file or from stdin |
| delete | Delete resources by file names, stdin, resources and names, or by resources and label selector |
| describe | Show details of a specific resource or group of resources |
| version | Print the client and server version information |
| config | Modify kubeconfig files |
Nodes (no)
Time to see how many nodes we have. How do we get the list of the nodes?
> kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-class-controlplane-01 Ready controlplane,etcd 128m v1.24.9
k8s-class-controlplane-02 Ready controlplane,etcd 89m v1.24.9
k8s-class-controlplane-03 Ready controlplane,etcd 86m v1.24.9
k8s-class-worker-01 Ready worker 126m v1.24.9
k8s-class-worker-02 Ready worker 126m v1.24.9
k8s-class-worker-03 Ready worker 90m v1.24.9
Namespace (ns)
List all namespaces
> kubectl get namespace
NAME STATUS AGE
cattle-fleet-system Active 128m
cattle-impersonation-system Active 129m
cattle-system Active 129m
cinder-csi Active 125m
default Active 130m
kube-node-lease Active 130m
kube-public Active 130m
kube-system Active 130m
local Active 128m
metallb-system Active 122m
nfs-taiga Active 122m
traefik Active 122m
Create a namespace
> kubectl create namespace kooper
namespace/kooper created
Show the namespace object in YAML
> kubectl get namespace kooper -o yaml
```yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    cattle.io/status: '{"Conditions":[{"Type":"ResourceQuotaInit","Status":"True","Message":"","LastUpdateTime":"2023-06-25T13:35:36Z"},{"Type":"InitialRolesPopulated","Status":"True","Message":"","LastUpdateTime":"2023-06-25T13:35:36Z"}]}'
    lifecycle.cattle.io/create.namespace-auth: "true"
  creationTimestamp: "2023-06-25T13:35:35Z"
  finalizers:
  - controller.cattle.io/namespace-auth
  labels:
    kubernetes.io/metadata.name: kooper
  name: kooper
  resourceVersion: "296036"
  uid: 6c4092ca-4024-4814-a2c0-5739777ea3aa
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
```
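The same namespace could also be created declaratively with `kubectl apply -f`. A minimal manifest (a sketch, equivalent to the `kubectl create namespace` command above) would be:

```yaml
# Minimal Namespace manifest; apply with: kubectl apply -f namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kooper
```

The declarative form is handy once the namespace definition lives in version control alongside the rest of your manifests.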
Let's make our lives easier and use this namespace by default
> kubectl config set-context --current --namespace=kooper
Context "k8s-class" modified.
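After this command, the context stanza in ~/k8s-class.yaml carries the default namespace. A sketch of the relevant excerpt (the exact cluster and user names depend on your kubeconfig; the values below are illustrative):

```yaml
# Excerpt from the kubeconfig after set-context (values are illustrative)
contexts:
- name: k8s-class
  context:
    cluster: k8s-class
    user: k8s-class      # user name is an assumption; yours may differ
    namespace: kooper    # added by the set-context command
```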
Pods (po)
A simple YAML file for a pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cowsay
  namespace: kooper
spec:
  restartPolicy: Never
  containers:
  - name: cowsay
    image: rancher/cowsay
```
Check the output (log) of the pod:
> kubectl logs pod/cowsay
 _
< >
 -
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
Before we can change anything, we first need to delete that pod
> kubectl delete pod/cowsay
pod "cowsay" deleted
Make sure the cow goes Mooooo
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cowsay
  namespace: kooper
spec:
  restartPolicy: Never
  containers:
  - name: cowsay
    image: rancher/cowsay
    command:
    - cowsay
    - Mooooo
```
A pod can contain multiple containers:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cowsay
  namespace: kooper
spec:
  restartPolicy: Never
  containers:
  - name: cowsay
    image: rancher/cowsay
    command:
    - cowsay
    - Mooooo
  - name: whalesay
    image: docker/whalesay
    command:
    - cowsay
    - Howdy
```
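Containers in the same pod share the network namespace and can share volumes. A hedged sketch of two containers passing data through a shared emptyDir volume (the pod name, images, and paths here are illustrative, not from the class material):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-example
  namespace: kooper
spec:
  restartPolicy: Never
  volumes:
  - name: shared        # emptyDir exists for the lifetime of the pod
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo Mooooo > /data/msg"]
    volumeMounts:
    - name: shared
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 2; cat /data/msg"]
    volumeMounts:
    - name: shared
      mountPath: /data
```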
Listing the output of the containers
> kubectl logs cowsay
Defaulted container "cowsay" out of: cowsay, whalesay
 ________
< Mooooo >
 --------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
> kubectl logs cowsay -c whalesay
 _______
< Howdy >
 -------
    \
     \
      \
                    ##        .
              ## ## ##       ==
           ## ## ## ##      ===
       /""""""""""""""""___/ ===
  ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
       \______ o          __/
        \    \        __/
          \____\______/
Debugging if something goes wrong
> kubectl describe pod cowsay
Name: cowsay
Namespace: kooper
Priority: 0
Node: k8s-class-worker-01/192.168.2.42
Start Time: Sun, 25 Jun 2023 09:52:44 -0500
Labels: <none>
Annotations: <none>
Status: Failed
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5s default-scheduler Successfully assigned kooper/cowsay to k8s-class-worker-01
Normal Pulling 5s kubelet Pulling image "rancher/cowsay"
Normal Pulled 5s kubelet Successfully pulled image "rancher/cowsay" in 321.252759ms
Normal Created 5s kubelet Created container cowsay
Warning Failed 5s kubelet Error: failed to start container "cowsay": Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "Mooooo": executable file not found in $PATH: unknown
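The error above happens because `command` replaces the image's ENTRYPOINT, so Kubernetes tried to execute `Mooooo` itself as a program. Keeping `cowsay` as the first element of `command` works; an equivalent sketch splits the executable and its arguments across `command` and `args`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cowsay
  namespace: kooper
spec:
  restartPolicy: Never
  containers:
  - name: cowsay
    image: rancher/cowsay
    command: ["cowsay"]   # overrides the image ENTRYPOINT
    args: ["Mooooo"]      # overrides the image CMD
```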
Storage
Storage Classes (sc)
List the storage classes available in this cluster
> kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
csi-cinder-sc-delete cinder.csi.openstack.org Delete Immediate true 18h
csi-cinder-sc-retain cinder.csi.openstack.org Retain Immediate true 18h
nfs-taiga (default) cluster.local/nfs-taiga-nfs-subdir-external-provisioner Retain Immediate true 17h
Persistent Volume (pv)
List all volumes
> kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-4ff2a331-466b-41c4-9c4a-18734c70badd 128Mi RWO Retain Bound traefik/traefik nfs-taiga 18h
Persistent Volume Claims (pvc)
Create a PVC
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kooper-pvc
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```
Apply the PVC and check the volumes. Why is there no new volume?
> kubectl apply -f pvc1.yaml
persistentvolumeclaim/kooper-pvc created
> kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-4ff2a331-466b-41c4-9c4a-18734c70badd 128Mi RWO Retain Bound traefik/traefik nfs-taiga 18h
> kubectl delete -f pvc1.yaml
persistentvolumeclaim "kooper-pvc" deleted
The claim stays Pending because this cluster has no storage class named manual, so nothing can provision a volume for it. Delete the claim and modify it to create a volume on nfs-taiga:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kooper-pvc
spec:
  storageClassName: nfs-taiga
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```
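Once the claim is Bound, a pod can mount it as a volume. A sketch (the pod name, image, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
  namespace: kooper
spec:
  restartPolicy: Never
  containers:
  - name: shell
    image: busybox
    command: ["sh", "-c", "df -h /data"]   # show the mounted volume
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: kooper-pvc
```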
Deployment (deploy)
Service (svc)
Ingress (ing)
Helm