[따배쿠] Controller - DaemonSet
DaemonSet
- Guarantees that exactly one Pod runs on each node
- Useful for running programs such as log collectors and monitoring agents
- Also supports rolling updates
DaemonSet Definition
The difference from a ReplicaSet is that there is no replicas field.
Since a DaemonSet already guarantees one Pod per node, there is no need to specify replicas:
only one Pod per node will ever be created anyway.
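A DaemonSet can also be restricted to a subset of nodes by adding a nodeSelector to the Pod template. A minimal sketch, assuming a hypothetical node label `role: monitoring` (not part of the example below):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent          # hypothetical name for illustration
spec:
  selector:
    matchLabels:
      app: agent
  template:
    metadata:
      labels:
        app: agent
    spec:
      nodeSelector:
        role: monitoring          # Pods are created only on nodes labeled role=monitoring
      containers:
      - name: agent
        image: nginx:1.14         # placeholder image
```

Nodes without the label get no Pod; labeling a node later (`kubectl label node <name> role=monitoring`) makes the DaemonSet create one there automatically.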
Example
# Delete node2
root@master:~/Getting-Start-Kubernetes/6# kubectl delete nodes node2
node "node2" deleted
root@master:~/Getting-Start-Kubernetes/6# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane 32d v1.30.6
node1 Ready <none> 32d v1.30.6
# Write daemonset-exam.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-nginx
spec:
  selector:
    matchLabels:
      app: webui
  template:
    metadata:
      name: nginx-pod
      labels:
        app: webui
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.14
# Create and verify
root@master:~/Getting-Start-Kubernetes/6# kubectl create -f daemonset-exam.yaml
daemonset.apps/daemonset-nginx created
root@master:~/Getting-Start-Kubernetes/6# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
daemonset-nginx-lf8dn 1/1 Running 0 5s 192.168.166.142 node1 <none> <none>
You can see that one Pod is running on node1.
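Note that no Pod is scheduled on the master node: kubeadm taints control-plane nodes with `node-role.kubernetes.io/control-plane:NoSchedule` by default, and this DaemonSet has no matching toleration. If the DaemonSet should run on the control plane as well, a toleration can be added to the Pod template — a sketch:

```yaml
# Fragment of the DaemonSet's Pod template: tolerate the control-plane taint
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
```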
# Rejoin node2
[master]
root@master:~/Getting-Start-Kubernetes/6# kubeadm token create
rqqxun.rv7gdprf6tgkzhdq
root@master:~/Getting-Start-Kubernetes/6# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
[token]   23h   2024-12-06T06:59:24Z   authentication,signing   <none>   system:bootstrappers:kubeadm:default-node-token
A worker node needs a token to join the master node.
* kubeadm token create --ttl [duration] : create a token valid for the given duration
* kubeadm token list : list existing tokens
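Instead of assembling the join command by hand from the token and CA cert hash, kubeadm can print a ready-to-run join command in one step:

```shell
# On the master: create a new token and print the matching join command
kubeadm token create --print-join-command
# Output has the shape (values elided):
# kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```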
[node2]
root@node2:~# kubeadm reset
W1205 07:02:13.876745 240515 preflight.go:56] [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1205 07:03:46.454911 240515 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
* kubeadm reset : reset the node
root@node2:~# kubeadm join 10.100.0.104:6443 --token [token] --discovery-token-ca-cert-hash [hash]
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 505.314182ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Find the 'kubeadm join ...' command originally used to join the worker node to the master node
when the cluster was first set up, replace the token value, and run it.
[master]
root@master:~/Getting-Start-Kubernetes/6# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 32d v1.30.6
node1 Ready <none> 32d v1.30.6
node2 Ready <none> 2m31s v1.30.6
Running kubectl get nodes confirms that node2 has rejoined the cluster.
# Check the pods
root@master:~/Getting-Start-Kubernetes/6# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
daemonset-nginx-4xvzr 1/1 Running 0 3m56s 192.168.104.0 node2 <none> <none>
daemonset-nginx-lf8dn 1/1 Running 0 12m 192.168.166.142 node1 <none> <none>
As soon as node2 joins, the DaemonSet guarantees one Pod on node2 as well.
Example 2 (Rolling update)
# Check the DaemonSet status
root@master:~/Getting-Start-Kubernetes/6# kubectl get daemonsets.apps
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset-nginx 2 2 2 2 2 <none> 15m
One Pod is deployed on node1 and one on node2, so the current count is 2.
# Edit the DaemonSet
root@master:~/Getting-Start-Kubernetes/6# kubectl edit daemonsets.apps daemonset-nginx
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "1"
  creationTimestamp: "2024-12-05T06:58:26Z"
  generation: 1
  name: daemonset-nginx
  namespace: default
  resourceVersion: "240225"
  uid: a77cca4a-f15a-4d82-95d7-1b5f551b92d0
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: webui
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: webui
      name: nginx-pod
    spec:
      containers:
      - image: nginx:1.15
        imagePullPolicy: IfNotPresent
        name: nginx-container
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 2
  desiredNumberScheduled: 2
  numberAvailable: 2
  numberMisscheduled: 0
  numberReady: 2
  observedGeneration: 1
  updatedNumberScheduled: 2
Change the nginx image version from 1.14 to 1.15.
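How the rollout proceeds is controlled by the DaemonSet's updateStrategy: with type RollingUpdate and maxUnavailable: 1, at most one Pod is taken down and replaced at a time. Alternatively, type OnDelete defers the update until each Pod is deleted manually. A fragment showing both options:

```yaml
# Rolling update: replace at most one Pod at a time (the default strategy)
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
# Or, to apply the new template only when a Pod is deleted by hand:
#  updateStrategy:
#    type: OnDelete
```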
# Verify
root@master:~/Getting-Start-Kubernetes/6# kubectl describe pod daemonset-nginx-tddcd
Name: daemonset-nginx-tddcd
Namespace: default
Priority: 0
Service Account: default
Node: node1/10.100.0.101
Start Time: Thu, 05 Dec 2024 07:17:00 +0000
Labels: app=webui
controller-revision-hash=6654b84fc9
pod-template-generation=2
Annotations: cni.projectcalico.org/containerID: 0d2f67334bf1579957b756545a1ed2e83f533ab34bc98e3d86c7a979c204345d
cni.projectcalico.org/podIP: 192.168.166.139/32
cni.projectcalico.org/podIPs: 192.168.166.139/32
Status: Running
IP: 192.168.166.139
IPs:
IP: 192.168.166.139
Controlled By: DaemonSet/daemonset-nginx
Containers:
nginx-container:
Container ID: containerd://3f3dcc49242fc74560ff45f87a1d35cb17b80fed23bd611e535679c147901eda
Image: nginx:1.15
Image ID: docker.io/library/nginx@sha256:23b4dcdf0d34d4a129755fc6f52e1c6e23bb34ea011b315d87e193033bcd1b68
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 05 Dec 2024 07:17:01 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wrfbk (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-wrfbk:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 113s default-scheduler Successfully assigned default/daemonset-nginx-tddcd to node1
Normal Pulled 112s kubelet Container image "nginx:1.15" already present on machine
Normal Created 112s kubelet Created container nginx-container
Normal Started 112s kubelet Started container nginx-container
With just the kubectl edit command, you can confirm that the rolling update
of the Pods managed by the DaemonSet completed successfully.
Example 3 (Rollback)
# rollback
root@master:~/Getting-Start-Kubernetes/6# kubectl rollout undo daemonset daemonset-nginx
daemonset.apps/daemonset-nginx rolled back
# Verify
root@master:~/Getting-Start-Kubernetes/6# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
daemonset-nginx-ckrkl 1/1 Running 0 5s 192.168.104.6 node2 <none> <none>
daemonset-nginx-phw4p 1/1 Running 0 3s 192.168.166.143 node1 <none> <none>
root@master:~/Getting-Start-Kubernetes/6# kubectl describe pod daemonset-nginx-ckrkl
Name: daemonset-nginx-ckrkl
Namespace: default
Priority: 0
Service Account: default
Node: node2/10.100.0.102
Start Time: Thu, 05 Dec 2024 07:21:28 +0000
Labels: app=webui
controller-revision-hash=5f67dcf6d
pod-template-generation=3
Annotations: cni.projectcalico.org/containerID: 1a7ac32db496c0b7ccb0eb171ab1f2d59df5392c695a9f9fc3948fdd563e696e
cni.projectcalico.org/podIP: 192.168.104.6/32
cni.projectcalico.org/podIPs: 192.168.104.6/32
Status: Running
IP: 192.168.104.6
IPs:
IP: 192.168.104.6
Controlled By: DaemonSet/daemonset-nginx
Containers:
nginx-container:
Container ID: containerd://ed6b503786c830bbd3058e542604e0b48025067452ee2a2f536dd10e1fcfb4ef
Image: nginx:1.14
Image ID: docker.io/library/nginx@sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 05 Dec 2024 07:21:29 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5hq7t (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-5hq7t:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18s default-scheduler Successfully assigned default/daemonset-nginx-ckrkl to node2
Normal Pulled 17s kubelet Container image "nginx:1.14" already present on machine
Normal Created 17s kubelet Created container nginx-container
Normal Started 17s kubelet Started container nginx-container
You can confirm that the rollback to nginx:1.14 completed successfully.
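Revisions can also be inspected before rolling back. `kubectl rollout history` lists the recorded revisions (up to revisionHistoryLimit, which is 10 above), and `--to-revision` targets a specific one rather than just the previous:

```shell
# List recorded revisions of the DaemonSet
kubectl rollout history daemonset daemonset-nginx
# Roll back to a specific revision instead of just the previous one
kubectl rollout undo daemonset daemonset-nginx --to-revision=1
```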