@xdev-asia/xdev-knowledge-mcp 1.0.44 → 1.0.45
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/content/series/luyen-thi/luyen-thi-cka/chapters/01-cluster-architecture/lessons/01-kien-truc-cka-kubeadm.md +133 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/01-cluster-architecture/lessons/02-cluster-upgrade-kubeadm.md +147 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/01-cluster-architecture/lessons/03-rbac-cka.md +152 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/02-workloads-scheduling/lessons/04-deployments-daemonsets-statefulsets.md +186 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/02-workloads-scheduling/lessons/05-scheduling-taints-affinity.md +163 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/03-services-networking/lessons/06-services-endpoints-coredns.md +145 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/03-services-networking/lessons/07-ingress-networkpolicies-cni.md +172 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/04-storage/lessons/08-persistent-volumes-storageclass.md +159 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/05-troubleshooting/lessons/09-etcd-backup-restore.md +149 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/05-troubleshooting/lessons/10-troubleshooting-nodes.md +153 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/05-troubleshooting/lessons/11-troubleshooting-workloads.md +146 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/05-troubleshooting/lessons/12-troubleshooting-networking-exam.md +170 -0
- package/content/series/luyen-thi/luyen-thi-cka/index.md +1 -1
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/01-app-design-build/lessons/01-multi-container-pods.md +146 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/01-app-design-build/lessons/02-jobs-cronjobs-resources.md +174 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/02-app-deployment/lessons/03-rolling-updates-rollbacks.md +148 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/02-app-deployment/lessons/04-helm-kustomize.md +181 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/03-app-observability/lessons/05-probes-logging-debugging.md +183 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/04-app-environment-config/lessons/06-configmaps-secrets.md +182 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/04-app-environment-config/lessons/07-securitycontext-pod-security.md +168 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/04-app-environment-config/lessons/08-resources-qos.md +168 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/05-services-networking/lessons/09-services-ingress.md +182 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/05-services-networking/lessons/10-networkpolicies-exam-strategy.md +236 -0
- package/content/series/luyen-thi/luyen-thi-ckad/index.md +1 -1
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/01-kubernetes-fundamentals/lessons/01-kien-truc-kubernetes.md +137 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/01-kubernetes-fundamentals/lessons/02-pods-workloads-controllers.md +142 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/01-kubernetes-fundamentals/lessons/03-services-networking-storage.md +155 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/01-kubernetes-fundamentals/lessons/04-rbac-security.md +137 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/02-container-orchestration/lessons/05-container-runtimes-oci.md +137 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/02-container-orchestration/lessons/06-orchestration-patterns.md +147 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/03-cloud-native-architecture/lessons/07-cloud-native-architecture.md +143 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/04-observability-delivery/lessons/08-observability.md +143 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/04-observability-delivery/lessons/09-helm-gitops-cicd.md +162 -0
- package/content/series/luyen-thi/luyen-thi-kcna/index.md +1 -1
- package/data/quizzes.json +1059 -0
- package/package.json +1 -1
@@ -0,0 +1,133 @@
---
id: cka-d1-l01
title: 'Lesson 1: Kubernetes Architecture & Cluster Components'
slug: 01-kien-truc-cka-kubeadm
description: >-
  Control plane and worker node components. Bootstrapping a cluster with
  kubeadm. etcd, API Server, Scheduler, and Controller Manager in the CKA
  exam environment.
duration_minutes: 60
is_free: true
video_url: null
sort_order: 1
section_title: "Domain 1: Cluster Architecture, Installation & Configuration (25%)"
course:
  id: lt-cka-series-001
  title: 'CKA Exam Prep — Certified Kubernetes Administrator'
  slug: luyen-thi-cka
---

<img src="/storage/uploads/2026/04/k8s-cert-cka-bai1-kubeadm.png" alt="kubeadm cluster initialization sequence and kubeconfig" style="max-width: 800px; width: 100%; border-radius: 12px;" />

<h2 id="architecture">1. Kubernetes Architecture Review (CKA Focus)</h2>

<p>The CKA exam requires you to be able to troubleshoot cluster components, not just know the theory.</p>

<pre><code class="language-text">Control Plane Node                 Worker Nodes
──────────────────                 ────────────
kube-apiserver  ◄────────────────  kubelet
etcd                               kube-proxy
kube-scheduler                     container runtime
controller-manager                 (containerd)
cloud-controller-manager (opt)

All components communicate via the kube-apiserver (only the API server talks directly to etcd)</code></pre>

<table>
<thead><tr><th>Component</th><th>Location</th><th>Config / Pod Path</th><th>Troubleshoot</th></tr></thead>
<tbody>
<tr><td><strong>kube-apiserver</strong></td><td>Control plane</td><td><code>/etc/kubernetes/manifests/kube-apiserver.yaml</code></td><td><code>kubectl get pods -n kube-system</code></td></tr>
<tr><td><strong>etcd</strong></td><td>Control plane</td><td><code>/etc/kubernetes/manifests/etcd.yaml</code></td><td><code>etcdctl member list</code></td></tr>
<tr><td><strong>kube-scheduler</strong></td><td>Control plane</td><td><code>/etc/kubernetes/manifests/kube-scheduler.yaml</code></td><td>Logs in kube-system</td></tr>
<tr><td><strong>controller-manager</strong></td><td>Control plane</td><td><code>/etc/kubernetes/manifests/kube-controller-manager.yaml</code></td><td>Logs in kube-system</td></tr>
<tr><td><strong>kubelet</strong></td><td>Every node</td><td><code>/var/lib/kubelet/config.yaml</code>, systemd service</td><td><code>systemctl status kubelet</code></td></tr>
</tbody>
</table>

<blockquote><p><strong>Exam tip:</strong> On a kubeadm cluster, the control plane components run as <strong>static Pods</strong> (files in <code>/etc/kubernetes/manifests/</code>). The kubelet starts and restarts them automatically. When you edit a manifest file, the kubelet reloads the Pod on its own; no <code>kubectl apply</code> needed.</p></blockquote>

<h2 id="kubeadm">2. kubeadm — Cluster Bootstrap</h2>

<pre><code class="language-text"># 1. Init the control plane
kubeadm init --pod-network-cidr=10.244.0.0/16

# 2. Set up kubeconfig (after init)
mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config

# 3. Install a CNI plugin (required before nodes become Ready)
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# 4. Join worker nodes (token from the kubeadm init output)
kubeadm join 192.168.1.10:6443 --token abc.xyz \
  --discovery-token-ca-cert-hash sha256:...</code></pre>

<h2 id="kubeconfig">3. kubeconfig & Contexts</h2>

<pre><code class="language-text">~/.kube/config structure:
clusters:         → cluster API endpoints
users:            → auth credentials (certificates, tokens)
contexts:         → cluster + user + namespace combo
current-context   → active context

kubectl config commands:
kubectl config get-contexts                        # List contexts
kubectl config use-context NAME                    # Switch context
kubectl config current-context                     # Show the active context
kubectl config set-context NAME --namespace=prod   # Set a default namespace</code></pre>

<blockquote><p><strong>Exam tip:</strong> The CKA uses multiple clusters. Every question starts with "switch to the correct context before doing anything else". Remember to check <code>kubectl config current-context</code>.</p></blockquote>

<h2 id="static-pods">4. Static Pods</h2>

<p><strong>Static Pods</strong> are managed directly by the kubelet, without going through the API server. Their config files live in <code>staticPodPath</code> (usually <code>/etc/kubernetes/manifests/</code>).</p>

<table>
<thead><tr><th>Static Pod behavior</th><th>Difference from regular Pods</th></tr></thead>
<tbody>
<tr><td>Created directly by the kubelet</td><td>No ReplicaSet, no Deployment</td></tr>
<tr><td>Cannot be removed with kubectl delete</td><td>You must delete the manifest file</td></tr>
<tr><td>A mirror Pod appears on the API server</td><td>Visible in kubectl get pods, but read-only</td></tr>
</tbody>
</table>
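<p>As an illustration of what the kubelet picks up from that directory, a minimal static Pod manifest could look like the following sketch (the file name, Pod name, and image are hypothetical, not from the lesson):</p>

```yaml
# Hypothetical example: /etc/kubernetes/manifests/static-web.yaml
# The kubelet on this node creates the Pod as soon as the file appears;
# deleting the file deletes the Pod.
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
```

<p>The mirror Pod then shows up in <code>kubectl get pods</code> with the node name appended (e.g. <code>static-web-node01</code>), but it can only be changed through the file.</p>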

<h2 id="cheatsheet">5. Cheat Sheet</h2>

<table>
<thead><tr><th>Task</th><th>Command</th></tr></thead>
<tbody>
<tr><td>Check control plane health</td><td><code>kubectl get pods -n kube-system</code></td></tr>
<tr><td>View apiserver config</td><td><code>cat /etc/kubernetes/manifests/kube-apiserver.yaml</code></td></tr>
<tr><td>Kubelet status</td><td><code>systemctl status kubelet</code></td></tr>
<tr><td>Kubelet logs</td><td><code>journalctl -u kubelet -n 50</code></td></tr>
<tr><td>Regenerate join token</td><td><code>kubeadm token create --print-join-command</code></td></tr>
</tbody>
</table>

<h2 id="practice">6. Practice Questions</h2>

<p><strong>Q1:</strong> A cluster's kube-apiserver Pod is not running. You check <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> and find a syntax error. After fixing it, what happens?</p>
<ul>
<li>A) You must run kubectl apply -f kube-apiserver.yaml</li>
<li>B) The kubelet automatically detects the file change and restarts the static Pod ✓</li>
<li>C) kubeadm must be run to restart the control plane</li>
<li>D) The API server restart requires a full node reboot</li>
</ul>
<p><em>Explanation: Static Pods are managed by the kubelet, which watches the manifest directory for changes. When the YAML file is fixed, the kubelet automatically kills the old Pod and starts a new one.</em></p>

<p><strong>Q2:</strong> After running kubeadm init, a cluster admin runs kubectl get nodes and sees the control plane node with status "NotReady". What is the most likely cause?</p>
<ul>
<li>A) kubeadm init failed to create etcd</li>
<li>B) A CNI plugin has not been installed ✓</li>
<li>C) The kubelet service is not running</li>
<li>D) The cluster lacks worker nodes</li>
</ul>
<p><em>Explanation: After kubeadm init, nodes remain NotReady until a CNI (Container Network Interface) plugin is installed. Without a CNI, Pod networking doesn't work and nodes cannot report Ready.</em></p>

<p><strong>Q3:</strong> An administrator needs to run kubectl commands on a different cluster. What is the fastest way to switch without modifying the current kubeconfig permanently?</p>
<ul>
<li>A) Edit ~/.kube/config and change current-context</li>
<li>B) Use the --context flag or kubectl config use-context ✓</li>
<li>C) Create a new kubeconfig file and delete the old one</li>
<li>D) Re-run kubeadm init with the target cluster</li>
</ul>
<p><em>Explanation: kubectl config use-context TARGET switches the active context. Alternatively, use --context=TARGET on individual commands for per-command switching without changing the default.</em></p>
@@ -0,0 +1,147 @@
---
id: cka-d1-l02
title: 'Lesson 2: Cluster Upgrade with kubeadm'
slug: 02-cluster-upgrade-kubeadm
description: >-
  Upgrading a Kubernetes cluster with kubeadm. Node drain, cordon, and
  uncordon. Upgrade the control plane first, then the worker nodes.
duration_minutes: 55
is_free: true
video_url: null
sort_order: 2
section_title: "Domain 1: Cluster Architecture, Installation & Configuration (25%)"
course:
  id: lt-cka-series-001
  title: 'CKA Exam Prep — Certified Kubernetes Administrator'
  slug: luyen-thi-cka
---

<img src="/storage/uploads/2026/04/k8s-cert-cka-bai2-upgrade.png" alt="Kubernetes cluster upgrade sequence — control plane first, then workers" style="max-width: 800px; width: 100%; border-radius: 12px;" />

<h2 id="upgrade-overview">1. Upgrade Strategy Overview</h2>

<p>Kubernetes only supports upgrading <strong>one minor version</strong> at a time (v1.28 → v1.29; you cannot jump from v1.28 to v1.30). Always upgrade the control plane before the worker nodes.</p>

<pre><code class="language-text">Upgrade sequence:
1. Upgrade kubeadm (on the control plane node)
2. kubeadm upgrade apply v1.29.0 (control plane components)
3. Upgrade kubelet + kubectl (on the control plane)
4. For each worker node:
   a. kubectl drain NODE --ignore-daemonsets
   b. Upgrade kubeadm + kubelet + kubectl on the node
   c. kubeadm upgrade node
   d. kubectl uncordon NODE</code></pre>

<h2 id="drain-cordon">2. Drain, Cordon & Uncordon</h2>

<table>
<thead><tr><th>Command</th><th>Effect</th><th>When to use</th></tr></thead>
<tbody>
<tr><td><code>kubectl cordon NODE</code></td><td>Mark the node Unschedulable (no new Pods)</td><td>Preparing for maintenance</td></tr>
<tr><td><code>kubectl drain NODE</code></td><td>Cordon + evict all non-DaemonSet Pods</td><td>Upgrading / replacing the node</td></tr>
<tr><td><code>kubectl uncordon NODE</code></td><td>Mark the node Schedulable again</td><td>After maintenance is done</td></tr>
</tbody>
</table>

<pre><code class="language-text">Commonly used kubectl drain flags:
--ignore-daemonsets      # DaemonSet Pods cannot be evicted; skip them
--delete-emptydir-data   # Evict Pods that use emptyDir volumes
--force                  # Evict Pods not managed by a controller</code></pre>

<blockquote><p><strong>Exam tip:</strong> <code>kubectl drain</code> without <code>--ignore-daemonsets</code> fails if the node runs DaemonSet Pods. Always add this flag. If any Pods use <code>emptyDir</code>, you also need <code>--delete-emptydir-data</code>.</p></blockquote>

<h2 id="upgrade-steps">3. Upgrade Steps in Detail</h2>

<pre><code class="language-text"># ====== CONTROL PLANE NODE ======

# Step 1: Upgrade kubeadm
apt-mark unhold kubeadm
apt-get install -y kubeadm=1.29.0-00
apt-mark hold kubeadm

# Step 2: Verify the upgrade plan
kubeadm upgrade plan

# Step 3: Apply the upgrade
kubeadm upgrade apply v1.29.0

# Step 4: Upgrade kubelet + kubectl
apt-mark unhold kubelet kubectl
apt-get install -y kubelet=1.29.0-00 kubectl=1.29.0-00
apt-mark hold kubelet kubectl
systemctl daemon-reload && systemctl restart kubelet

# ====== WORKER NODE ======
# (SSH into the worker node for steps 2-4)

# Step 1: Drain the node (run from the control plane)
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data

# Step 2: Upgrade the packages on the worker
apt-mark unhold kubeadm kubelet kubectl
apt-get install -y kubeadm=1.29.0-00 kubelet=1.29.0-00 kubectl=1.29.0-00

# Step 3: Upgrade the node config
kubeadm upgrade node

# Step 4: Restart kubelet
systemctl daemon-reload && systemctl restart kubelet

# Step 5: Uncordon (run from the control plane)
kubectl uncordon worker-1</code></pre>

<h2 id="version-skew">4. Version Skew Policy</h2>

<table>
<thead><tr><th>Component</th><th>Allowed skew vs kube-apiserver</th></tr></thead>
<tbody>
<tr><td>kube-apiserver</td><td>Must be the same version as the other control plane instances</td></tr>
<tr><td>kubelet</td><td>Can be 2 minor versions older</td></tr>
<tr><td>kubectl</td><td>±1 minor version from the apiserver</td></tr>
<tr><td>kube-scheduler</td><td>Must match the apiserver version</td></tr>
</tbody>
</table>
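<p>The kubelet row above is simple arithmetic on minor versions. A minimal sketch of the check, with made-up version numbers rather than values read from a real cluster:</p>

```shell
# Hedged sketch: the kubelet may be the same minor version as the
# apiserver or up to 2 minor versions older, but never newer.
apiserver_minor=29
kubelet_minor=27

skew=$((apiserver_minor - kubelet_minor))
if [ "$skew" -ge 0 ] && [ "$skew" -le 2 ]; then
  echo "kubelet skew OK ($skew minor versions behind)"
else
  echo "kubelet skew NOT allowed"
fi
```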

<blockquote><p><strong>Exam tip:</strong> A worker node's kubelet may run an older version during the upgrade. This is why you can upgrade nodes one at a time while the cluster keeps working.</p></blockquote>

<h2 id="cheatsheet">5. Cheat Sheet</h2>

<table>
<thead><tr><th>Task</th><th>Command</th></tr></thead>
<tbody>
<tr><td>Check upgrade plan</td><td><code>kubeadm upgrade plan</code></td></tr>
<tr><td>Apply control plane upgrade</td><td><code>kubeadm upgrade apply v1.XX.0</code></td></tr>
<tr><td>Drain node (safe)</td><td><code>kubectl drain NODE --ignore-daemonsets --delete-emptydir-data</code></td></tr>
<tr><td>Mark node schedulable</td><td><code>kubectl uncordon NODE</code></td></tr>
<tr><td>Check node versions</td><td><code>kubectl get nodes -o wide</code></td></tr>
</tbody>
</table>

<h2 id="practice">6. Practice Questions</h2>

<p><strong>Q1:</strong> You need to upgrade a worker node. Before running the upgrade commands on the node, what step must you perform from the control plane?</p>
<ul>
<li>A) kubectl cordon the node to prevent new Pod scheduling</li>
<li>B) kubectl drain the node to evict running Pods ✓</li>
<li>C) kubectl delete the node and re-add it after the upgrade</li>
<li>D) Run kubeadm upgrade plan to verify compatibility</li>
</ul>
<p><em>Explanation: kubectl drain both cordons the node (marks it unschedulable) AND evicts all Pods gracefully. This ensures the node has no running workloads before maintenance begins. kubeadm upgrade plan runs on the control plane and is not required per worker node.</em></p>

<p><strong>Q2:</strong> The kubectl drain command fails with "cannot delete Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet". What flag resolves this?</p>
<ul>
<li>A) --ignore-daemonsets</li>
<li>B) --delete-emptydir-data</li>
<li>C) --force ✓</li>
<li>D) --grace-period=0</li>
</ul>
<p><em>Explanation: --force is required to evict Pods that are not managed by any controller. Without a controller, the Pod won't be rescheduled elsewhere, so kubectl warns you and requires --force to confirm you accept the potential data loss.</em></p>

<p><strong>Q3:</strong> What is the maximum kubelet version skew allowed compared to the kube-apiserver?</p>
<ul>
<li>A) ±1 minor version</li>
<li>B) 2 minor versions older ✓</li>
<li>C) Any version</li>
<li>D) Must be the identical version</li>
</ul>
<p><em>Explanation: Per the Kubernetes version skew policy, the kubelet can be at most 2 minor versions older than kube-apiserver. This allows rolling upgrades where nodes are upgraded one at a time while the control plane is already on the new version.</em></p>
@@ -0,0 +1,152 @@
---
id: cka-d1-l03
title: 'Lesson 3: RBAC & Authorization'
slug: 03-rbac-cka
description: >-
  In-depth RBAC for the CKA. Creating Roles, ClusterRoles, and RoleBindings.
  ServiceAccounts. Checking permissions with kubectl auth can-i.
  Certificate-based authentication.
duration_minutes: 60
is_free: true
video_url: null
sort_order: 3
section_title: "Domain 1: Cluster Architecture, Installation & Configuration (25%)"
course:
  id: lt-cka-series-001
  title: 'CKA Exam Prep — Certified Kubernetes Administrator'
  slug: luyen-thi-cka
---

<img src="/storage/uploads/2026/04/k8s-cert-cka-bai3-rbac-cka.png" alt="RBAC hands-on — ServiceAccount, RoleBinding, kubectl auth can-i" style="max-width: 800px; width: 100%; border-radius: 12px;" />

<h2 id="rbac-review">1. RBAC Concepts (CKA Depth)</h2>

<p>The CKA is hands-on: you create RBAC resources with kubectl, verify permissions, and debug access issues.</p>

<pre><code class="language-text">RBAC objects:
Role                → namespaced permissions
ClusterRole         → cluster-wide permissions
RoleBinding         → bind a Role or ClusterRole to a Subject (in a namespace)
ClusterRoleBinding  → bind a ClusterRole to a Subject (cluster-wide)

Subject types:
- User (a string; no K8s object exists for it)
- Group (a string)
- ServiceAccount (a K8s object: namespace/name)</code></pre>

<h2 id="create-rbac">2. Creating RBAC Imperatively</h2>

<pre><code class="language-text"># Create a Role (namespaced)
kubectl create role pod-reader \
  --verb=get,list,watch \
  --resource=pods \
  --namespace=default

# Create a RoleBinding
kubectl create rolebinding read-pods \
  --role=pod-reader \
  --user=jane \
  --namespace=default

# Create a ClusterRole
kubectl create clusterrole secret-reader \
  --verb=get,list \
  --resource=secrets

# Create a ClusterRoleBinding
kubectl create clusterrolebinding read-secrets \
  --clusterrole=secret-reader \
  --user=jane

# Bind a ClusterRole inside one namespace (use a RoleBinding!)
kubectl create rolebinding read-secrets-dev \
  --clusterrole=secret-reader \
  --user=jane \
  --namespace=dev</code></pre>

<blockquote><p><strong>Exam tip:</strong> Use <code>--dry-run=client -o yaml</code> to generate the YAML, then edit it. Much faster than writing it by hand. Example: <code>kubectl create role myrole --verb=get --resource=pods --dry-run=client -o yaml > role.yaml</code></p></blockquote>
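<p>For reference, the YAML generated for the <code>pod-reader</code> Role above looks roughly like this (defaulted fields such as <code>creationTimestamp</code> trimmed):</p>

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]               # "" = the core API group (Pods live here)
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```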

<h2 id="serviceaccounts">3. ServiceAccounts in the CKA</h2>

<pre><code class="language-text"># Create a ServiceAccount
kubectl create serviceaccount monitoring-sa -n default

# Bind permissions
kubectl create clusterrole metrics-reader \
  --verb=get,list,watch \
  --resource=pods,nodes

kubectl create clusterrolebinding monitoring-binding \
  --clusterrole=metrics-reader \
  --serviceaccount=default:monitoring-sa

# Use the SA in a Pod spec
spec:
  serviceAccountName: monitoring-sa</code></pre>
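<p>Expanding that last fragment into a complete manifest, a minimal Pod that runs as this ServiceAccount might look like the following sketch (the Pod name, image, and command are illustrative, not from the lesson):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: metrics-agent            # hypothetical name
  namespace: default
spec:
  serviceAccountName: monitoring-sa   # the SA token is mounted into the Pod
  containers:
  - name: agent
    image: busybox:1.36
    command: ["sleep", "3600"]
```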

<h2 id="verify-permissions">4. Verify Permissions — kubectl auth can-i</h2>

<pre><code class="language-text"># Check the current user's permissions
kubectl auth can-i get pods
kubectl auth can-i delete pods --namespace=production
kubectl auth can-i '*' '*'        # Check everything

# Check as another user (--as)
kubectl auth can-i get pods --as=jane
kubectl auth can-i get pods --as=jane --namespace=dev
kubectl auth can-i get secrets --as=system:serviceaccount:default:monitoring-sa</code></pre>

<h2 id="certificate-auth">5. Certificate-Based Authentication</h2>

<pre><code class="language-text">Create a user with a client certificate:
1. Generate a key:  openssl genrsa -out alice.key 2048
2. Create a CSR:    openssl req -new -key alice.key -out alice.csr -subj "/CN=alice/O=developers"
3. Sign with the K8s CA:
   # Create a CertificateSigningRequest object
   kubectl apply -f alice-csr.yaml
   kubectl certificate approve alice
4. Get the signed cert: kubectl get csr alice -o jsonpath='{.status.certificate}' | base64 -d > alice.crt
5. Add it to the kubeconfig</code></pre>
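<p>Step 3 references an <code>alice-csr.yaml</code> file whose contents the lesson does not show. A sketch of what such a CertificateSigningRequest object typically contains (the base64 value is a placeholder for the encoded CSR, not real data):</p>

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: alice
spec:
  # base64-encoded contents of alice.csr, e.g. from: base64 -w0 alice.csr
  request: <base64-encoded-CSR>
  signerName: kubernetes.io/kube-apiserver-client   # signer for user client certs
  usages:
  - client auth
```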

<blockquote><p><strong>Exam tip:</strong> The CKA often includes a task like "create a user with a certificate and bind RBAC". Remember the flow: generate key → CSR → approve CSR → extract cert → kubeconfig. Use <code>kubectl auth can-i</code> to verify.</p></blockquote>

<h2 id="cheatsheet">6. Cheat Sheet</h2>

<table>
<thead><tr><th>Task</th><th>Command</th></tr></thead>
<tbody>
<tr><td>Check what a user can do</td><td><code>kubectl auth can-i --list --as=user</code></td></tr>
<tr><td>Check a specific permission</td><td><code>kubectl auth can-i get secrets --as=user -n ns</code></td></tr>
<tr><td>Create a ServiceAccount</td><td><code>kubectl create sa SA-NAME -n NAMESPACE</code></td></tr>
<tr><td>Bind a role to an SA</td><td><code>--serviceaccount=ns:sa-name</code></td></tr>
<tr><td>Approve a cert request</td><td><code>kubectl certificate approve NAME</code></td></tr>
</tbody>
</table>

<h2 id="practice">7. Practice Questions</h2>

<p><strong>Q1:</strong> A developer needs read-only access to all Pods and Services across the entire cluster. Which RBAC approach is most appropriate?</p>
<ul>
<li>A) Create a Role with get/list in the default namespace</li>
<li>B) Create a ClusterRole with get/list on pods and services, then a ClusterRoleBinding ✓</li>
<li>C) Create a Role in each namespace</li>
<li>D) Grant the developer cluster-admin access</li>
</ul>
<p><em>Explanation: For cluster-wide access, use a ClusterRole (defines the permissions) + a ClusterRoleBinding (grants them cluster-wide). Creating Roles in each namespace is tedious and error-prone. cluster-admin is far too broad.</em></p>

<p><strong>Q2:</strong> After creating a ServiceAccount and RoleBinding for a monitoring application, you need to verify the SA can list pods in the "monitoring" namespace. Which command does this?</p>
<ul>
<li>A) kubectl get rolebinding -n monitoring</li>
<li>B) kubectl describe serviceaccount monitoring-sa</li>
<li>C) kubectl auth can-i list pods --as=system:serviceaccount:monitoring:monitoring-sa -n monitoring ✓</li>
<li>D) kubectl auth check serviceaccount monitoring-sa</li>
</ul>
<p><em>Explanation: kubectl auth can-i with --as=system:serviceaccount:NAMESPACE:NAME impersonates the ServiceAccount. This verifies the exact access path (SA → binding → role) rather than just inspecting the objects.</em></p>

<p><strong>Q3:</strong> A ClusterRole named "pod-manager" exists. You want user "alice" to use this ClusterRole but ONLY within the "staging" namespace. What should you create?</p>
<ul>
<li>A) A ClusterRoleBinding for alice → pod-manager</li>
<li>B) A RoleBinding in the staging namespace for alice → pod-manager ✓</li>
<li>C) A new Role in staging with the same permissions as pod-manager</li>
<li>D) A new ClusterRole scoped to the staging namespace</li>
</ul>
<p><em>Explanation: A RoleBinding can reference a ClusterRole but constrains it to the binding's namespace. This reuses the ClusterRole definition without granting cluster-wide access. A ClusterRoleBinding would grant access in all namespaces.</em></p>
@@ -0,0 +1,186 @@
---
id: cka-d2-l04
title: 'Lesson 4: Deployments, DaemonSets & StatefulSets'
slug: 04-deployments-daemonsets-statefulsets
description: >-
  Hands-on with Deployments (rolling updates, rollbacks), DaemonSets, and
  StatefulSets. Resource requests, limits, and horizontal scaling for the
  CKA exam.
duration_minutes: 55
is_free: true
video_url: null
sort_order: 4
section_title: "Domain 2: Workloads & Scheduling (15%)"
course:
  id: lt-cka-series-001
  title: 'CKA Exam Prep — Certified Kubernetes Administrator'
  slug: luyen-thi-cka
---

<img src="/storage/uploads/2026/04/k8s-cert-cka-bai4-workloads.png" alt="Deployment rolling update mechanism — ReplicaSets and rollback" style="max-width: 800px; width: 100%; border-radius: 12px;" />

<h2 id="deployments">1. Deployments — Rolling Updates & Rollbacks</h2>

<pre><code class="language-text"># Create a deployment
kubectl create deployment nginx --image=nginx:1.20 --replicas=3

# Scale
kubectl scale deployment nginx --replicas=5

# Update the image (triggers a rolling update)
kubectl set image deployment/nginx nginx=nginx:1.21

# Monitor the rollout
kubectl rollout status deployment/nginx
kubectl rollout history deployment/nginx

# Roll back to the previous version
kubectl rollout undo deployment/nginx
kubectl rollout undo deployment/nginx --to-revision=2</code></pre>

<table>
<thead><tr><th>Rollout Strategy</th><th>Key Fields</th><th>Behavior</th></tr></thead>
<tbody>
<tr><td><strong>RollingUpdate</strong> (default)</td><td>maxUnavailable, maxSurge</td><td>Gradual, zero-downtime across multiple replicas</td></tr>
<tr><td><strong>Recreate</strong></td><td>None</td><td>Kill all old Pods → deploy new ones (causes downtime)</td></tr>
</tbody>
</table>

<pre><code class="language-text">RollingUpdate settings:
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # Max Pods down during the update
      maxSurge: 1         # Max extra Pods that may run</code></pre>
|
|
55
|
+
|
|
56
|
+
<blockquote><p><strong>Exam tip:</strong> To get annotated rollout history, set the <code>kubernetes.io/change-cause</code> annotation on the Deployment (the older <code>--record</code> flag is deprecated). To roll back to a specific revision: <code>kubectl rollout undo deployment/name --to-revision=3</code></p></blockquote>
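<p>The change-cause annotation can be set directly with <code>kubectl annotate</code>; a short sketch (the deployment name <code>nginx</code> is taken from the example above):</p>

<pre><code class="language-text">kubectl annotate deployment/nginx kubernetes.io/change-cause="update to nginx:1.21"

# The note then shows up in the CHANGE-CAUSE column of:
kubectl rollout history deployment/nginx</code></pre>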
<h2 id="horizontal-scaling">2. HPA (Horizontal Pod Autoscaler)</h2>
<pre><code class="language-text"># Create an HPA imperatively
kubectl autoscale deployment nginx --cpu-percent=70 --min=2 --max=10

# Or declaratively via YAML
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70</code></pre>
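<p>To check that an HPA is actually reading metrics (this assumes metrics-server is installed in the cluster), the following commands are a useful sketch:</p>

<pre><code class="language-text"># Current vs. target utilization and the replica count
kubectl get hpa nginx-hpa

# Events explain why scaling did or did not happen
kubectl describe hpa nginx-hpa

# The HPA needs resource requests on the target Pods to compute utilization
kubectl top pods</code></pre>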
<h2 id="daemonset">3. DaemonSet Operations</h2>

<pre><code class="language-text"># DaemonSet update strategies
spec:
  updateStrategy:
    type: RollingUpdate   # Gradual update, node by node
    # OR
    type: OnDelete        # Manual: a Pod updates only when it is deleted

# View DaemonSets
kubectl get daemonset -n kube-system
kubectl get daemonset fluentd -o yaml

# Run a DaemonSet on tainted nodes (tolerations)
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule   # Also deploy on control plane nodes</code></pre>
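<p>With the <code>OnDelete</code> strategy, rolling out node by node is a manual loop; a sketch (the container name, the <code>app=fluentd</code> label, and the node name are assumptions for illustration):</p>

<pre><code class="language-text"># Update the image; existing Pods keep running the old version
kubectl set image daemonset/fluentd fluentd=fluentd:v2

# Delete the Pod on one node; the controller recreates it with the new image
kubectl delete pod -l app=fluentd --field-selector spec.nodeName=node-1</code></pre>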
<h2 id="statefulset">4. StatefulSet Operations</h2>

<pre><code class="language-text">apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "web"        # Headless Service required
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web            # Must match the selector
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:     # Each Pod gets its own PVC
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

---
# Headless Service (clusterIP: None)
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None   # Headless!
  selector:
    app: web</code></pre>
<blockquote><p><strong>Exam tip:</strong> A StatefulSet requires a <strong>Headless Service</strong> (clusterIP: None). This creates a DNS entry for each Pod: <code>web-0.web.default.svc.cluster.local</code>. Without the headless service, the StatefulSet still runs, but per-Pod DNS will not resolve.</p></blockquote>
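<p>The per-Pod DNS names can be verified with a throwaway client Pod; a sketch (the busybox image choice is an assumption):</p>

<pre><code class="language-text"># Each replica gets a stable name: web-0, web-1, web-2
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never \
  -- nslookup web-0.web.default.svc.cluster.local</code></pre>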
<h2 id="cheatsheet">5. Cheat Sheet</h2>

<table>
<thead><tr><th>Task</th><th>Command</th></tr></thead>
<tbody>
<tr><td>Update image</td><td><code>kubectl set image deploy/NAME CONTAINER=IMAGE</code></td></tr>
<tr><td>Rollback</td><td><code>kubectl rollout undo deploy/NAME</code></td></tr>
<tr><td>Rollout status</td><td><code>kubectl rollout status deploy/NAME</code></td></tr>
<tr><td>Pause rollout</td><td><code>kubectl rollout pause deploy/NAME</code></td></tr>
<tr><td>Resume rollout</td><td><code>kubectl rollout resume deploy/NAME</code></td></tr>
</tbody>
</table>
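<p>The pause/resume pair lets you batch several changes into a single rollout; a sketch using the <code>nginx</code> deployment from earlier:</p>

<pre><code class="language-text">kubectl rollout pause deploy/nginx
kubectl set image deploy/nginx nginx=nginx:1.22            # staged, not rolled out yet
kubectl set resources deploy/nginx -c nginx --limits=cpu=500m
kubectl rollout resume deploy/nginx                        # one combined rollout</code></pre>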
<h2 id="practice">6. Practice Questions</h2>

<p><strong>Q1:</strong> A Deployment was updated 3 times. The current version (revision 3) is causing issues. How do you revert to revision 1?</p>
<ul>
<li>A) kubectl rollout undo deployment/app</li>
<li>B) kubectl rollout undo deployment/app --to-revision=1 ✓</li>
<li>C) kubectl set image deployment/app app=old-image</li>
<li>D) kubectl delete deployment/app and recreate it</li>
</ul>
<p><em>Explanation: The --to-revision flag specifies which historical revision to roll back to. Without it, rollout undo goes to the previous revision (n-1). kubectl rollout history shows all revisions and their CHANGE-CAUSE annotations.</em></p>
<p><strong>Q2:</strong> A StatefulSet named "kafka" is deployed but Pods cannot resolve each other's DNS names. What is likely missing?</p>
<ul>
<li>A) The StatefulSet needs a Deployment alongside it</li>
<li>B) A Headless Service (clusterIP: None) with matching selector is required ✓</li>
<li>C) The namespace needs a NetworkPolicy allowing DNS</li>
<li>D) Each Pod needs a separate Service</li>
</ul>
<p><em>Explanation: StatefulSets require a Headless Service (clusterIP: None) to create DNS records for individual Pods (pod-name.service.namespace.svc.cluster.local). Without it, stable network identities don't work.</em></p>
<p><strong>Q3:</strong> You need to update a DaemonSet but want to control which nodes update first. Which updateStrategy should you use?</p>
<ul>
<li>A) RollingUpdate with maxUnavailable: 1</li>
<li>B) Recreate strategy</li>
<li>C) OnDelete — manually delete Pods node by node ✓</li>
<li>D) DaemonSets cannot be updated without recreating</li>
</ul>
<p><em>Explanation: The OnDelete strategy only updates a DaemonSet Pod when you manually delete it, giving full control over update order and timing. RollingUpdate would update Pods automatically in an order you do not control.</em></p>