@xdev-asia/xdev-knowledge-mcp 1.0.44 → 1.0.45
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/content/series/luyen-thi/luyen-thi-cka/chapters/01-cluster-architecture/lessons/01-kien-truc-cka-kubeadm.md +133 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/01-cluster-architecture/lessons/02-cluster-upgrade-kubeadm.md +147 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/01-cluster-architecture/lessons/03-rbac-cka.md +152 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/02-workloads-scheduling/lessons/04-deployments-daemonsets-statefulsets.md +186 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/02-workloads-scheduling/lessons/05-scheduling-taints-affinity.md +163 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/03-services-networking/lessons/06-services-endpoints-coredns.md +145 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/03-services-networking/lessons/07-ingress-networkpolicies-cni.md +172 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/04-storage/lessons/08-persistent-volumes-storageclass.md +159 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/05-troubleshooting/lessons/09-etcd-backup-restore.md +149 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/05-troubleshooting/lessons/10-troubleshooting-nodes.md +153 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/05-troubleshooting/lessons/11-troubleshooting-workloads.md +146 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/05-troubleshooting/lessons/12-troubleshooting-networking-exam.md +170 -0
- package/content/series/luyen-thi/luyen-thi-cka/index.md +1 -1
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/01-app-design-build/lessons/01-multi-container-pods.md +146 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/01-app-design-build/lessons/02-jobs-cronjobs-resources.md +174 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/02-app-deployment/lessons/03-rolling-updates-rollbacks.md +148 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/02-app-deployment/lessons/04-helm-kustomize.md +181 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/03-app-observability/lessons/05-probes-logging-debugging.md +183 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/04-app-environment-config/lessons/06-configmaps-secrets.md +182 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/04-app-environment-config/lessons/07-securitycontext-pod-security.md +168 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/04-app-environment-config/lessons/08-resources-qos.md +168 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/05-services-networking/lessons/09-services-ingress.md +182 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/05-services-networking/lessons/10-networkpolicies-exam-strategy.md +236 -0
- package/content/series/luyen-thi/luyen-thi-ckad/index.md +1 -1
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/01-kubernetes-fundamentals/lessons/01-kien-truc-kubernetes.md +137 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/01-kubernetes-fundamentals/lessons/02-pods-workloads-controllers.md +142 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/01-kubernetes-fundamentals/lessons/03-services-networking-storage.md +155 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/01-kubernetes-fundamentals/lessons/04-rbac-security.md +137 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/02-container-orchestration/lessons/05-container-runtimes-oci.md +137 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/02-container-orchestration/lessons/06-orchestration-patterns.md +147 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/03-cloud-native-architecture/lessons/07-cloud-native-architecture.md +143 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/04-observability-delivery/lessons/08-observability.md +143 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/04-observability-delivery/lessons/09-helm-gitops-cicd.md +162 -0
- package/content/series/luyen-thi/luyen-thi-kcna/index.md +1 -1
- package/data/quizzes.json +1059 -0
- package/package.json +1 -1
@@ -0,0 +1,149 @@
---
id: cka-d5-l09
title: 'Bài 9: etcd Backup & Restore'
slug: 09-etcd-backup-restore
description: >-
  etcd backup with etcdctl snapshot. Restoring a cluster from a backup. TLS
  certificates for etcd. A critical CKA exam task — must be mastered completely.
duration_minutes: 55
is_free: true
video_url: null
sort_order: 9
section_title: "Domain 5: Troubleshooting (30%)"
course:
  id: lt-cka-series-001
  title: 'Luyện thi CKA — Certified Kubernetes Administrator'
  slug: luyen-thi-cka
---

<img src="/storage/uploads/2026/04/k8s-cert-cka-bai9-etcd.png" alt="etcd Backup and Restore Procedure" style="max-width: 800px; width: 100%; border-radius: 12px;" />

<h2 id="etcd-overview">1. etcd — Overview</h2>

<p><strong>etcd</strong> is the distributed key-value store that holds the entire cluster state: Pods, Services, Secrets, ConfigMaps, Nodes. Losing etcd means losing the entire cluster.</p>

<pre><code class="language-text">etcd info from the etcd static Pod manifest:
cat /etc/kubernetes/manifests/etcd.yaml

Key paths:
--data-dir=/var/lib/etcd              # Data directory
--cert-file=/etc/kubernetes/pki/etcd/server.crt
--key-file=/etc/kubernetes/pki/etcd/server.key
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
--listen-client-urls=https://127.0.0.1:2379</code></pre>

<h2 id="etcdctl-setup">2. etcdctl Setup</h2>

<pre><code class="language-text"># Set API version (always use v3)
export ETCDCTL_API=3

# Find etcd certs
ls /etc/kubernetes/pki/etcd/
# ca.crt, server.crt, server.key, healthcheck-client.*

# Test connection
etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key</code></pre>

<blockquote><p><strong>Exam tip:</strong> Set <code>ETCDCTL_API=3</code> before using etcdctl. API v2 uses different commands and is not compatible. On the exam, if you forget the cert paths: <code>cat /etc/kubernetes/manifests/etcd.yaml | grep cert</code> or <code>kubectl describe pod etcd -n kube-system</code>.</p></blockquote>
<h2 id="backup">3. Backup etcd</h2>

<pre><code class="language-text">ETCDCTL_API=3 etcdctl snapshot save /opt/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify backup
ETCDCTL_API=3 etcdctl snapshot status /opt/etcd-backup.db \
  --write-out=table

# Output:
+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| abcdef12 |    12345 |       1234 |     4.5 MB |
+----------+----------+------------+------------+</code></pre>
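The four TLS/endpoint flags are identical for every etcdctl call, so a tiny helper can keep them consistent. A sketch, not part of the required exam commands — the paths are the kubeadm defaults, and the `etcdctl_flags` name is made up here:

```shell
#!/bin/sh
# Hypothetical helper: emit the four required etcdctl TLS/endpoint flags so
# that snapshot save, snapshot status, and member list stay consistent.
ETCD_PKI=/etc/kubernetes/pki/etcd    # kubeadm default; adjust if different
etcdctl_flags() {
  echo "--endpoints=https://127.0.0.1:2379 --cacert=$ETCD_PKI/ca.crt --cert=$ETCD_PKI/server.crt --key=$ETCD_PKI/server.key"
}
# On a real control-plane node this would be used as:
#   ETCDCTL_API=3 etcdctl snapshot save /opt/etcd-backup.db $(etcdctl_flags)
etcdctl_flags
```

Word splitting on the unquoted `$(etcdctl_flags)` is what turns the single string back into four separate flags.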
<h2 id="restore">4. Restore etcd</h2>

<pre><code class="language-text"># Step 1: Restore to a new data directory
ETCDCTL_API=3 etcdctl snapshot restore /opt/etcd-backup.db \
  --data-dir=/var/lib/etcd-restore

# Step 2: Update the etcd manifest to use the new data dir
vi /etc/kubernetes/manifests/etcd.yaml

# Change --data-dir and the hostPath volume:
spec:
  containers:
  - command:
    - --data-dir=/var/lib/etcd-restore   # Changed
  volumes:
  - hostPath:
      path: /var/lib/etcd-restore        # Changed
      type: DirectoryOrCreate
    name: etcd-data

# Step 3: kubelet detects the manifest change → restarts etcd
# Wait for etcd to restart (may take 2-3 min)
kubectl get pods -n kube-system | grep etcd</code></pre>

<blockquote><p><strong>Exam tip:</strong> After the restore, wait for the entire control plane to restart and sync. You may need to restart the kubelet: <code>systemctl restart kubelet</code>. If the API server does not come up, check its logs: <code>crictl logs $(crictl ps -a --name kube-apiserver -q)</code>.</p></blockquote>
<h2 id="cheatsheet">5. Cheat Sheet — etcd Backup/Restore</h2>

<pre><code class="language-text"># BACKUP (4 required flags):
ETCDCTL_API=3 etcdctl snapshot save BACKUP_PATH \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=CA_CERT \
  --cert=SERVER_CERT \
  --key=SERVER_KEY

# RESTORE (minimal):
ETCDCTL_API=3 etcdctl snapshot restore BACKUP_PATH \
  --data-dir=NEW_DATA_DIR

# Then update /etc/kubernetes/manifests/etcd.yaml → data-dir + volume path</code></pre>

<table>
<thead><tr><th>Cert File</th><th>Path</th><th>Flag</th></tr></thead>
<tbody>
<tr><td>CA cert</td><td>/etc/kubernetes/pki/etcd/ca.crt</td><td>--cacert</td></tr>
<tr><td>Server cert</td><td>/etc/kubernetes/pki/etcd/server.crt</td><td>--cert</td></tr>
<tr><td>Server key</td><td>/etc/kubernetes/pki/etcd/server.key</td><td>--key</td></tr>
</tbody>
</table>
<h2 id="practice">6. Practice Questions</h2>

<p><strong>Q1:</strong> You perform an etcd snapshot restore to /var/lib/etcd-new. The cluster does not recover. What step is most likely missing?</p>
<ul>
<li>A) You need to re-run kubeadm init</li>
<li>B) The etcd static Pod manifest data-dir and volume path must be updated to point to the new directory ✓</li>
<li>C) etcdctl restore must be run with a --force flag</li>
<li>D) The kube-apiserver certificate must be rotated</li>
</ul>
<p><em>Explanation: After restoring to a new directory, etcd's static Pod manifest (/etc/kubernetes/manifests/etcd.yaml) must be updated: change the --data-dir flag AND the hostPath volume path to the new directory. Otherwise, etcd still reads the old (broken) data directory.</em></p>

<p><strong>Q2:</strong> What environment variable must be set to use etcdctl v3 API commands?</p>
<ul>
<li>A) ETCD_VERSION=3</li>
<li>B) ETCDCTL_API=3 ✓</li>
<li>C) KUBECONFIG=/etc/kubernetes/etcd.conf</li>
<li>D) ETCD_ENDPOINT=localhost:2379</li>
</ul>
<p><em>Explanation: ETCDCTL_API=3 enables the v3 API commands (snapshot save, snapshot restore). Without it, older etcdctl builds default to v2, which uses a different command syntax and is incompatible with etcd v3 clusters (which all Kubernetes clusters use).</em></p>

<p><strong>Q3:</strong> Which of the following contains the TLS certificates required for etcdctl to communicate with the etcd server?</p>
<ul>
<li>A) /etc/kubernetes/pki/apiserver*.crt</li>
<li>B) The /etc/kubernetes/pki/etcd/ directory ✓</li>
<li>C) ~/.kube/config</li>
<li>D) /var/lib/etcd/certs/</li>
</ul>
<p><em>Explanation: etcd certificates are stored in /etc/kubernetes/pki/etcd/. Important files: ca.crt (CA), server.crt and server.key (for etcdctl). These paths are also defined in the etcd static Pod manifest.</em></p>
@@ -0,0 +1,153 @@
---
id: cka-d5-l10
title: 'Bài 10: Troubleshooting Nodes'
slug: 10-troubleshooting-nodes
description: >-
  Debug node NotReady: kubelet, container runtime, certificates. Node conditions,
  resource pressure, disk pressure. Systematic troubleshooting approach.
duration_minutes: 55
is_free: true
video_url: null
sort_order: 10
section_title: "Domain 5: Troubleshooting (30%)"
course:
  id: lt-cka-series-001
  title: 'Luyện thi CKA — Certified Kubernetes Administrator'
  slug: luyen-thi-cka
---

<img src="/storage/uploads/2026/04/k8s-cert-cka-bai10-node-debug.png" alt="Node Troubleshooting Decision Tree — NotReady debug workflow" style="max-width: 800px; width: 100%; border-radius: 12px;" />

<h2 id="node-conditions">1. Node Conditions</h2>

<pre><code class="language-text">kubectl describe node node1 | grep -A20 Conditions

Normal state:
Type             Status
----             ------
MemoryPressure   False     ← OK (True = low memory)
DiskPressure     False     ← OK (True = low disk)
PIDPressure      False     ← OK (True = too many processes)
Ready            True      ← Node is healthy

Problem states:
Ready            False     → kubelet not working
Ready            Unknown   → Node unreachable (network issue)</code></pre>

<h2 id="troubleshoot-not-ready">2. Troubleshoot NotReady Node</h2>

<pre><code class="language-text">Systematic approach — run in order:

1. Check node status
   kubectl get nodes
   kubectl describe node NODE_NAME | tail -40

2. SSH to the node
   ssh node1

3. Check the kubelet service
   systemctl status kubelet
   journalctl -u kubelet -n 50 --no-pager

4. Check the container runtime
   systemctl status containerd
   crictl ps      # List running containers
   crictl pods    # List pod sandboxes

5. Check certificates (a common issue as the cluster ages)
   ls /var/lib/kubelet/pki/
   openssl x509 -in /var/lib/kubelet/pki/kubelet.crt -noout -dates

6. Restart services if needed
   systemctl restart kubelet
   systemctl restart containerd</code></pre>

<blockquote><p><strong>Exam tip:</strong> NotReady debug workflow: <code>kubectl describe node</code> → SSH → <code>systemctl status kubelet</code> → <code>journalctl -u kubelet</code>. Most common causes: kubelet stopped, wrong API server address, or an expired certificate.</p></blockquote>
<h2 id="common-node-issues">3. Common Node Issues</h2>

<table>
<thead><tr><th>Symptom</th><th>Cause</th><th>Fix</th></tr></thead>
<tbody>
<tr><td>Node NotReady</td><td>kubelet crashed</td><td>systemctl restart kubelet</td></tr>
<tr><td>Node Unknown</td><td>Network partition</td><td>Check node network, firewall</td></tr>
<tr><td>MemoryPressure: True</td><td>Insufficient memory</td><td>Evict pods, scale nodes</td></tr>
<tr><td>DiskPressure: True</td><td>Disk full</td><td>Clean /var/log, /tmp, unused images</td></tr>
<tr><td>Pods stuck Terminating</td><td>Node unreachable</td><td>kubectl delete pod --force --grace-period=0</td></tr>
</tbody>
</table>
<h2 id="kubelet-config">4. kubelet Configuration</h2>

<pre><code class="language-text"># kubelet config locations
/var/lib/kubelet/config.yaml     # Main config
/etc/kubernetes/kubelet.conf     # kubeconfig (how kubelet connects to the API server)
/var/lib/kubelet/kubeconfig      # Alternative path

# Common kubelet config issues:
# Wrong apiserver address
cat /etc/kubernetes/kubelet.conf | grep server

# Wrong cluster DNS
cat /var/lib/kubelet/config.yaml | grep clusterDNS

# Check kubelet's certificate
cat /var/lib/kubelet/config.yaml | grep client-certificate</code></pre>
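The certificate-expiry check from step 5 above can be rehearsed anywhere by pointing openssl at a throwaway self-signed certificate instead of kubelet.crt. A sketch, assuming only that openssl is installed:

```shell
# Generate a short-lived self-signed cert (a stand-in for kubelet.crt),
# then read its validity window exactly as you would on a node.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 30 \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
openssl x509 -in /tmp/demo.crt -noout -dates
# Prints notBefore=... and notAfter=... lines. On a real node, substitute
# /var/lib/kubelet/pki/kubelet.crt and compare notAfter with today's date.
```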
<h2 id="node-cleanup">5. Node Image & Disk Cleanup</h2>

<pre><code class="language-text"># Check disk usage
df -h
du -sh /var/log/*
du -sh /var/lib/containerd

# Clean unused container images
crictl rmi --prune

# Remove old logs
find /var/log/pods -mtime +7 -delete

# Check PID pressure
ps aux | wc -l</code></pre>
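The `find ... -mtime +7 -delete` filter above can be tried safely against a scratch directory before pointing it at /var/log/pods. A sketch, assuming GNU coreutils (the `touch -d` date syntax):

```shell
# Create one old file and one fresh file, then delete only files older
# than 7 days — the same filter used on /var/log/pods above.
dir=$(mktemp -d)
touch -d "10 days ago" "$dir/old.log"   # GNU touch date syntax (assumed)
touch "$dir/new.log"
find "$dir" -type f -mtime +7 -delete
ls "$dir"                               # only new.log should remain
```

`-mtime +7` matches files whose age, in whole days, is greater than 7, so the 10-day-old file is deleted and the fresh one survives.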
<h2 id="cheatsheet">6. Cheat Sheet</h2>

<table>
<thead><tr><th>Task</th><th>Command</th></tr></thead>
<tbody>
<tr><td>Node health summary</td><td><code>kubectl describe node NAME</code></td></tr>
<tr><td>Kubelet status</td><td><code>systemctl status kubelet</code></td></tr>
<tr><td>Kubelet logs</td><td><code>journalctl -u kubelet -n 100</code></td></tr>
<tr><td>Running containers on node</td><td><code>crictl ps</code></td></tr>
<tr><td>Force delete stuck pod</td><td><code>kubectl delete pod NAME --force --grace-period=0</code></td></tr>
</tbody>
</table>
<h2 id="practice">7. Practice Questions</h2>

<p><strong>Q1:</strong> A node shows "Ready: Unknown" status. Which of the following is most likely causing this?</p>
<ul>
<li>A) The kubelet process crashed on the node</li>
<li>B) The node cannot be reached by the control plane (network issue) ✓</li>
<li>C) All Pods on the node are OOM-killed</li>
<li>D) The node has insufficient CPU resources</li>
</ul>
<p><em>Explanation: Ready: Unknown means the API server hasn't received a heartbeat from the kubelet recently. This typically indicates the node is unreachable (network partition, node powered off). Ready: False means the kubelet is reachable but reports a problem.</em></p>

<p><strong>Q2:</strong> After SSH-ing to a NotReady node, you run "systemctl status kubelet" and see "Active: failed". What should you check next?</p>
<ul>
<li>A) kubectl get pods -n kube-system</li>
<li>B) journalctl -u kubelet -n 50 to read the error logs ✓</li>
<li>C) Delete and recreate the node</li>
<li>D) Run kubeadm reset on the node</li>
</ul>
<p><em>Explanation: When the kubelet fails, journalctl shows the detailed error: certificate issues, wrong API server URL, missing /var/lib/kubelet/config.yaml, etc. This is always the first diagnostic step after confirming the kubelet is down.</em></p>

<p><strong>Q3:</strong> A node is reporting DiskPressure: True. What is the immediate effect on workloads?</p>
<ul>
<li>A) All Pods are immediately deleted</li>
<li>B) The node is marked unschedulable and BestEffort/Burstable Pods are evicted ✓</li>
<li>C) Only new Pod scheduling is prevented</li>
<li>D) The kubelet service stops</li>
</ul>
<p><em>Explanation: Under disk pressure, Kubernetes triggers pod eviction starting with BestEffort (no requests/limits), then Burstable. Guaranteed Pods are last to be evicted. The node is also tainted to prevent new scheduling.</em></p>
@@ -0,0 +1,146 @@
---
id: cka-d5-l11
title: 'Bài 11: Troubleshooting Workloads'
slug: 11-troubleshooting-workloads
description: >-
  Debug Pods: CrashLoopBackOff, ImagePullBackOff, Pending. Troubleshooting
  Deployments and Services. Systematic kubectl debugging workflow.
duration_minutes: 55
is_free: true
video_url: null
sort_order: 11
section_title: "Domain 5: Troubleshooting (30%)"
course:
  id: lt-cka-series-001
  title: 'Luyện thi CKA — Certified Kubernetes Administrator'
  slug: luyen-thi-cka
---

<img src="/storage/uploads/2026/04/k8s-cert-cka-bai11-workload-debug.png" alt="Pod Troubleshooting Workflow — CrashLoopBackOff, ImagePullBackOff, OOMKilled" style="max-width: 800px; width: 100%; border-radius: 12px;" />

<h2 id="pod-debug-workflow">1. Pod Debug Workflow</h2>

<pre><code class="language-text">Systematic Pod Troubleshooting:

kubectl get pod POD_NAME
│
├── Pending                 → Node issues or PVC not bound
├── Running but not working → Check logs, exec
├── CrashLoopBackOff        → App crashing
├── ImagePullBackOff        → Image or registry issue
└── Error                   → Start/init failure

For any issue → next step:
kubectl describe pod POD_NAME
(read the Events section at the bottom!)

For logs:
kubectl logs POD_NAME
kubectl logs POD_NAME --previous      (after a crash)
kubectl logs POD_NAME -c CONTAINER    (multi-container)</code></pre>

<h2 id="pod-issues">2. Common Pod Issues</h2>

<table>
<thead><tr><th>State</th><th>Cause</th><th>Debug</th></tr></thead>
<tbody>
<tr><td><strong>Pending</strong></td><td>Cannot be scheduled</td><td>describe pod → Events: Insufficient CPU/memory, or no nodes match affinity</td></tr>
<tr><td><strong>ImagePullBackOff</strong></td><td>Image does not exist / registry auth</td><td>Check image name typo, imagePullSecrets</td></tr>
<tr><td><strong>CrashLoopBackOff</strong></td><td>App crashes repeatedly</td><td>kubectl logs --previous, check app exit code</td></tr>
<tr><td><strong>OOMKilled</strong></td><td>Exceeds memory limit</td><td>kubectl describe pod → Container Reason: OOMKilled</td></tr>
<tr><td><strong>CreateContainerError</strong></td><td>Volume mount, ConfigMap, or Secret does not exist</td><td>describe pod Events</td></tr>
</tbody>
</table>

<blockquote><p><strong>Exam tip:</strong> The Events section of <code>kubectl describe pod</code> is the most important place to debug. CKA tasks often ask you to fix a broken pod — usually a typo in the image name, a wrong ConfigMap name, or a port conflict.</p></blockquote>
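<p>For the ImagePullBackOff row: pulling from a private registry needs an <code>imagePullSecrets</code> reference on the Pod. A minimal sketch — the Pod and secret names here are illustrative, and the <code>regcred</code> secret must already exist with type <code>kubernetes.io/dockerconfigjson</code>:</p>

<pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  name: private-app          # hypothetical name
spec:
  containers:
  - name: app
    image: mycompany/private-app:1.2
  imagePullSecrets:
  - name: regcred            # e.g. kubectl create secret docker-registry regcred ...
</code></pre>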
<h2 id="exec-debug">3. Exec & Debug</h2>

<pre><code class="language-text"># Exec into a running container
kubectl exec -it POD_NAME -- /bin/sh
kubectl exec -it POD_NAME -c CONTAINER_NAME -- bash

# Debug with an ephemeral container (v1.23+)
kubectl debug -it POD_NAME --image=busybox --target=app

# Copy files from/to a pod
kubectl cp POD_NAME:/var/log/app.log ./app.log
kubectl cp ./config.yaml POD_NAME:/tmp/config.yaml

# Port-forward for quick testing
kubectl port-forward pod/POD_NAME 8080:80
kubectl port-forward svc/SERVICE_NAME 8080:80</code></pre>

<h2 id="deployment-debug">4. Deployment Issues</h2>

<pre><code class="language-text"># Check deployment status
kubectl rollout status deployment/myapp
kubectl get replicaset -l app=myapp   # Check RS history

# Pod template issue: the deployment creates an RS but pods fail
kubectl describe replicaset RS_NAME   # Check pod template errors

# Deployment stuck in progress?
kubectl describe deployment myapp | grep -A5 Conditions

# Check events at the deployment level
kubectl get events --field-selector involvedObject.name=myapp --sort-by='.lastTimestamp'</code></pre>

<h2 id="service-debug">5. Service Connectivity Debug</h2>

<pre><code class="language-text">Debug service connectivity:

1. Check endpoints
   kubectl get endpoints SERVICE_NAME
   → Empty: selector mismatch

2. Test from within the cluster
   kubectl run test --image=busybox --rm -it -- wget -O- http://SERVICE_NAME:PORT

3. Check kube-proxy
   kubectl get pods -n kube-system -l k8s-app=kube-proxy

4. Check iptables (on the node)
   iptables -t nat -L KUBE-SERVICES | grep SERVICE_NAME</code></pre>
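<p>The empty-Endpoints case in step 1 almost always comes down to a selector vs. label mismatch. A sketch of a matching pair (all names here are illustrative):</p>

<pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myapp          # must match the Pod labels exactly
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp          # a typo here (e.g. "myapps") → endpoints stay empty
spec:
  containers:
  - name: app
    image: nginx
    ports:
    - containerPort: 8080
</code></pre>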
<h2 id="cheatsheet">6. Cheat Sheet</h2>

<table>
<thead><tr><th>Task</th><th>Command</th></tr></thead>
<tbody>
<tr><td>Previous container logs</td><td><code>kubectl logs POD --previous</code></td></tr>
<tr><td>All events in namespace</td><td><code>kubectl get events --sort-by='.lastTimestamp'</code></td></tr>
<tr><td>Quick connectivity test</td><td><code>kubectl run test --image=busybox --rm -it -- wget -qO- URL</code></td></tr>
<tr><td>Check pod exit code</td><td><code>kubectl describe pod | grep "Exit Code"</code></td></tr>
<tr><td>Multi-container logs</td><td><code>kubectl logs POD -c CONTAINER</code></td></tr>
</tbody>
</table>

<h2 id="practice">7. Practice Questions</h2>

<p><strong>Q1:</strong> A Pod is in CrashLoopBackOff. The application log shows "Error: failed to connect to database at localhost:5432". What is the issue?</p>
<ul>
<li>A) The database Service is misconfigured</li>
<li>B) The app uses localhost to reach the database, but no container in the same Pod runs a database ✓</li>
<li>C) The Pod lacks sufficient memory</li>
<li>D) The database password in the Secret is incorrect</li>
</ul>
<p><em>Explanation: Pods share a network namespace, so "localhost" within a Pod only reaches other containers in the SAME Pod. If the database is in a separate Pod, the app should use the Service DNS name (e.g., pg-service.namespace.svc.cluster.local), not localhost.</em></p>

<p><strong>Q2:</strong> A Deployment's Pods are stuck in ImagePullBackOff. The image name is "mycompany/private-app:1.2". What should you verify first?</p>
<ul>
<li>A) The image exists on Docker Hub with the exact tag</li>
<li>B) The Deployment has an imagePullSecrets entry referencing registry credentials ✓</li>
<li>C) The node has enough disk space</li>
<li>D) The Service is correctly configured</li>
</ul>
<p><em>Explanation: Private registries require authentication. The Pod must have an imagePullSecrets field referencing a Secret with registry credentials (type: kubernetes.io/dockerconfigjson). Also verify the image name and tag are correct.</em></p>

<p><strong>Q3:</strong> You run "kubectl get endpoints myservice" and the result shows "&lt;none&gt;". What is the most likely problem?</p>
<ul>
<li>A) The Service port is wrong</li>
<li>B) No Pods with labels matching the Service selector are in Ready state ✓</li>
<li>C) The Ingress is misconfigured</li>
<li>D) kube-proxy is not running</li>
</ul>
<p><em>Explanation: Endpoints are populated when Pods match the Service selector AND are Ready. Common causes: label mismatch (typo in selector); all Pods are Pending/CrashLooping so not Ready; wrong namespace. Check: kubectl get pods -l APP=LABEL --show-labels.</em></p>
@@ -0,0 +1,170 @@
---
id: cka-d5-l12
title: 'Bài 12: Troubleshooting Networking & Exam Strategy'
slug: 12-troubleshooting-networking-exam
description: >-
  Debug network connectivity. DNS issues, unreachable Services. Cluster
  networking flow. CKA exam tips, time management, and command shortcuts.
duration_minutes: 60
is_free: true
video_url: null
sort_order: 12
section_title: "Domain 5: Troubleshooting (30%)"
course:
  id: lt-cka-series-001
  title: 'Luyện thi CKA — Certified Kubernetes Administrator'
  slug: luyen-thi-cka
---

<img src="/storage/uploads/2026/04/k8s-cert-cka-bai12-network-debug.png" alt="Network Troubleshooting Layers — Layer-by-layer debug approach" style="max-width: 800px; width: 100%; border-radius: 12px;" />

<h2 id="network-debug">1. Network Troubleshooting Workflow</h2>

<pre><code class="language-text">Network connectivity issue:
Pod A cannot reach Pod B (or a Service)

Layer-by-layer debug:

1. Same node, same namespace?
   kubectl exec pod-a -- ping POD_B_IP

2. Different node?
   kubectl get pod pod-a pod-b -o wide   # Check node placement

3. Via Service name (DNS)?
   kubectl exec pod-a -- nslookup my-service
   kubectl exec pod-a -- wget -qO- http://my-service:8080

4. NetworkPolicy blocking?
   kubectl get networkpolicy -n NAMESPACE

5. kube-proxy working?
   kubectl get pods -n kube-system -l k8s-app=kube-proxy</code></pre>

<h2 id="network-flow">2. Full Network Flow Diagram</h2>

<pre><code class="language-text">Client Pod ──(routes to)──► Service ClusterIP (iptables/ipvs)
                                       │
                         kube-proxy routes to one of:
                         Pod IP 1 | Pod IP 2 | Pod IP 3
                                       │
                         CNI (Calico/Flannel) routes to the
                         correct node if cross-node
                                       │
                         Container receives on containerPort</code></pre>

<table>
<thead><tr><th>Component Fails</th><th>Symptom</th><th>Fix</th></tr></thead>
<tbody>
<tr><td>CoreDNS</td><td>DNS resolution fails</td><td>Restart CoreDNS pods</td></tr>
<tr><td>kube-proxy</td><td>Service IPs unreachable</td><td>Restart kube-proxy DaemonSet</td></tr>
<tr><td>CNI plugin</td><td>Cross-node pod comms fail</td><td>Reinstall CNI or check CNI pods</td></tr>
<tr><td>NetworkPolicy</td><td>Specific traffic blocked</td><td>Review/delete blocking policies</td></tr>
</tbody>
</table>

<blockquote><p><strong>Exam tip:</strong> When debugging networking, start from inside the Pod (is the IP reachable?) → Service (DNS + Endpoints?) → Node (CNI routing?) → NetworkPolicy. Don't jump straight to kube-proxy before you have tested DNS.</p></blockquote>
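<p>For step 4 of the workflow: once any NetworkPolicy selects a Pod, all traffic not explicitly allowed to it is dropped, so the fix is usually an allow rule rather than deleting policies. A sketch (namespace and labels are illustrative):</p>

<pre><code class="language-yaml">apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend   # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend            # the Pod being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend       # only these Pods may connect
    ports:
    - protocol: TCP
      port: 8080
</code></pre>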
<h2 id="exam-strategy">3. CKA Exam Strategy</h2>
|
|
69
|
+
|
|
70
|
+
<table>
|
|
71
|
+
<thead><tr><th>Tip</th><th>Chi tiết</th></tr></thead>
|
|
72
|
+
<tbody>
|
|
73
|
+
<tr><td><strong>Switch context ngay</strong></td><td>Mỗi câu chỉ định cluster → <code>kubectl config use-context CLUSTER</code></td></tr>
|
|
74
|
+
<tr><td><strong>Use --dry-run</strong></td><td><code>kubectl create deploy --dry-run=client -o yaml > file.yaml</code> để gen YAML</td></tr>
|
|
75
|
+
<tr><td><strong>Use explain</strong></td><td><code>kubectl explain pod.spec.containers.resources</code> cho field help</td></tr>
|
|
76
|
+
<tr><td><strong>Bookmark fast</strong></td><td>Dùng Kubernetes docs search khi cần YAML template</td></tr>
|
|
77
|
+
<tr><td><strong>Skip hard tasks</strong></td><td>Mark và return, dễ trước (30% troubleshooting = most marks)</td></tr>
|
|
78
|
+
<tr><td><strong>Verify sau khi làm</strong></td><td><code>kubectl get/describe</code> để xác nhận changes worked</td></tr>
|
|
79
|
+
</tbody>
|
|
80
|
+
</table>

<h2 id="kubectl-shortcuts">4. Essential kubectl Shortcuts</h2>

<pre><code class="language-text"># Aliases to save time
alias k=kubectl
alias kgp='kubectl get pods'
alias kgs='kubectl get svc'
alias kns='kubectl config set-context --current --namespace'

# Resource short names
po     = pods
svc    = services
deploy = deployments
ns     = namespaces
cm     = configmaps
pvc    = persistentvolumeclaims
pv     = persistentvolumes
rs     = replicasets
sa     = serviceaccounts
no     = nodes

# Most-used flags
-n NAMESPACE             --namespace
-o wide                  wider output (IP, Node)
-o yaml                  full YAML output
-o jsonpath              extract a specific field
--all-namespaces / -A    search all namespaces</code></pre>
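
<p>A common time-saver on top of these aliases is exporting frequently typed flag groups as shell variables (a community convention, not an exam requirement — the variable names are arbitrary):</p>

<pre><code class="language-text"># Add to ~/.bashrc on the exam terminal
export do='--dry-run=client -o yaml'    # usage: kubectl run nginx --image=nginx $do > pod.yaml
export now='--force --grace-period=0'   # usage: kubectl delete pod broken-pod $now</code></pre>

<p>Leaving <code>$do</code> unquoted lets the shell split it back into the two flags when the command runs.</p>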

<h2 id="exam-commands">5. Must-Know Commands for CKA</h2>

<pre><code class="language-text"># Generate YAML with dry-run
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
kubectl create deployment myapp --image=myapp --replicas=3 --dry-run=client -o yaml

# Extract a field
kubectl get node NODENAME -o jsonpath='{.status.capacity.cpu}'
kubectl get pod PODNAME -o jsonpath='{.status.podIP}'

# Sort by a field
kubectl get events --sort-by='.lastTimestamp'
kubectl get pods --sort-by='.status.startTime'

# Watch resources
kubectl get pods -w

# All namespaces
kubectl get pods -A
kubectl get pods -A | grep CrashLoop</code></pre>
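
<p>jsonpath also supports <code>range</code> loops, which helps when you need one field per object across a whole list (output naturally depends on the cluster):</p>

<pre><code class="language-text"># Name and Pod IP of every Pod, one per line
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'</code></pre>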

<h2 id="cheatsheet">6. CKA Quick Reference</h2>

<table>
<thead><tr><th>Domain (Weight)</th><th>Key Topics</th></tr></thead>
<tbody>
<tr><td>Cluster Architecture (25%)</td><td>kubeadm, static pods, kubeconfig, RBAC</td></tr>
<tr><td>Workloads (15%)</td><td>Deployments, rollout, DaemonSet, StatefulSet, scheduling</td></tr>
<tr><td>Services & Networking (20%)</td><td>Services, Ingress, NetworkPolicy, DNS</td></tr>
<tr><td>Storage (10%)</td><td>PV, PVC, StorageClass, volume mounts</td></tr>
<tr><td><strong>Troubleshooting (30%)</strong></td><td>Node, workload, network debug — highest weight</td></tr>
</tbody>
</table>

<h2 id="practice">7. Practice Questions</h2>

<p><strong>Q1:</strong> A Pod successfully pings another Pod's IP but cannot reach it via Service name. DNS resolution fails. CoreDNS Pods are running. What should you check?</p>
<ul>
<li>A) kube-proxy configuration</li>
<li>B) The Pod's /etc/resolv.conf nameserver entry ✓</li>
<li>C) The node's iptables rules</li>
<li>D) The Service's targetPort</li>
</ul>
<p><em>Explanation: If IP works but DNS doesn't, the Pod isn't using the CoreDNS server. Check /etc/resolv.conf inside the Pod — it should show the kube-dns ClusterIP as nameserver. If not, the Pod's dnsPolicy or dnsConfig may be overriding the default.</em></p>
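
<p>For reference, inside a Pod with the default <code>dnsPolicy: ClusterFirst</code>, <code>/etc/resolv.conf</code> typically looks like the following sketch — the nameserver is the kube-dns Service ClusterIP, so the exact IP varies per cluster (10.96.0.10 is the usual kubeadm default):</p>

<pre><code class="language-text">nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5</code></pre>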

<p><strong>Q2:</strong> During the CKA exam, the first thing you always do when starting a new question is:</p>
<ul>
<li>A) Read Kubernetes documentation for the topic</li>
<li>B) Switch to the correct cluster context using kubectl config use-context ✓</li>
<li>C) Create a backup of the current cluster state</li>
<li>D) Check existing resources in the cluster</li>
</ul>
<p><em>Explanation: CKA uses multiple clusters. Each question specifies a cluster. Always switch context first — working on the wrong cluster will fail the task even if executed perfectly. This is the #1 exam mistake.</em></p>

<p><strong>Q3:</strong> You need to expose a Deployment "webapp" on port 80 to external traffic using a NodePort Service. Which is the fastest approach?</p>
<ul>
<li>A) Write a Service YAML and kubectl apply it</li>
<li>B) kubectl expose deployment webapp --type=NodePort --port=80 ✓</li>
<li>C) kubectl create service nodeport webapp --tcp=80:80</li>
<li>D) Edit the Deployment YAML to add a hostPort</li>
</ul>
<p><em>Explanation: kubectl expose is the fastest — it creates a Service targeting the Deployment's Pods using the same selector. It requires no YAML editing. Option C works but doesn't use the Deployment's existing selector.</em></p>
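
<p>For reference, the Service that <code>kubectl expose deployment webapp --type=NodePort --port=80</code> generates looks roughly like this sketch — the selector is copied from the Deployment (shown here with an assumed <code>app: webapp</code> label), and the nodePort is auto-assigned from the 30000–32767 range:</p>

<pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  type: NodePort
  selector:
    app: webapp        # assumed Deployment label, copied by kubectl expose
  ports:
    - port: 80
      targetPort: 80   # defaults to port when not specified
      # nodePort is assigned automatically unless set explicitly</code></pre>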
|
|
@@ -8,7 +8,7 @@ description: >-
|
|
|
8
8
|
Services & Networking (20%), Workloads & Scheduling (15%), Storage (10%).
|
|
9
9
|
12 bài học kèm bài tập thực hành terminal.
|
|
10
10
|
|
|
11
|
-
featured_image:
|
|
11
|
+
featured_image: images/blog/luyen-thi-cka-banner.png
|
|
12
12
|
level: intermediate
|
|
13
13
|
duration_hours: 35
|
|
14
14
|
lesson_count: 12
|