@xdev-asia/xdev-knowledge-mcp 1.0.44 → 1.0.45
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/content/series/luyen-thi/luyen-thi-cka/chapters/01-cluster-architecture/lessons/01-kien-truc-cka-kubeadm.md +133 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/01-cluster-architecture/lessons/02-cluster-upgrade-kubeadm.md +147 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/01-cluster-architecture/lessons/03-rbac-cka.md +152 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/02-workloads-scheduling/lessons/04-deployments-daemonsets-statefulsets.md +186 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/02-workloads-scheduling/lessons/05-scheduling-taints-affinity.md +163 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/03-services-networking/lessons/06-services-endpoints-coredns.md +145 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/03-services-networking/lessons/07-ingress-networkpolicies-cni.md +172 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/04-storage/lessons/08-persistent-volumes-storageclass.md +159 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/05-troubleshooting/lessons/09-etcd-backup-restore.md +149 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/05-troubleshooting/lessons/10-troubleshooting-nodes.md +153 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/05-troubleshooting/lessons/11-troubleshooting-workloads.md +146 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/05-troubleshooting/lessons/12-troubleshooting-networking-exam.md +170 -0
- package/content/series/luyen-thi/luyen-thi-cka/index.md +1 -1
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/01-app-design-build/lessons/01-multi-container-pods.md +146 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/01-app-design-build/lessons/02-jobs-cronjobs-resources.md +174 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/02-app-deployment/lessons/03-rolling-updates-rollbacks.md +148 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/02-app-deployment/lessons/04-helm-kustomize.md +181 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/03-app-observability/lessons/05-probes-logging-debugging.md +183 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/04-app-environment-config/lessons/06-configmaps-secrets.md +182 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/04-app-environment-config/lessons/07-securitycontext-pod-security.md +168 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/04-app-environment-config/lessons/08-resources-qos.md +168 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/05-services-networking/lessons/09-services-ingress.md +182 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/05-services-networking/lessons/10-networkpolicies-exam-strategy.md +236 -0
- package/content/series/luyen-thi/luyen-thi-ckad/index.md +1 -1
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/01-kubernetes-fundamentals/lessons/01-kien-truc-kubernetes.md +137 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/01-kubernetes-fundamentals/lessons/02-pods-workloads-controllers.md +142 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/01-kubernetes-fundamentals/lessons/03-services-networking-storage.md +155 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/01-kubernetes-fundamentals/lessons/04-rbac-security.md +137 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/02-container-orchestration/lessons/05-container-runtimes-oci.md +137 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/02-container-orchestration/lessons/06-orchestration-patterns.md +147 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/03-cloud-native-architecture/lessons/07-cloud-native-architecture.md +143 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/04-observability-delivery/lessons/08-observability.md +143 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/04-observability-delivery/lessons/09-helm-gitops-cicd.md +162 -0
- package/content/series/luyen-thi/luyen-thi-kcna/index.md +1 -1
- package/data/quizzes.json +1059 -0
- package/package.json +1 -1
@@ -0,0 +1,163 @@
---
id: cka-d2-l05
title: 'Lesson 5: Scheduling — Taints, Tolerations & Affinity'
slug: 05-scheduling-taints-affinity
description: >-
  Node scheduling in depth: Taints and Tolerations, Node Affinity, Pod Affinity,
  Priority, and resource requests in scheduling. Hands-on CKA tasks.
duration_minutes: 55
is_free: true
video_url: null
sort_order: 5
section_title: "Domain 2: Workloads & Scheduling (15%)"
course:
  id: lt-cka-series-001
  title: 'Luyện thi CKA — Certified Kubernetes Administrator'
  slug: luyen-thi-cka
---

<img src="/storage/uploads/2026/04/k8s-cert-cka-bai5-scheduling.png" alt="Taints, Tolerations, and Node Affinity in Kubernetes" style="max-width: 800px; width: 100%; border-radius: 12px;" />

<h2 id="taints-tolerations">1. Taints & Tolerations</h2>

<pre><code class="language-text"># Add taint to node
kubectl taint nodes node1 gpu=true:NoSchedule
kubectl taint nodes node1 gpu=true:PreferNoSchedule
kubectl taint nodes node1 gpu=true:NoExecute

# Remove taint (note the trailing "-")
kubectl taint nodes node1 gpu=true:NoSchedule-

# View taints on node
kubectl describe node node1 | grep -A5 Taints</code></pre>

<table>
<thead><tr><th>Taint Effect</th><th>Behavior</th></tr></thead>
<tbody>
<tr><td><strong>NoSchedule</strong></td><td>Pods without a matching toleration are not scheduled</td></tr>
<tr><td><strong>PreferNoSchedule</strong></td><td>The scheduler tries to avoid the node, but it is not guaranteed</td></tr>
<tr><td><strong>NoExecute</strong></td><td>Evicts existing pods and blocks new ones (tolerationSeconds can delay eviction)</td></tr>
</tbody>
</table>

<pre><code class="language-text"># Pod toleration
spec:
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  # OR tolerate all taints on a node:
  - operator: "Exists"</code></pre>
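<p>A minimal sketch (hypothetical key and values) of how <code>tolerationSeconds</code> interacts with a <code>NoExecute</code> taint: the Pod tolerates the taint for the given number of seconds, then is evicted.</p>

<pre><code class="language-text">spec:
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoExecute"
    tolerationSeconds: 60  # evicted 60s after the taint appears</code></pre>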
<blockquote><p><strong>Exam tip:</strong> Taints/Tolerations = <strong>repulsion</strong> (node pushes pods away, pod tolerates). Node Affinity = <strong>attraction</strong> (pod prefers/requires certain nodes). You usually combine both to guarantee pods run only on the intended nodes.</p></blockquote>
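<p>A minimal sketch of that combination, assuming a node labeled <code>gpu=true</code> and tainted <code>gpu=true:NoSchedule</code> (hypothetical names): the toleration lets the Pod onto the dedicated node, while the required node affinity keeps it off every other node.</p>

<pre><code class="language-text">spec:
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu
            operator: In
            values: ["true"]</code></pre>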
<h2 id="node-affinity">2. Node Affinity</h2>

<pre><code class="language-text">spec:
  affinity:
    nodeAffinity:
      # HARD rule: Pod MUST be on matching node
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: [ssd, nvme]
      # SOFT rule: prefer but not required
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: zone
            operator: In
            values: [us-east-1a]</code></pre>

<table>
<thead><tr><th>Affinity Type</th><th>Scheduling</th><th>Running</th></tr></thead>
<tbody>
<tr><td>requiredDuringScheduling<strong>IgnoredDuring</strong>Execution</td><td>Hard requirement</td><td>Pod stays even if node label removed</td></tr>
<tr><td>preferredDuringScheduling<strong>IgnoredDuring</strong>Execution</td><td>Best effort</td><td>Pod stays even if node label removed</td></tr>
<tr><td>requiredDuringScheduling<strong>RequiredDuring</strong>Execution (future)</td><td>Hard</td><td>Evict if node no longer matches</td></tr>
</tbody>
</table>

<h2 id="pod-affinity">3. Pod Affinity & Anti-Affinity</h2>

<pre><code class="language-text"># Pod anti-affinity: spread pods across nodes
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: frontend
        topologyKey: kubernetes.io/hostname  # 1 pod per node

# Pod affinity: co-locate pods (e.g., app + cache on same node)
spec:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: redis
          topologyKey: kubernetes.io/hostname</code></pre>

<h2 id="node-selector">4. NodeSelector (Simple)</h2>

<pre><code class="language-text"># Label node
kubectl label nodes node1 disktype=ssd

# Use in pod spec
spec:
  nodeSelector:
    disktype: ssd

# Schedule pod on specific node
spec:
  nodeName: node1  # Bypass scheduler entirely</code></pre>

<h2 id="cheatsheet">5. Cheat Sheet</h2>

<table>
<thead><tr><th>Need</th><th>Solution</th></tr></thead>
<tbody>
<tr><td>Dedicate a node to GPU workloads</td><td>Taint node + Pod toleration</td></tr>
<tr><td>Pod must run on SSD nodes</td><td>Node affinity (required) or nodeSelector</td></tr>
<tr><td>Spread pods across nodes</td><td>Pod anti-affinity (required, topologyKey: hostname)</td></tr>
<tr><td>Co-locate app + cache</td><td>Pod affinity (preferred, topologyKey: hostname)</td></tr>
<tr><td>Schedule on a specific node</td><td>spec.nodeName or nodeSelector</td></tr>
</tbody>
</table>

<h2 id="practice">6. Practice Questions</h2>

<p><strong>Q1:</strong> A node has been tainted with <code>dedicated=database:NoExecute</code>. A running Pod without tolerations is on this node. What happens?</p>
<ul>
<li>A) The Pod continues running; NoExecute only affects new Pods</li>
<li>B) The Pod is evicted immediately ✓</li>
<li>C) The Pod is evicted after 5 minutes</li>
<li>D) The Pod gets an error but continues running</li>
</ul>
<p><em>Explanation: NoExecute evicts existing Pods that don't tolerate the taint. The eviction is immediate unless the Pod has a toleration with tolerationSeconds (which allows it to remain for that duration before eviction).</em></p>

<p><strong>Q2:</strong> You want Pods of the "frontend" Deployment to never run on the same node as each other. Which configuration achieves this?</p>
<ul>
<li>A) Node affinity with required rule</li>
<li>B) Taint each node after the first frontend Pod runs</li>
<li>C) Pod anti-affinity with required rule and topologyKey: kubernetes.io/hostname ✓</li>
<li>D) Use DaemonSet instead of Deployment</li>
</ul>
<p><em>Explanation: Pod anti-affinity with requiredDuringScheduling and topologyKey of hostname ensures no two Pods with matching labels land on the same node. This is the preferred way to spread Pods for high availability.</em></p>

<p><strong>Q3:</strong> A Pod has nodeAffinity with "requiredDuringSchedulingIgnoredDuringExecution" targeting nodes with label <code>zone=east</code>. After scheduling, the label is removed from the node. What happens to the running Pod?</p>
<ul>
<li>A) The Pod is immediately evicted</li>
<li>B) The Pod continues running ✓</li>
<li>C) The Pod restarts on a matching node</li>
<li>D) The Pod enters Pending state</li>
</ul>
<p><em>Explanation: "IgnoredDuringExecution" means the affinity rule only applies at scheduling time. Once running, removing the label doesn't affect the Pod. Future replacements (after crash/update) would fail to schedule if no matching node exists.</em></p>

@@ -0,0 +1,145 @@
---
id: cka-d3-l06
title: 'Lesson 6: Services, Endpoints & CoreDNS'
slug: 06-services-endpoints-coredns
description: >-
  Service types in depth, the Endpoints object, kube-proxy modes. CoreDNS
  configuration and DNS troubleshooting. ExternalName, headless services.
duration_minutes: 55
is_free: true
video_url: null
sort_order: 6
section_title: "Domain 3: Services & Networking (20%)"
course:
  id: lt-cka-series-001
  title: 'Luyện thi CKA — Certified Kubernetes Administrator'
  slug: luyen-thi-cka
---

<img src="/storage/uploads/2026/04/k8s-cert-cka-bai6-coredns.png" alt="CoreDNS, kube-proxy, and Service Discovery in Kubernetes" style="max-width: 800px; width: 100%; border-radius: 12px;" />

<h2 id="services-deep">1. Services & Endpoints</h2>

<p>When a Service is created, Kubernetes automatically creates an <strong>Endpoints</strong> object holding the IPs of the Pods that match the selector.</p>

<pre><code class="language-text"># Service → Endpoints → Pods
kubectl get service my-app         # Virtual IP (ClusterIP)
kubectl get endpoints my-app       # List: 10.244.1.2:80, 10.244.1.3:80
kubectl describe endpoints my-app  # Detailed

# If Endpoints is empty → the Service selector doesn't match the Pod labels
# Debug: compare the Service selector with the Pod labels
kubectl get svc my-app -o jsonpath='{.spec.selector}'
kubectl get pods --show-labels | grep app=my-app</code></pre>
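<p>A related sketch (hypothetical names and example IP): a Service created <em>without</em> a selector gets no automatic Endpoints, so you create the Endpoints object yourself — a common pattern for pointing a Service at a backend outside the cluster.</p>

<pre><code class="language-text">apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db   # must match the Service name
subsets:
- addresses:
  - ip: 192.0.2.10    # external backend (example address)
  ports:
  - port: 5432</code></pre>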
<h2 id="coredns">2. CoreDNS Configuration</h2>

<pre><code class="language-text"># CoreDNS runs as Deployment in kube-system
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl get configmap coredns -n kube-system -o yaml

# Default Corefile:
.:53 {
    errors
    health { lameduck 5s }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {  # cluster domain
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf  # Forward non-cluster queries to upstream
    cache 30
    loop
    reload
    loadbalance
}</code></pre>

<blockquote><p><strong>Exam tip:</strong> CoreDNS troubleshooting: 1) Check CoreDNS Pods running, 2) Check kube-dns Service in kube-system, 3) Run <code>kubectl exec -it pod -- nslookup kubernetes</code> to test DNS from inside a pod, 4) Check the Pod's <code>/etc/resolv.conf</code> points to the kube-dns cluster IP.</p></blockquote>

<h2 id="kube-proxy">3. kube-proxy Modes</h2>

<table>
<thead><tr><th>Mode</th><th>Mechanism</th><th>Performance</th></tr></thead>
<tbody>
<tr><td><strong>iptables</strong> (default)</td><td>Linux iptables rules, random pod selection</td><td>Good, O(n) rules</td></tr>
<tr><td><strong>IPVS</strong></td><td>Linux IPVS (kernel, hash-based)</td><td>Better for large clusters</td></tr>
<tr><td><strong>userspace</strong> (deprecated, removed in v1.26)</td><td>User-space proxy</td><td>Slow, legacy</td></tr>
</tbody>
</table>
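<p>The mode is selected in the kube-proxy configuration — on kubeadm clusters this lives in the <code>kube-proxy</code> ConfigMap in <code>kube-system</code>. A minimal fragment, assuming a kubeadm layout:</p>

<pre><code class="language-text"># kubectl -n kube-system edit configmap kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"   # empty string or "iptables" = iptables; restart kube-proxy pods to apply</code></pre>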
<h2 id="headless-services">4. Headless Services</h2>

<pre><code class="language-text"># Headless: clusterIP: None
# DNS returns Pod IPs directly (no virtual IP in between)
apiVersion: v1
kind: Service
metadata:
  name: mysql-headless
spec:
  clusterIP: None
  selector:
    app: mysql
  ports:
  - port: 3306

# DNS behavior:
# mysql-headless → multiple A records (one per Pod IP)
# mysql-0.mysql-headless → specific Pod IP (StatefulSet)</code></pre>

<h2 id="debug-dns">5. DNS Troubleshooting Commands</h2>

<pre><code class="language-text"># Test DNS from inside a pod
kubectl run dns-test --image=busybox --rm -it -- nslookup kubernetes
kubectl run dns-test --image=busybox --rm -it -- nslookup my-service.namespace

# Check resolv.conf inside the pod
kubectl exec -it my-pod -- cat /etc/resolv.conf
# Should show: nameserver 10.96.0.10 (kube-dns service IP)

# Check CoreDNS logs
kubectl logs -n kube-system -l k8s-app=kube-dns

# Check kube-dns service
kubectl get svc -n kube-system kube-dns</code></pre>

<h2 id="cheatsheet">6. Cheat Sheet</h2>

<table>
<thead><tr><th>Problem</th><th>Check</th></tr></thead>
<tbody>
<tr><td>Service cannot reach pods</td><td>kubectl get endpoints NAME</td></tr>
<tr><td>Endpoints empty</td><td>Service selector vs Pod labels mismatch</td></tr>
<tr><td>Pod cannot resolve DNS</td><td>/etc/resolv.conf + CoreDNS pods status</td></tr>
<tr><td>StatefulSet pod DNS</td><td>Needs a headless Service whose name matches serviceName</td></tr>
</tbody>
</table>

<h2 id="practice">7. Practice Questions</h2>

<p><strong>Q1:</strong> A Service is created but traffic never reaches the Pods. The Endpoints object for the Service shows "no endpoints". What is the most likely cause?</p>
<ul>
<li>A) The Service port doesn't match the container port</li>
<li>B) The Service selector labels don't match the Pod labels ✓</li>
<li>C) A NetworkPolicy is blocking traffic</li>
<li>D) The Pods are in a different cluster</li>
</ul>
<p><em>Explanation: Empty Endpoints means the Service can't find any matching Pods. This is caused by a label selector mismatch. Verify with: kubectl get svc myapp -o jsonpath='{.spec.selector}' and compare with kubectl get pods --show-labels.</em></p>

<p><strong>Q2:</strong> A Pod running in namespace "frontend" needs to reach a Service "payments" in namespace "backend". Which DNS name is correct?</p>
<ul>
<li>A) payments</li>
<li>B) payments.backend</li>
<li>C) payments.backend.svc.cluster.local ✓</li>
<li>D) backend.payments.cluster.local</li>
</ul>
<p><em>Explanation: Cross-namespace DNS requires the full namespace: {service}.{namespace}.svc.cluster.local. Short names only work within the same namespace. Both B and C work, but C is the most explicit and reliable form.</em></p>

<p><strong>Q3:</strong> CoreDNS is not responding. What sequence of steps should you follow to diagnose?</p>
<ul>
<li>A) Restart the entire cluster</li>
<li>B) Check CoreDNS Pods running, check kube-dns Service ClusterIP, test from inside a Pod with nslookup ✓</li>
<li>C) Reinstall kube-proxy</li>
<li>D) Recreate the kube-system namespace</li>
</ul>
<p><em>Explanation: Systematic DNS debugging: (1) kubectl get pods -n kube-system -l k8s-app=kube-dns, (2) verify kube-dns Service has ClusterIP, (3) check Pod's /etc/resolv.conf points to that IP, (4) run nslookup from a test pod.</em></p>

@@ -0,0 +1,172 @@
---
id: cka-d3-l07
title: 'Lesson 7: Ingress, Network Policies & CNI'
slug: 07-ingress-networkpolicies-cni
description: >-
  Ingress resources and controllers. Network Policies to isolate traffic.
  CNI plugins: Flannel, Calico, Cilium. Troubleshooting Pod networking.
duration_minutes: 60
is_free: true
video_url: null
sort_order: 7
section_title: "Domain 3: Services & Networking (20%)"
course:
  id: lt-cka-series-001
  title: 'Luyện thi CKA — Certified Kubernetes Administrator'
  slug: luyen-thi-cka
---

<img src="/storage/uploads/2026/04/k8s-cert-cka-bai7-ingress-network.png" alt="Ingress Routing and NetworkPolicy — L7 routing and network segmentation" style="max-width: 800px; width: 100%; border-radius: 12px;" />

<h2 id="ingress">1. Ingress</h2>

<p><strong>Ingress</strong> provides HTTP/HTTPS routing into the cluster. An <strong>Ingress Controller</strong> (nginx-ingress, Traefik, ALB) is required to act on Ingress resources.</p>

<pre><code class="language-text">Internet
   │
[Ingress Controller] (nginx Pod, port 80/443)
   │
   ├── /api/* ──────────────► Service: api-svc:8080
   ├── /web/* ──────────────► Service: web-svc:80
   └── shop.example.com ───► Service: shop-svc:3000</code></pre>

<pre><code class="language-text">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-svc
            port:
              number: 8080
  tls:
  - hosts:
    - api.example.com
    secretName: tls-secret  # TLS cert stored in Secret</code></pre>

<blockquote><p><strong>Exam tip:</strong> An Ingress does nothing without an <strong>Ingress Controller</strong>. The CKA exam usually has a controller pre-installed. Run <code>kubectl get ingressclass</code> to find the class name to use in <code>spec.ingressClassName</code>.</p></blockquote>

<h2 id="network-policies">2. Network Policies — CKA Depth</h2>

<pre><code class="language-text">apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: backend  # Apply to backend pods
  policyTypes:
  - Ingress  # Control incoming traffic
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend  # Only from frontend pods
    ports:
    - protocol: TCP
      port: 8080

---
# Deny all ingress (default deny)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}  # Select ALL pods
  policyTypes:
  - Ingress  # No ingress rules = block all ingress</code></pre>

<table>
<thead><tr><th>Selector</th><th>Meaning</th></tr></thead>
<tbody>
<tr><td><code>podSelector: {}</code></td><td>Selects all Pods in the namespace</td></tr>
<tr><td><code>namespaceSelector</code></td><td>Allows traffic from a specific namespace</td></tr>
<tr><td><code>ipBlock</code></td><td>Allows traffic from a CIDR range</td></tr>
</tbody>
</table>
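<p>One selector pitfall worth a sketch (hypothetical labels): as separate items in the <code>from</code> list, <code>namespaceSelector</code> and <code>podSelector</code> are OR-ed; combined in a single item, they are AND-ed.</p>

<pre><code class="language-text"># OR: any pod in env=prod namespaces, OR any frontend pod in this namespace
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        env: prod
  - podSelector:
      matchLabels:
        app: frontend

# AND: only frontend pods that are in env=prod namespaces
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        env: prod
    podSelector:
      matchLabels:
        app: frontend</code></pre>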
<h2 id="cni">3. CNI Plugins</h2>

<table>
<thead><tr><th>CNI</th><th>Network Policy?</th><th>Characteristics</th></tr></thead>
<tbody>
<tr><td><strong>Flannel</strong></td><td>No</td><td>Simple overlay, VXLAN/host-gw</td></tr>
<tr><td><strong>Calico</strong></td><td>Yes</td><td>BGP routing, high performance, enterprise</td></tr>
<tr><td><strong>Cilium</strong></td><td>Yes (eBPF)</td><td>eBPF-based, L7 policies, observability</td></tr>
<tr><td><strong>Weave Net</strong></td><td>Yes</td><td>Simple, mesh network</td></tr>
</tbody>
</table>

<blockquote><p><strong>Exam tip:</strong> Calico and Cilium support NetworkPolicy; Flannel does not. If the exam asks you to "configure network policies", the cluster must run Calico/Cilium. On a kubeadm lab, Calico is the most common choice.</p></blockquote>

<h2 id="pod-network-debug">4. Pod Network Troubleshooting</h2>

<pre><code class="language-text"># Test pod-to-pod connectivity
kubectl exec -it pod1 -- ping 10.244.1.5
kubectl exec -it pod1 -- curl http://pod2:8080

# Check pod's IP
kubectl get pod pod1 -o jsonpath='{.status.podIP}'

# Check CNI config on node
ls /etc/cni/net.d/
cat /etc/cni/net.d/10-calico.conflist

# Check kube-proxy rules
kubectl get pod -n kube-system -l k8s-app=kube-proxy
iptables -t nat -L KUBE-SERVICES | head -20</code></pre>

<h2 id="cheatsheet">5. Cheat Sheet</h2>

<table>
<thead><tr><th>Task</th><th>Command/Object</th></tr></thead>
<tbody>
<tr><td>HTTP routing into cluster</td><td><strong>Ingress</strong> (+ IngressClass)</td></tr>
<tr><td>Block all traffic to a Pod</td><td>NetworkPolicy with no ingress rules (<code>ingress: []</code>)</td></tr>
<tr><td>Allow traffic from namespace</td><td>NetworkPolicy with namespaceSelector</td></tr>
<tr><td>Check CNI plugins</td><td><code>ls /etc/cni/net.d/</code></td></tr>
<tr><td>Check ingressclass</td><td><code>kubectl get ingressclass</code></td></tr>
</tbody>
</table>

<h2 id="practice">6. Practice Questions</h2>

<p><strong>Q1:</strong> You create an Ingress resource but it has no ADDRESS and traffic doesn't route. What is the most likely cause?</p>
<ul>
<li>A) The Service type must be LoadBalancer</li>
<li>B) No Ingress Controller is installed in the cluster ✓</li>
<li>C) The Ingress must be in the kube-system namespace</li>
<li>D) The IngressClass must be set to "default"</li>
</ul>
<p><em>Explanation: An Ingress resource is just configuration — it has no effect without an Ingress Controller (nginx, Traefik, etc.) to implement it. The controller watches Ingress objects and configures the actual proxy.</em></p>

<p><strong>Q2:</strong> A NetworkPolicy selects Pods with label app=database and specifies policyTypes: [Ingress]. No ingress rules are defined. What is the effect?</p>
<ul>
<li>A) All traffic is allowed (no rules = allow all)</li>
<li>B) All ingress traffic to database Pods is blocked ✓</li>
<li>C) Only egress traffic is affected</li>
<li>D) The policy is invalid and has no effect</li>
</ul>
<p><em>Explanation: A NetworkPolicy with policyTypes: [Ingress] but empty ingress rules acts as a default deny for all ingress traffic to the selected Pods. This is a common way to implement default-deny policies.</em></p>

<p><strong>Q3:</strong> Which CNI plugin should you choose if you need both pod networking AND NetworkPolicy enforcement?</p>
<ul>
<li>A) Flannel</li>
<li>B) Calico ✓</li>
<li>C) CoreDNS</li>
<li>D) kube-proxy</li>
</ul>
<p><em>Explanation: Flannel provides only overlay networking without NetworkPolicy support. Calico (and Cilium, Weave) implement the NetworkPolicy API. CoreDNS is DNS, and kube-proxy handles Service load balancing — they're not CNI plugins.</em></p>

@@ -0,0 +1,159 @@
---
id: cka-d4-l08
title: 'Lesson 8: Persistent Volumes, PVCs & StorageClass'
slug: 08-persistent-volumes-storageclass
description: >-
  Hands-on PersistentVolume, PersistentVolumeClaim, and StorageClass. Dynamic
  provisioning, reclaim policies, volume modes, and access modes for the CKA.
duration_minutes: 55
is_free: true
video_url: null
sort_order: 8
section_title: "Domain 4: Storage (10%)"
course:
  id: lt-cka-series-001
  title: 'Luyện thi CKA — Certified Kubernetes Administrator'
  slug: luyen-thi-cka
---

<img src="/storage/uploads/2026/04/k8s-cert-cka-bai8-storage.png" alt="Persistent Volumes, PVCs, and StorageClass — Static and Dynamic Provisioning" style="max-width: 800px; width: 100%; border-radius: 12px;" />

<h2 id="pv-pvc">1. PV & PVC Lifecycle</h2>

<pre><code class="language-text">Static Provisioning:
  Admin creates PV → Developer creates PVC → K8s binds PVC to matching PV

Dynamic Provisioning:
  Developer creates PVC (with storageClassName) → StorageClass auto-creates PV

Binding criteria:
  ✓ accessMode match
  ✓ storage size: PV >= PVC requested
  ✓ storageClass match (or "")
  ✓ volumeMode match</code></pre>

<h2 id="create-pv">2. Creating a PV and a PVC</h2>

<pre><code class="language-text"># PersistentVolume (static, hostPath for lab use)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-data
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    path: /mnt/data  # Lab only; use NFS/EBS in production

---
# PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: manual</code></pre>

<table>
<thead><tr><th>Reclaim Policy</th><th>Behavior when the PVC is deleted</th></tr></thead>
<tbody>
<tr><td><strong>Retain</strong></td><td>PV keeps the data, phase = Released (admin must delete it manually)</td></tr>
<tr><td><strong>Delete</strong></td><td>PV and the backing storage are deleted automatically</td></tr>
<tr><td><strong>Recycle</strong> (deprecated)</td><td>Files are wiped, PV becomes ready for a new PVC</td></tr>
</tbody>
</table>

<blockquote><p><strong>Exam tip:</strong> PV phase lifecycle: <strong>Available</strong> (no PVC) → <strong>Bound</strong> (PVC bound) → <strong>Released</strong> (PVC deleted, Retain policy) → <strong>Failed</strong>. A Released PV cannot bind a new PVC until an admin manually edits the PV to remove <code>.spec.claimRef</code>.</p></blockquote>
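<p>A command sketch for re-using a Released PV (assumes the <code>pv-data</code> PV above): removing <code>claimRef</code> returns it to Available.</p>

<pre><code class="language-text">kubectl patch pv pv-data --type json \
  -p '[{"op": "remove", "path": "/spec/claimRef"}]'
# Phase goes Released → Available; the PV can bind a new PVC</code></pre>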
|
|
76
|
+
|
|
77
|
+
<h2 id="storageclass">3. StorageClass</h2>

<pre><code class="language-text">apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # Default SC
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp3
  iopsPerGB: "10"
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer  # Delay until Pod scheduled</code></pre>

<table>
<thead><tr><th>volumeBindingMode</th><th>Behavior</th></tr></thead>
<tbody>
<tr><td><strong>Immediate</strong></td><td>PV provisioned as soon as the PVC is created</td></tr>
<tr><td><strong>WaitForFirstConsumer</strong></td><td>Delay until Pod scheduled (ensures same zone)</td></tr>
</tbody>
</table>

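The default-class annotation can also be toggled after creation with <code>kubectl patch</code> (a documented pattern, sketched here against the <code>fast</code> class defined above):

<pre><code class="language-text"># Mark "fast" as the default StorageClass
kubectl patch storageclass fast -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# The default class is shown with "(default)" next to its name
kubectl get storageclass</code></pre>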
<h2 id="pod-with-pvc">4. Using a PVC in a Pod</h2>

<pre><code class="language-text">apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data-volume
          mountPath: /var/data
  volumes:
    - name: data-volume
      persistentVolumeClaim:
        claimName: pvc-data  # Reference PVC name</code></pre>

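After applying the Pod, a few quick checks confirm the claim is Bound and actually mounted; a sketch using the names from this example (the <code>df</code> check assumes the image ships coreutils, as nginx does):

<pre><code class="language-text">kubectl get pvc pvc-data                # STATUS should be Bound
kubectl describe pod app-pod            # Volumes section shows the claim
kubectl exec app-pod -- df -h /var/data # Mounted filesystem inside the container</code></pre>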
<h2 id="cheatsheet">5. Cheat Sheet</h2>

<table>
<thead><tr><th>Task</th><th>Command</th></tr></thead>
<tbody>
<tr><td>List PVs</td><td><code>kubectl get pv</code></td></tr>
<tr><td>List PVCs</td><td><code>kubectl get pvc -n NAMESPACE</code></td></tr>
<tr><td>PVC status</td><td><code>kubectl describe pvc NAME</code></td></tr>
<tr><td>List StorageClasses</td><td><code>kubectl get storageclass</code></td></tr>
<tr><td>Expand PVC</td><td>Edit PVC <code>spec.resources.requests.storage</code> (SC must allow)</td></tr>
</tbody>
</table>

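The expansion task can also be done non-interactively; a minimal sketch, assuming the claim's StorageClass has <code>allowVolumeExpansion: true</code>:

<pre><code class="language-text"># Grow pvc-data from 2Gi to 5Gi (shrinking is not supported)
kubectl patch pvc pvc-data -p '{"spec":{"resources":{"requests":{"storage":"5Gi"}}}}'

# Watch the resize take effect
kubectl get pvc pvc-data -w</code></pre>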
<h2 id="practice">6. Practice Questions</h2>

<p><strong>Q1:</strong> A PVC is stuck in "Pending" state. A PV with 10Gi exists. The PVC requests 5Gi with ReadWriteMany, but the PV only supports ReadWriteOnce. What happens?</p>
<ul>
<li>A) The PVC binds to the PV and uses RWO</li>
<li>B) The PVC remains Pending — no PV matches the requested access mode ✓</li>
<li>C) Kubernetes auto-converts the PV access mode</li>
<li>D) The Pod using the PVC starts but cannot write</li>
</ul>
<p><em>Explanation: PVC binding requires all criteria to match: storage size (PV >= PVC), access mode, storageClass, and volumeMode. If the PV only supports RWO but the PVC needs RWX, they won't bind.</em></p>

<p><strong>Q2:</strong> A PVC bound to a PV is deleted. The PV has reclaimPolicy: Retain. What is the PV's state?</p>
<ul>
<li>A) The PV is deleted</li>
<li>B) The PV transitions to Available and can be reused</li>
<li>C) The PV transitions to Released — data is preserved but the PV can't be rebound automatically ✓</li>
<li>D) The PV is immediately provisioned for another PVC</li>
</ul>
<p><em>Explanation: Retain policy preserves the PV and its data. The PV enters the Released state. To reuse it, an admin must manually clear the old claimRef (<code>kubectl patch pv myPV -p '{"spec":{"claimRef":null}}'</code>) to return it to Available.</em></p>

<p><strong>Q3:</strong> In a cloud environment, WaitForFirstConsumer volumeBindingMode is recommended over Immediate. Why?</p>
<ul>
<li>A) It reduces storage costs</li>
<li>B) It ensures the volume is provisioned in the same availability zone as the scheduled Pod ✓</li>
<li>C) It allows multiple Pods to share the volume</li>
<li>D) It speeds up Pod startup time</li>
</ul>
<p><em>Explanation: With Immediate, the PV might be provisioned in zone-a while the Pod schedules to zone-b (EBS volumes are zone-specific). WaitForFirstConsumer delays provisioning until the scheduler determines which node/zone will host the Pod.</em></p>