@xdev-asia/xdev-knowledge-mcp 1.0.44 → 1.0.46
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/content/series/luyen-thi/luyen-thi-cka/chapters/01-cluster-architecture/lessons/01-kien-truc-cka-kubeadm.md +133 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/01-cluster-architecture/lessons/02-cluster-upgrade-kubeadm.md +147 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/01-cluster-architecture/lessons/03-rbac-cka.md +152 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/02-workloads-scheduling/lessons/04-deployments-daemonsets-statefulsets.md +186 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/02-workloads-scheduling/lessons/05-scheduling-taints-affinity.md +163 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/03-services-networking/lessons/06-services-endpoints-coredns.md +145 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/03-services-networking/lessons/07-ingress-networkpolicies-cni.md +172 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/04-storage/lessons/08-persistent-volumes-storageclass.md +159 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/05-troubleshooting/lessons/09-etcd-backup-restore.md +149 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/05-troubleshooting/lessons/10-troubleshooting-nodes.md +153 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/05-troubleshooting/lessons/11-troubleshooting-workloads.md +146 -0
- package/content/series/luyen-thi/luyen-thi-cka/chapters/05-troubleshooting/lessons/12-troubleshooting-networking-exam.md +170 -0
- package/content/series/luyen-thi/luyen-thi-cka/index.md +7 -7
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/01-app-design-build/lessons/01-multi-container-pods.md +146 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/01-app-design-build/lessons/02-jobs-cronjobs-resources.md +174 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/02-app-deployment/lessons/03-rolling-updates-rollbacks.md +148 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/02-app-deployment/lessons/04-helm-kustomize.md +181 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/03-app-observability/lessons/05-probes-logging-debugging.md +183 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/04-app-environment-config/lessons/06-configmaps-secrets.md +182 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/04-app-environment-config/lessons/07-securitycontext-pod-security.md +168 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/04-app-environment-config/lessons/08-resources-qos.md +168 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/05-services-networking/lessons/09-services-ingress.md +182 -0
- package/content/series/luyen-thi/luyen-thi-ckad/chapters/05-services-networking/lessons/10-networkpolicies-exam-strategy.md +236 -0
- package/content/series/luyen-thi/luyen-thi-ckad/index.md +7 -7
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/01-kubernetes-fundamentals/lessons/01-kien-truc-kubernetes.md +137 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/01-kubernetes-fundamentals/lessons/02-pods-workloads-controllers.md +142 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/01-kubernetes-fundamentals/lessons/03-services-networking-storage.md +155 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/01-kubernetes-fundamentals/lessons/04-rbac-security.md +137 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/02-container-orchestration/lessons/05-container-runtimes-oci.md +137 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/02-container-orchestration/lessons/06-orchestration-patterns.md +147 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/03-cloud-native-architecture/lessons/07-cloud-native-architecture.md +143 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/04-observability-delivery/lessons/08-observability.md +143 -0
- package/content/series/luyen-thi/luyen-thi-kcna/chapters/04-observability-delivery/lessons/09-helm-gitops-cicd.md +162 -0
- package/content/series/luyen-thi/luyen-thi-kcna/index.md +1 -1
- package/data/quizzes.json +1059 -0
- package/package.json +1 -1
@@ -0,0 +1,137 @@
+---
+id: kcna-d2-l05
+title: 'Bài 5: Container Runtimes & OCI Standards'
+slug: 05-container-runtimes-oci
+description: >-
+  OCI (Open Container Initiative), the Container Runtime Interface (CRI).
+  Docker, containerd, CRI-O. Image layers, registries, and the image lifecycle.
+duration_minutes: 50
+is_free: true
+video_url: null
+sort_order: 5
+section_title: "Domain 2: Container Orchestration (22%)"
+course:
+  id: lt-kcna-series-001
+  title: 'Luyện thi KCNA — Kubernetes and Cloud Native Associate'
+  slug: luyen-thi-kcna
+---
+
+<img src="/storage/uploads/2026/04/k8s-cert-kcna-bai5-oci-runtimes.png" alt="OCI Container Runtime Stack — CRI, containerd, runc" style="max-width: 800px; width: 100%; border-radius: 12px;" />
+
+<h2 id="oci">1. OCI — Open Container Initiative</h2>
+
+<p><strong>OCI</strong> is an open governance body (under the Linux Foundation) that defines open standards for containers:</p>
+
+<table>
+<thead><tr><th>Specification</th><th>Defines</th><th>Example implementations</th></tr></thead>
+<tbody>
+<tr><td><strong>OCI Image Spec</strong></td><td>Container image format (layers, manifest)</td><td>Docker image, OCI image</td></tr>
+<tr><td><strong>OCI Runtime Spec</strong></td><td>How to run a container from an image (lifecycle, filesystem)</td><td>runc, crun, kata-containers</td></tr>
+<tr><td><strong>OCI Distribution Spec</strong></td><td>API for pushing/pulling images to/from a registry</td><td>Docker Hub, ECR, GCR</td></tr>
+</tbody>
+</table>
+
+<blockquote><p><strong>Exam tip:</strong> OCI standards guarantee <strong>interoperability</strong>: an image built with Docker runs on containerd or CRI-O without any changes. KCNA often asks about the role of OCI in the cloud native ecosystem.</p></blockquote>
+
+<h2 id="container-runtime">2. Container Runtime Interface (CRI)</h2>
+
+<p>Kubernetes does not talk to Docker or containerd directly. Instead, the kubelet uses <strong>CRI (Container Runtime Interface)</strong>, a standard gRPC API.</p>
+
+<pre><code class="language-text">Kubernetes Architecture (Runtime Layer):
+
+kubelet
+  │ CRI (gRPC)
+  ├─── containerd ─── runc ─── container
+  ├─── CRI-O ─── runc ─── container
+  └─── (Docker) ─── (deprecated v1.24+)
+
+OCI Runtime (runc, crun):
+- Reads the OCI runtime bundle
+- Calls into the Linux kernel (namespaces, cgroups)
+- Creates the container process</code></pre>
+
+<h2 id="runtimes-comparison">3. Container Runtimes Comparison</h2>
+
+<table>
+<thead><tr><th>Runtime</th><th>Type</th><th>Characteristics</th><th>Used in</th></tr></thead>
+<tbody>
+<tr><td><strong>containerd</strong></td><td>High-level (CRI)</td><td>Lightweight, stable, CNCF graduated</td><td>Default in Kubernetes 1.24+</td></tr>
+<tr><td><strong>CRI-O</strong></td><td>High-level (CRI)</td><td>Optimized for Kubernetes, lightweight</td><td>OpenShift, Kubernetes</td></tr>
+<tr><td><strong>Docker Engine</strong></td><td>High-level (non-CRI)</td><td>Deprecated since K8s 1.24 (required dockershim)</td><td>Dev environments</td></tr>
+<tr><td><strong>runc</strong></td><td>Low-level (OCI)</td><td>Reference OCI implementation</td><td>Backend for containerd/CRI-O</td></tr>
+<tr><td><strong>gVisor (runsc)</strong></td><td>Low-level (sandbox)</td><td>Security sandbox, intercepts syscalls</td><td>GKE Sandbox, untrusted workloads</td></tr>
+<tr><td><strong>Kata Containers</strong></td><td>Low-level (VM-based)</td><td>VM isolation per container</td><td>Multi-tenant, high security</td></tr>
+</tbody>
+</table>
+
+<blockquote><p><strong>Exam tip:</strong> Docker was deprecated as a Kubernetes runtime in v1.24, but Docker images (OCI-compatible) still run on containerd/CRI-O. "Docker deprecated" ≠ "Docker images deprecated".</p></blockquote>
+
+<h2 id="image-layers">4. Container Image Layers</h2>
+
+<pre><code class="language-text">Layer architecture:
+┌──────────────────────────────┐
+│ Layer 4: App code (5 MB)     │ ← Writable (container layer)
+├──────────────────────────────┤
+│ Layer 3: npm packages        │ ← Read-only
+├──────────────────────────────┤
+│ Layer 2: Node.js runtime     │ ← Read-only
+├──────────────────────────────┤
+│ Layer 1: Ubuntu base image   │ ← Read-only (shared across images)
+└──────────────────────────────┘
+
+Cache benefit: if Layers 1-2 are already present, only Layers 3-4 are downloaded</code></pre>
+
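The cache benefit described above can be sketched in a few lines. This is an illustrative model only, not a real registry client, and the digest strings are made up:

```python
# Illustrative sketch: a client pulls only the layers whose digests are
# not already in the local content store. Digests are hypothetical.
def layers_to_pull(manifest_layers, local_store):
    """Return the layer digests that must still be downloaded."""
    return [digest for digest in manifest_layers if digest not in local_store]

# Image A was pulled earlier: its base and runtime layers are cached.
local_store = {"sha256:ubuntu-base", "sha256:nodejs-runtime"}

# Image B shares Layers 1-2 with Image A and adds new Layers 3-4.
image_b = ["sha256:ubuntu-base", "sha256:nodejs-runtime",
           "sha256:npm-packages", "sha256:app-code"]

# Only the two new layers are fetched.
print(layers_to_pull(image_b, local_store))
```

This is why basing many images on the same base layers keeps pulls fast and registries small.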
+<h2 id="registries">5. Container Registries</h2>
+
+<table>
+<thead><tr><th>Registry</th><th>Provider</th><th>Characteristics</th></tr></thead>
+<tbody>
+<tr><td>Docker Hub</td><td>Docker Inc.</td><td>Public default, rate-limited pulls</td></tr>
+<tr><td>ECR (Elastic Container Registry)</td><td>AWS</td><td>Private, IAM integrated</td></tr>
+<tr><td>GCR / Artifact Registry</td><td>GCP</td><td>Private, Workload Identity</td></tr>
+<tr><td>GHCR (GitHub Container Registry)</td><td>GitHub</td><td>Package-linked, Actions CI</td></tr>
+<tr><td>Harbor</td><td>CNCF (open source)</td><td>Self-hosted, vulnerability scanning</td></tr>
+</tbody>
+</table>
+
+<h2 id="cheatsheet">6. Cheat Sheet</h2>
+
+<table>
+<thead><tr><th>Exam question</th><th>Answer</th></tr></thead>
+<tbody>
+<tr><td>Which standards does OCI define?</td><td>Image Spec, Runtime Spec, Distribution Spec</td></tr>
+<tr><td>Default runtime in K8s 1.24+?</td><td><strong>containerd</strong></td></tr>
+<tr><td>What is CRI?</td><td>Container Runtime Interface — the gRPC API between kubelet and the runtime</td></tr>
+<tr><td>When was Docker deprecated in K8s?</td><td><strong>v1.24</strong> (dockershim removed)</td></tr>
+<tr><td>Runtime for untrusted workloads?</td><td><strong>gVisor</strong> or <strong>Kata Containers</strong></td></tr>
+</tbody>
+</table>
+
+<h2 id="practice">7. Practice Questions</h2>
+
+<p><strong>Q1:</strong> A Kubernetes cluster uses containerd as the container runtime. A developer pushes a Docker image to Docker Hub. Can this image run on the cluster?</p>
+<ul>
+<li>A) No, Docker images are incompatible with containerd</li>
+<li>B) Yes, because Docker images follow the OCI Image Spec and are compatible ✓</li>
+<li>C) Only if the cluster installs a Docker compatibility shim</li>
+<li>D) No, containerd only supports images from CNCF registries</li>
+</ul>
+<p><em>Explanation: Docker images follow the OCI Image Specification, making them interoperable with any OCI-compliant runtime, including containerd and CRI-O. "Docker deprecated" refers to the runtime, not the image format.</em></p>
+
+<p><strong>Q2:</strong> What is the primary purpose of the Container Runtime Interface (CRI)?</p>
+<ul>
+<li>A) Define image layer formats</li>
+<li>B) Provide a gRPC API for kubelet to communicate with container runtimes ✓</li>
+<li>C) Manage container image distribution between registries</li>
+<li>D) Schedule containers across cluster nodes</li>
+</ul>
+<p><em>Explanation: CRI gives the kubelet a stable API for interacting with different runtimes (containerd, CRI-O) without knowing their implementation details. This decoupling allows switching runtimes without changing kubelet code.</em></p>
+
+<p><strong>Q3:</strong> Which container runtime provides VM-level isolation per container for high-security multi-tenant workloads?</p>
+<ul>
+<li>A) containerd</li>
+<li>B) CRI-O</li>
+<li>C) Kata Containers ✓</li>
+<li>D) runc</li>
+</ul>
+<p><em>Explanation: Kata Containers runs each container inside a lightweight VM, providing stronger isolation than standard Linux namespace-based containers. gVisor provides user-space isolation via syscall interception, which is also strong but takes a different approach.</em></p>
@@ -0,0 +1,147 @@
+---
+id: kcna-d2-l06
+title: 'Bài 6: Container Orchestration Patterns'
+slug: 06-orchestration-patterns
+description: >-
+  Scheduling, auto-scaling (HPA, VPA, Cluster Autoscaler), resource requests
+  and limits, namespaces, multi-tenancy, and Kubernetes upgrade strategies.
+duration_minutes: 55
+is_free: true
+video_url: null
+sort_order: 6
+section_title: "Domain 2: Container Orchestration (22%)"
+course:
+  id: lt-kcna-series-001
+  title: 'Luyện thi KCNA — Kubernetes and Cloud Native Associate'
+  slug: luyen-thi-kcna
+---
+
+<img src="/storage/uploads/2026/04/k8s-cert-kcna-bai6-scheduling.png" alt="Kubernetes Scheduling Pipeline and Autoscaling (HPA, VPA, Cluster Autoscaler)" style="max-width: 800px; width: 100%; border-radius: 12px;" />
+
+<h2 id="scheduling">1. Kubernetes Scheduling</h2>
+
+<p>When a Pod is created, the <strong>kube-scheduler</strong> selects a suitable node in two steps:</p>
+
+<pre><code class="language-text">Scheduling Pipeline:
+New Pod
+  │
+  ▼
+1. FILTERING: remove nodes that do not qualify
+   - Not enough CPU/Memory
+   - Taint without a matching Toleration
+   - Node Affinity does not match
+  │
+  ▼
+2. SCORING: score the remaining nodes
+   - Resource balance
+   - Affinity preferences
+  │
+  ▼
+Bind Pod → highest-scoring node</code></pre>
+
+<table>
+<thead><tr><th>Mechanism</th><th>Purpose</th><th>Example</th></tr></thead>
+<tbody>
+<tr><td><strong>NodeSelector</strong></td><td>Schedule the Pod onto nodes with a given label</td><td><code>disktype: ssd</code></td></tr>
+<tr><td><strong>Affinity/Anti-affinity</strong></td><td>Preferred/required node rules</td><td>Prefer zone-a, avoid the same node as another pod</td></tr>
+<tr><td><strong>Taints & Tolerations</strong></td><td>Repel Pods unless the Pod has a matching Toleration</td><td>Nodes reserved for GPU workloads</td></tr>
+<tr><td><strong>Resource requests</strong></td><td>Minimum CPU/memory needed to schedule</td><td>requests.cpu: 500m</td></tr>
+</tbody>
+</table>
+
+<blockquote><p><strong>Exam tip:</strong> <strong>Taints</strong> go on Nodes (repel pods). <strong>Tolerations</strong> go on Pods (accept a taint). Taint effects: <strong>NoSchedule</strong> (no new pods scheduled), <strong>PreferNoSchedule</strong> (avoid scheduling if possible), <strong>NoExecute</strong> (evict running pods).</p></blockquote>
+
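The filter-then-score pipeline above can be sketched as a toy model. This is not the real kube-scheduler (which uses pluggable filter and score plugins); node and pod shapes here are invented for illustration:

```python
# Toy model of two-phase scheduling: filter infeasible nodes, score the rest.
def schedule(pod, nodes):
    # FILTERING: a node needs enough free CPU, and every taint on it
    # must be tolerated by the pod.
    feasible = [
        n for n in nodes
        if n["free_cpu"] >= pod["cpu"]
        and all(t in pod["tolerations"] for t in n["taints"])
    ]
    if not feasible:
        return None  # Pod stays Pending (Cluster Autoscaler may add a node)
    # SCORING: prefer the node with the most free CPU (resource balance).
    return max(feasible, key=lambda n: n["free_cpu"])["name"]

nodes = [
    {"name": "node-a",   "free_cpu": 2.0, "taints": []},
    {"name": "node-b",   "free_cpu": 4.0, "taints": []},
    {"name": "node-gpu", "free_cpu": 8.0, "taints": ["gpu"]},
]
pod = {"cpu": 1.0, "tolerations": []}
print(schedule(pod, nodes))  # node-b: node-gpu is filtered out by its taint
```

Note how the GPU node, despite having the most free CPU, never reaches the scoring phase: that is exactly the taint/toleration mechanism from the table above.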
+<h2 id="resources">2. Resource Requests & Limits</h2>
+
+<table>
+<thead><tr><th>Setting</th><th>Affects</th><th>If exceeded</th></tr></thead>
+<tbody>
+<tr><td><strong>requests.cpu</strong></td><td>Scheduling (the scheduler uses it to pick a node)</td><td>Throttled (not killed)</td></tr>
+<tr><td><strong>limits.cpu</strong></td><td>Cgroups CPU quota</td><td>CPU throttled</td></tr>
+<tr><td><strong>requests.memory</strong></td><td>Scheduling</td><td>OOM Kill if usage exceeds the limit</td></tr>
+<tr><td><strong>limits.memory</strong></td><td>Cgroups memory limit</td><td>Container is <strong>OOM Killed</strong></td></tr>
+</tbody>
+</table>
+
+<pre><code class="language-text">QoS Classes:
+Guaranteed: requests == limits (best quality, last to be evicted)
+Burstable:  requests < limits (middle)
+BestEffort: no requests, no limits (first to be evicted)</code></pre>
+
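The QoS rules above can be expressed as a small decision function. This is a simplified sketch covering a single container and a single resource; the real rules consider every container and both CPU and memory:

```python
# Simplified derivation of a Pod's QoS class from requests and limits.
def qos_class(requests, limits):
    if requests is None and limits is None:
        return "BestEffort"      # no requests, no limits: evicted first
    if limits is not None and (requests is None or requests == limits):
        # If only limits are set, Kubernetes defaults requests to limits,
        # so the Pod still qualifies as Guaranteed.
        return "Guaranteed"      # evicted last
    return "Burstable"           # anything in between

print(qos_class(None, None))     # BestEffort
print(qos_class("500m", "500m")) # Guaranteed
print(qos_class("250m", "500m")) # Burstable
```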
+<h2 id="autoscaling">3. Auto-scaling</h2>
+
+<table>
+<thead><tr><th>Scaler</th><th>What it scales</th><th>Metric</th></tr></thead>
+<tbody>
+<tr><td><strong>HPA</strong> (Horizontal Pod Autoscaler)</td><td>Number of Pod replicas</td><td>CPU%, Memory%, custom metrics</td></tr>
+<tr><td><strong>VPA</strong> (Vertical Pod Autoscaler)</td><td>CPU/Memory requests of the Pod</td><td>Actual usage history</td></tr>
+<tr><td><strong>Cluster Autoscaler</strong></td><td>Number of nodes in the cluster</td><td>Pending Pods (unschedulable)</td></tr>
+<tr><td><strong>KEDA</strong></td><td>Number of replicas (down to 0)</td><td>Event-driven (queue depth, Kafka)</td></tr>
+</tbody>
+</table>
+
+<pre><code class="language-text">HPA integration:
+metrics-server → kubelet → Node/Pod metrics
+        ↓
+HPA controller (checks every 15s)
+        ↓
+Scale up:   replicas++ (traffic spike)
+Scale down: replicas-- (traffic drops, 5 min cooldown)</code></pre>
+
+<blockquote><p><strong>Exam tip:</strong> HPA needs <strong>metrics-server</strong> to work. VPA and HPA can conflict when they manage the same Deployment — avoid using both on the same resource at the same time (unless they act on different dimensions, e.g. with KEDA).</p></blockquote>
+
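The HPA controller's core decision is a documented formula: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to the min/max bounds. A minimal sketch (the real controller adds a tolerance band and stabilization windows):

```python
import math

# HPA scaling rule: desired = ceil(current * current_metric / target_metric),
# clamped to [min_replicas, max_replicas].
def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 4 replicas averaging 80% CPU against a 50% target:
print(desired_replicas(4, 80, 50))   # 7  (ceil(4 * 80/50) = ceil(6.4))
# Load drops to 20% average CPU:
print(desired_replicas(7, 20, 50))   # 3  (ceil(7 * 20/50) = ceil(2.8))
```

The ceiling and the max-replica clamp explain two exam-relevant behaviors: HPA always rounds up (it would rather over-provision slightly), and it never scales past the configured bounds no matter how high the metric goes.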
+<h2 id="namespaces">4. Namespaces & Multi-tenancy</h2>
+
+<p><strong>Namespaces</strong> provide virtual-cluster isolation: they scope RBAC, ResourceQuota, NetworkPolicy, and DNS resolution.</p>
+
+<table>
+<thead><tr><th>Namespace</th><th>Purpose</th><th>Notes</th></tr></thead>
+<tbody>
+<tr><td><code>default</code></td><td>Objects created without a namespace</td><td>Fine for dev, avoid in prod</td></tr>
+<tr><td><code>kube-system</code></td><td>Kubernetes system components</td><td>CoreDNS, kube-proxy, metrics-server</td></tr>
+<tr><td><code>kube-public</code></td><td>Public, readable by all</td><td>Cluster info ConfigMap</td></tr>
+<tr><td><code>kube-node-lease</code></td><td>Node heartbeat leases</td><td>Kubelet heartbeat performance</td></tr>
+</tbody>
+</table>
+
+<h2 id="cheatsheet">5. Cheat Sheet</h2>
+
+<table>
+<thead><tr><th>Exam question</th><th>Answer</th></tr></thead>
+<tbody>
+<tr><td>Scale the number of Pods based on CPU?</td><td><strong>HPA</strong></td></tr>
+<tr><td>Scale the number of Nodes in the cluster?</td><td><strong>Cluster Autoscaler</strong></td></tr>
+<tr><td>Reserve a node for GPU workloads?</td><td><strong>Taint</strong> + Pod <strong>Toleration</strong></td></tr>
+<tr><td>Container gets OOM Killed, why?</td><td>Exceeded <strong>limits.memory</strong></td></tr>
+<tr><td>Which QoS class is evicted first?</td><td><strong>BestEffort</strong></td></tr>
+</tbody>
+</table>
+
+<h2 id="practice">6. Practice Questions</h2>
+
+<p><strong>Q1:</strong> A node is tainted with <code>key=gpu:NoSchedule</code>. Which Pods can be scheduled on this node?</p>
+<ul>
+<li>A) Any Pod in the cluster</li>
+<li>B) Pods with a matching Toleration for the taint ✓</li>
+<li>C) Pods in the kube-system namespace only</li>
+<li>D) Pods created by cluster administrators only</li>
+</ul>
+<p><em>Explanation: A NoSchedule taint prevents any new Pod from being scheduled on the node UNLESS the Pod specifies a matching toleration. Existing Pods are not evicted (use NoExecute for that).</em></p>
+
+<p><strong>Q2:</strong> An application's Pods keep getting OOM-killed during traffic spikes. What is the most appropriate solution?</p>
+<ul>
+<li>A) Increase Pod CPU requests</li>
+<li>B) Configure HPA to scale based on memory usage ✓</li>
+<li>C) Move the app to a new namespace</li>
+<li>D) Use a StatefulSet instead of a Deployment</li>
+</ul>
+<p><em>Explanation: OOM kills mean memory demand exceeds the limits. Scaling out with HPA (more Pod replicas) distributes the load, reducing per-pod memory pressure. Alternatively, increase the memory limits or use VPA.</em></p>
+
+<p><strong>Q3:</strong> Which Kubernetes component provides the CPU and memory metrics that HPA uses for scaling decisions?</p>
+<ul>
+<li>A) kube-proxy</li>
+<li>B) kube-scheduler</li>
+<li>C) metrics-server ✓</li>
+<li>D) etcd</li>
+</ul>
+<p><em>Explanation: metrics-server is an optional cluster add-on that collects resource metrics (CPU, memory) from kubelets. The HPA controller queries the metrics API exposed by metrics-server to make scaling decisions.</em></p>
@@ -0,0 +1,143 @@
+---
+id: kcna-d3-l07
+title: 'Bài 7: Cloud Native Architecture & Design Patterns'
+slug: 07-cloud-native-architecture
+description: >-
+  Cloud native principles, microservices vs monolith, service mesh, 12-factor
+  app, immutable infrastructure, and cloud native design patterns.
+duration_minutes: 55
+is_free: true
+video_url: null
+sort_order: 7
+section_title: "Domain 3: Cloud Native Architecture (16%)"
+course:
+  id: lt-kcna-series-001
+  title: 'Luyện thi KCNA — Kubernetes and Cloud Native Associate'
+  slug: luyen-thi-kcna
+---
+
+<img src="/storage/uploads/2026/04/k8s-cert-kcna-bai7-cloud-native.png" alt="Cloud Native Architecture — Microservices vs Monolith, 12-Factor App" style="max-width: 800px; width: 100%; border-radius: 12px;" />
+
+<h2 id="cloud-native">1. Cloud Native — the CNCF Definition</h2>
+
+<p>According to the <strong>CNCF (Cloud Native Computing Foundation)</strong>, cloud native is a way to build and run scalable applications in dynamic environments such as public, private, and hybrid clouds, using <strong>containers, microservices, declarative APIs, and immutable infrastructure</strong>.</p>
+
+<table>
+<thead><tr><th>Principle</th><th>Meaning</th><th>Example</th></tr></thead>
+<tbody>
+<tr><td><strong>Containerized</strong></td><td>Package the app together with its dependencies</td><td>Docker image</td></tr>
+<tr><td><strong>Dynamically orchestrated</strong></td><td>Automated scheduling, scaling, healing</td><td>Kubernetes</td></tr>
+<tr><td><strong>Microservices</strong></td><td>Loose coupling, single responsibility</td><td>Auth service, Payment service</td></tr>
+<tr><td><strong>Declarative APIs</strong></td><td>Describe desired state, not steps</td><td>kubectl apply -f deployment.yaml</td></tr>
+<tr><td><strong>Immutable infrastructure</strong></td><td>Never modify running systems; replace them instead</td><td>New image version → rolling update</td></tr>
+</tbody>
+</table>
+
+<h2 id="microservices-vs-monolith">2. Microservices vs Monolith</h2>
+
+<pre><code class="language-text">MONOLITH                     MICROSERVICES
+─────────────────────        ──────────────────────────────
+┌──────────────────┐         ┌────────┐ ┌────────┐ ┌─────┐
+│ Auth   │ UI      │         │ Auth   │ │ Cart   │ │ UI  │
+│ Cart   │ API     │         │Service │ │Service │ │Svc  │
+│ Payment│ DB      │         └────────┘ └────────┘ └─────┘
+└──────────────────┘             │          │         │
+Deploy as 1 unit                 └─── API Gateway ────┘
+                                          │
+                                   Client/Browser</code></pre>
+
+<table>
+<thead><tr><th>Aspect</th><th>Monolith</th><th>Microservices</th></tr></thead>
+<tbody>
+<tr><td>Deployment</td><td>All-or-nothing</td><td>Independent per service</td></tr>
+<tr><td>Scaling</td><td>Scale the entire app</td><td>Scale only the bottleneck service</td></tr>
+<tr><td>Complexity</td><td>Low (single codebase)</td><td>High (distributed)</td></tr>
+<tr><td>Fault isolation</td><td>One bug crashes all</td><td>Failure contained in one service</td></tr>
+<tr><td>Technology</td><td>Single stack</td><td>Polyglot (best tool per service)</td></tr>
+</tbody>
+</table>
+
+<h2 id="12-factor">3. 12-Factor App</h2>
+
+<p>The <strong>12-factor app</strong> methodology defines best practices for cloud native applications:</p>
+
+<table>
+<thead><tr><th>#</th><th>Factor</th><th>Cloud Native Practice</th></tr></thead>
+<tbody>
+<tr><td>1</td><td><strong>Codebase</strong></td><td>1 repo per app, many deploys</td></tr>
+<tr><td>2</td><td><strong>Dependencies</strong></td><td>Declare explicitly (package.json, go.mod)</td></tr>
+<tr><td>3</td><td><strong>Config</strong></td><td>Store in the environment (ConfigMap, Secrets)</td></tr>
+<tr><td>4</td><td><strong>Backing services</strong></td><td>DB, cache = attached resources via URL</td></tr>
+<tr><td>5</td><td><strong>Build/Release/Run</strong></td><td>Strict separation (CI builds, CD deploys)</td></tr>
+<tr><td>6</td><td><strong>Processes</strong></td><td>Stateless processes, store state externally</td></tr>
+<tr><td>7</td><td><strong>Port binding</strong></td><td>Export the service via a port (no web server layer)</td></tr>
+<tr><td>8</td><td><strong>Concurrency</strong></td><td>Scale via the process model (HPA)</td></tr>
+<tr><td>9</td><td><strong>Disposability</strong></td><td>Fast startup, graceful shutdown</td></tr>
+<tr><td>10</td><td><strong>Dev/Prod parity</strong></td><td>Same tools/services across environments</td></tr>
+<tr><td>11</td><td><strong>Logs</strong></td><td>Treat as event streams (stdout, not files)</td></tr>
+<tr><td>12</td><td><strong>Admin processes</strong></td><td>Run one-off admin tasks as Jobs</td></tr>
+</tbody>
+</table>
+
+<blockquote><p><strong>Exam tip:</strong> Factors 3, 6, 9, and 11 appear often in KCNA questions. Factor 3 (config in the environment) → ConfigMap/Secret. Factor 6 (stateless) → the reason to use external storage. Factor 11 (logs as streams) → stdout → log aggregator.</p></blockquote>
+
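Factor 3 is easy to show in code. A minimal sketch: the variable name DATABASE_URL is illustrative, and in Kubernetes it would be injected from a ConfigMap or Secret rather than set by the process itself:

```python
import os

# Factor 3: read configuration from the environment instead of
# hardcoding it in source or baking it into the image.
def get_database_url():
    url = os.environ.get("DATABASE_URL")
    if url is None:
        raise RuntimeError("DATABASE_URL is not set")
    return url

# Simulate what a ConfigMap/Secret env injection would provide:
os.environ["DATABASE_URL"] = "postgres://db.internal:5432/app"
print(get_database_url())
```

Because the value lives outside the artifact, the same image can be promoted from staging to production with only the environment changing, which is the point of the factor.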
+<h2 id="service-mesh">4. Service Mesh</h2>
+
+<p>As the number of microservices grows, you need to manage mTLS, retries, circuit breakers, and observability. A <strong>Service Mesh</strong> solves this by injecting a <strong>sidecar proxy</strong> into each Pod.</p>
+
+<pre><code class="language-text">Without Service Mesh:             With Service Mesh (Istio):
+App A ──────────────► App B      App A ──► [Envoy] ──► [Envoy] ──► App B
+(manual TLS, retry code)                    sidecar      sidecar
+                                  (auto mTLS, metrics, retry, tracing)</code></pre>
+
+<table>
+<thead><tr><th>Feature</th><th>What the Service Mesh provides</th></tr></thead>
+<tbody>
+<tr><td>mTLS mutual authentication</td><td>Auto-encrypts traffic between services</td></tr>
+<tr><td>Traffic management</td><td>Canary, A/B, weighted routing</td></tr>
+<tr><td>Observability</td><td>Automatic metrics, tracing, access logs</td></tr>
+<tr><td>Resilience</td><td>Retry, timeout, circuit breaker</td></tr>
+</tbody>
+</table>
+
+<h2 id="cheatsheet">5. Cheat Sheet</h2>
+
+<table>
+<thead><tr><th>Exam question</th><th>Answer</th></tr></thead>
+<tbody>
+<tr><td>What does the CNCF definition of cloud native include?</td><td>Containers, microservices, declarative APIs, immutable infra</td></tr>
+<tr><td>Where should config live according to 12-factor?</td><td>Environment variables (never hardcoded)</td></tr>
+<tr><td>Logs according to 12-factor?</td><td>Treat as streams (stdout/stderr)</td></tr>
+<tr><td>What does a service mesh inject into a Pod?</td><td><strong>Sidecar proxy</strong> (Envoy)</td></tr>
+<tr><td>Which part of a microservices app do you scale?</td><td>Only the bottleneck service</td></tr>
+</tbody>
+</table>
+
+<h2 id="practice">6. Practice Questions</h2>
+
+<p><strong>Q1:</strong> According to the 12-factor app methodology, how should an application store its database connection string?</p>
+<ul>
+<li>A) Hardcoded in the source code</li>
+<li>B) In a configuration file committed to the repository</li>
+<li>C) As an environment variable (Kubernetes ConfigMap or Secret) ✓</li>
+<li>D) In the container image as a build argument</li>
+</ul>
+<p><em>Explanation: Factor 3 (Config) states: "Store config in the environment." In Kubernetes, this means using ConfigMaps for non-sensitive config and Secrets for sensitive values, injected as environment variables.</em></p>
+
+<p><strong>Q2:</strong> What is the primary benefit of using a Service Mesh in a microservices architecture?</p>
+<ul>
+<li>A) Replace Kubernetes for container orchestration</li>
+<li>B) Provide infrastructure-level networking features (mTLS, retry, observability) without changing application code ✓</li>
+<li>C) Store application configuration</li>
+<li>D) Persist application state across container restarts</li>
+</ul>
+<p><em>Explanation: A service mesh moves cross-cutting concerns (security, observability, resilience) to the infrastructure layer via sidecar proxies. Developers do not need to implement retry logic or mTLS in each service.</em></p>
+
+<p><strong>Q3:</strong> Which characteristic distinguishes "immutable infrastructure" from traditional infrastructure?</p>
+<ul>
+<li>A) Servers are never rebooted</li>
+<li>B) Running systems are replaced rather than modified in-place ✓</li>
+<li>C) Configuration changes require manual approval</li>
+<li>D) Infrastructure is defined using only YAML files</li>
+</ul>
+<p><em>Explanation: Immutable infrastructure means you never update or patch a running container — you build a new image, deploy it, and replace the old containers. This eliminates configuration drift and improves repeatability.</em></p>
@@ -0,0 +1,143 @@
---
id: kcna-d4-l08
title: 'Lesson 8: Cloud Native Observability'
slug: 08-observability
description: >-
  The three pillars of observability: metrics, logs, and traces. Prometheus,
  Grafana, OpenTelemetry, Jaeger, Loki, and observability in Kubernetes.
duration_minutes: 55
is_free: true
video_url: null
sort_order: 8
section_title: "Domain 4: Cloud Native Observability & Security (16%)"
course:
  id: lt-kcna-series-001
  title: 'Luyện thi KCNA — Kubernetes and Cloud Native Associate'
  slug: luyen-thi-kcna
---

<img src="/storage/uploads/2026/04/k8s-cert-kcna-bai8-observability.png" alt="Three Pillars of Observability — Metrics, Logs, Traces" style="max-width: 800px; width: 100%; border-radius: 12px;" />

<h2 id="three-pillars">1. The Three Pillars of Observability</h2>

<p>Observability is the ability to understand the internal state of a system from the signals it emits. It rests on three pillars:</p>

<table>
<thead><tr><th>Pillar</th><th>What it is</th><th>Question it answers</th><th>Tools</th></tr></thead>
<tbody>
<tr><td><strong>Metrics</strong></td><td>Numeric values aggregated over time</td><td>"What state is the system in?"</td><td>Prometheus + Grafana</td></tr>
<tr><td><strong>Logs</strong></td><td>Text event records from each service</td><td>"What happened?"</td><td>Loki, Elasticsearch, Fluentd</td></tr>
<tr><td><strong>Traces</strong></td><td>A request's path across multiple services</td><td>"Where did the request go and how long did each step take?"</td><td>Jaeger, Zipkin, Tempo</td></tr>
</tbody>
</table>

<pre><code class="language-text">User request fails → use the 3 pillars:

METRICS: CPU spike at 14:05?
LOGS:    Error "DB timeout" in service B
TRACES:  Request A→B→C, step B took 8s

→ Root cause: Service B DB connection pool exhausted</code></pre>

<blockquote><p><strong>Exam tip:</strong> KCNA often asks which tool serves each pillar. Prometheus = metrics. Grafana = visualization. Jaeger = distributed tracing. Loki = log aggregation.</p></blockquote>

<h2 id="prometheus">2. Prometheus &amp; Metrics</h2>

<p><strong>Prometheus</strong> is a CNCF graduated project for monitoring and alerting. It is pull-based: the Prometheus server scrapes metrics from its targets.</p>

<pre><code class="language-text">Prometheus Architecture:
App (exposes /metrics)
   ↑ scrape
Prometheus Server ──► Alertmanager ──► Slack/PagerDuty
   │
Grafana (query PromQL → charts)</code></pre>
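
<p>The <code>/metrics</code> endpoint in the diagram serves plain text in the Prometheus exposition format. A minimal Python sketch of what a scrape returns (metric names follow real Prometheus conventions, but the values and the <code>render_metrics</code> helper are invented for illustration):</p>

```python
def render_metrics(metrics):
    """Render (name, type, value) tuples in the Prometheus text exposition format."""
    lines = []
    for name, mtype, value in metrics:
        lines.append(f"# TYPE {name} {mtype}")  # type hint consumed by the scraper
        lines.append(f"{name} {value}")          # sample line: <name> <value>
    return "\n".join(lines) + "\n"

# Body that a scrape of this toy endpoint would return:
body = render_metrics([
    ("http_requests_total", "counter", 1027),
    ("memory_usage_bytes", "gauge", 52428800),
])
print(body)
```

<p>Prometheus simply issues an HTTP GET against this endpoint on each scrape interval; no agent push is required.</p>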

<table>
<thead><tr><th>Metric Type</th><th>Meaning</th><th>Example</th></tr></thead>
<tbody>
<tr><td><strong>Counter</strong></td><td>Only increases (resets on restart)</td><td>http_requests_total</td></tr>
<tr><td><strong>Gauge</strong></td><td>Goes up and down freely</td><td>memory_usage_bytes</td></tr>
<tr><td><strong>Histogram</strong></td><td>Distribution, quantiles</td><td>request_duration_seconds</td></tr>
<tr><td><strong>Summary</strong></td><td>Pre-computed quantiles</td><td>response_size_summary</td></tr>
</tbody>
</table>
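
<p>The Counter-versus-Gauge distinction in the table can be sketched with two toy classes. This is illustrative only; a real application would use a client library such as <code>prometheus_client</code> rather than these hand-rolled classes:</p>

```python
class Counter:
    """Monotonic metric: can only go up, like http_requests_total."""
    def __init__(self):
        self.value = 0

    def inc(self, amount=1):
        if amount < 0:
            raise ValueError("counters can only increase")
        self.value += amount

class Gauge:
    """Free-moving metric: can go up and down, like memory_usage_bytes."""
    def __init__(self):
        self.value = 0

    def set(self, value):
        self.value = value

requests = Counter()
requests.inc()
requests.inc(3)      # 4 requests served since startup

memory = Gauge()
memory.set(300)
memory.set(120)      # usage can drop again; a counter never could
```

<p>This monotonic property is why PromQL functions like <code>rate()</code> only make sense on counters: the per-second rate of a value that can move in both directions is not well defined.</p>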

<h2 id="opentelemetry">3. OpenTelemetry (OTel)</h2>

<p><strong>OpenTelemetry</strong> is the CNCF standard for collecting telemetry (metrics, logs, traces) with a vendor-neutral SDK and Collector.</p>

<pre><code class="language-text">OpenTelemetry Flow:
App (instrumented with OTel SDK)
   │ OTLP (protocol)
OTel Collector (receive, process, export)
   │
   ┌────┴────┐
Jaeger   Prometheus   Loki
(traces) (metrics)    (logs)</code></pre>

<blockquote><p><strong>Exam tip:</strong> OpenTelemetry keeps vendor-specific code out of your apps: to switch from Jaeger to Zipkin you only change the OTel Collector config, not the application code.</p></blockquote>
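
<p>What a tracing backend such as Jaeger ultimately shows (which hop in a trace is slow) can be sketched with plain dictionaries standing in for spans. This is a toy model, not the real OTel SDK API, and all service names and timings are invented:</p>

```python
# One trace = the list of spans produced by a single request as it
# crosses services. Each span records who handled it and for how long.
trace = [
    {"service": "frontend", "operation": "GET /checkout", "duration_ms": 40},
    {"service": "cart",     "operation": "get_items",     "duration_ms": 35},
    {"service": "payments", "operation": "charge_card",   "duration_ms": 8000},
]

slowest = max(trace, key=lambda span: span["duration_ms"])
total_ms = sum(span["duration_ms"] for span in trace)
print(f"trace total {total_ms} ms, slowest hop: {slowest['service']}")
```

<p>In a real deployment the SDK attaches a shared trace ID to every span so the backend can reassemble this list from spans emitted by different services.</p>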

<h2 id="k8s-observability">4. Observability in Kubernetes</h2>

<table>
<thead><tr><th>Component</th><th>Provides</th></tr></thead>
<tbody>
<tr><td><strong>kubelet /metrics</strong></td><td>Node resource metrics for Prometheus</td></tr>
<tr><td><strong>metrics-server</strong></td><td>CPU/Memory for kubectl top and the HPA</td></tr>
<tr><td><strong>kube-state-metrics</strong></td><td>Kubernetes object state (Pod, Deployment status)</td></tr>
<tr><td><strong>Prometheus Operator</strong></td><td>Deploys the Prometheus stack via CRDs (ServiceMonitor)</td></tr>
<tr><td><strong>Loki + Promtail</strong></td><td>Log aggregation (Promtail collects logs from the nodes)</td></tr>
</tbody>
</table>

<h3 id="kubectl-debug">kubectl debugging commands</h3>

<pre><code class="language-text">kubectl logs pod-name              # Current container logs
kubectl logs pod-name --previous   # Logs from the last crashed container
kubectl logs -f pod-name           # Stream live logs
kubectl describe pod pod-name      # Events + status details
kubectl top pod                    # CPU/Memory (needs metrics-server)
kubectl top node                   # Node resource usage</code></pre>

<h2 id="cheatsheet">5. Cheat Sheet</h2>

<table>
<thead><tr><th>Exam question</th><th>Answer</th></tr></thead>
<tbody>
<tr><td>3 pillars of observability?</td><td><strong>Metrics, Logs, Traces</strong></td></tr>
<tr><td>Distributed tracing tools?</td><td><strong>Jaeger</strong>, Zipkin, Tempo</td></tr>
<tr><td>Kubernetes metrics collection?</td><td><strong>Prometheus</strong></td></tr>
<tr><td>Visualization dashboard?</td><td><strong>Grafana</strong></td></tr>
<tr><td>Vendor-neutral telemetry standard?</td><td><strong>OpenTelemetry</strong></td></tr>
<tr><td>What does kubectl top need?</td><td><strong>metrics-server</strong></td></tr>
</tbody>
</table>

<h2 id="practice">6. Practice Questions</h2>

<p><strong>Q1:</strong> A team needs to trace how a single HTTP request flows through 5 microservices to find which service adds the most latency. Which observability tool should they use?</p>
<ul>
<li>A) Prometheus</li>
<li>B) Grafana</li>
<li>C) Jaeger ✓</li>
<li>D) Loki</li>
</ul>
<p><em>Explanation: Distributed tracing (Jaeger, Zipkin) tracks a request's entire flow across multiple services, showing each hop's latency and relationships. Prometheus shows aggregate metrics; Loki shows logs; Grafana is visualization.</em></p>

<p><strong>Q2:</strong> What type of Prometheus metric would you use to track the total number of HTTP requests served since startup?</p>
<ul>
<li>A) Gauge</li>
<li>B) Histogram</li>
<li>C) Counter ✓</li>
<li>D) Summary</li>
</ul>
<p><em>Explanation: A Counter is a monotonically increasing metric: it only goes up (or resets to 0 on restart). That makes it the right choice for cumulative events like requests, errors, or bytes transferred. A Gauge is for values that go up and down (like memory usage).</em></p>

<p><strong>Q3:</strong> Which framework allows developers to instrument their application once and export telemetry to multiple backends (Jaeger, Prometheus, etc.) without code changes?</p>
<ul>
<li>A) Prometheus client libraries</li>
<li>B) OpenTelemetry ✓</li>
<li>C) Kubernetes metrics-server</li>
<li>D) Grafana Agent</li>
</ul>
<p><em>Explanation: OpenTelemetry provides vendor-neutral APIs and SDKs for generating traces, metrics, and logs. The OTel Collector routes telemetry to different backends. Switching backends requires only Collector config changes, not application code.</em></p>