@xdev-asia/xdev-knowledge-mcp 1.0.43 → 1.0.44
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/content/pages/xoa-du-lieu-nguoi-dung.md +68 -0
- package/content/series/luyen-thi/luyen-thi-aws-ml-specialty/chapters/01-phan-1-data-engineering/lessons/01-bai-1-data-repositories-ingestion.md +5 -0
- package/content/series/luyen-thi/luyen-thi-aws-ml-specialty/chapters/01-phan-1-data-engineering/lessons/02-bai-2-data-transformation.md +5 -0
- package/content/series/luyen-thi/luyen-thi-aws-ml-specialty/chapters/01-phan-1-data-engineering/lessons/03-bai-3-data-analysis.md +159 -0
- package/content/series/luyen-thi/luyen-thi-aws-ml-specialty/chapters/02-phan-2-modeling/lessons/04-bai-4-sagemaker-built-in-algorithms.md +186 -0
- package/content/series/luyen-thi/luyen-thi-aws-ml-specialty/chapters/02-phan-2-modeling/lessons/05-bai-5-training-hyperparameter-tuning.md +159 -0
- package/content/series/luyen-thi/luyen-thi-aws-ml-specialty/chapters/02-phan-2-modeling/lessons/06-bai-6-model-evaluation.md +169 -0
- package/content/series/luyen-thi/luyen-thi-aws-ml-specialty/chapters/03-phan-3-implementation-operations/lessons/07-bai-7-model-deployment.md +193 -0
- package/content/series/luyen-thi/luyen-thi-aws-ml-specialty/chapters/03-phan-3-implementation-operations/lessons/08-bai-8-model-monitoring-mlops.md +184 -0
- package/content/series/luyen-thi/luyen-thi-aws-ml-specialty/chapters/03-phan-3-implementation-operations/lessons/09-bai-9-security-cost.md +166 -0
- package/content/series/luyen-thi/luyen-thi-aws-ml-specialty/chapters/04-phan-4-on-tap/lessons/10-bai-10-bai-toan-thuong-gap.md +181 -0
- package/content/series/luyen-thi/luyen-thi-aws-ml-specialty/chapters/04-phan-4-on-tap/lessons/11-bai-11-cheat-sheet.md +110 -0
- package/content/series/luyen-thi/luyen-thi-aws-ml-specialty/chapters/04-phan-4-on-tap/lessons/12-bai-12-chien-luoc-thi.md +113 -0
- package/content/series/luyen-thi/luyen-thi-aws-ml-specialty/index.md +1 -1
- package/content/series/luyen-thi/luyen-thi-cka/index.md +217 -0
- package/content/series/luyen-thi/luyen-thi-ckad/index.md +199 -0
- package/content/series/luyen-thi/luyen-thi-gcp-ml-engineer/chapters/01-phan-1-problem-framing/lessons/01-bai-1-framing-ml-problems.md +136 -0
- package/content/series/luyen-thi/luyen-thi-gcp-ml-engineer/chapters/01-phan-1-problem-framing/lessons/02-bai-2-gcp-ai-ml-ecosystem.md +160 -0
- package/content/series/luyen-thi/luyen-thi-gcp-ml-engineer/chapters/02-phan-2-data-engineering/lessons/03-bai-3-data-pipeline.md +174 -0
- package/content/series/luyen-thi/luyen-thi-gcp-ml-engineer/chapters/02-phan-2-data-engineering/lessons/04-bai-4-feature-engineering.md +156 -0
- package/content/series/luyen-thi/luyen-thi-gcp-ml-engineer/chapters/03-phan-3-model-development/lessons/05-bai-5-vertex-ai-training.md +155 -0
- package/content/series/luyen-thi/luyen-thi-gcp-ml-engineer/chapters/03-phan-3-model-development/lessons/06-bai-6-bigquery-ml-tensorflow.md +141 -0
- package/content/series/luyen-thi/luyen-thi-gcp-ml-engineer/chapters/04-phan-4-deployment-mlops/lessons/07-bai-7-model-deployment.md +134 -0
- package/content/series/luyen-thi/luyen-thi-gcp-ml-engineer/chapters/04-phan-4-deployment-mlops/lessons/08-bai-8-vertex-ai-pipelines-mlops.md +149 -0
- package/content/series/luyen-thi/luyen-thi-gcp-ml-engineer/chapters/05-phan-5-responsible-ai/lessons/09-bai-9-responsible-ai.md +128 -0
- package/content/series/luyen-thi/luyen-thi-gcp-ml-engineer/chapters/05-phan-5-responsible-ai/lessons/10-bai-10-cheat-sheet-chien-luoc-thi.md +108 -0
- package/content/series/luyen-thi/luyen-thi-gcp-ml-engineer/index.md +1 -1
- package/content/series/luyen-thi/luyen-thi-kcna/index.md +168 -0
- package/package.json +1 -1
package/content/series/luyen-thi/luyen-thi-gcp-ml-engineer/chapters/01-phan-1-problem-framing/lessons/02-bai-2-gcp-ai-ml-ecosystem.md
@@ -0,0 +1,160 @@
---
id: 019c9619-lt03-l02
title: 'Lesson 2: GCP AI/ML Ecosystem Overview'
slug: bai-2-gcp-ai-ml-ecosystem
description: >-
  Vertex AI platform overview. AutoML vs. Custom Training.
  BigQuery ML. Pre-trained APIs (Vision, NLP, Translation).
  When to use which service: a decision tree.
duration_minutes: 50
is_free: true
video_url: null
sort_order: 2
section_title: "Part 1: ML Problem Framing & Architecture"
course:
  id: 019c9619-lt03-7003-c003-lt0300000003
  title: 'Google Cloud Professional Machine Learning Engineer Exam Prep'
  slug: luyen-thi-gcp-ml-engineer
---

<div style="text-align: center; margin: 2rem 0;">
<img src="/storage/uploads/2026/04/gcp-mle-bai2-gcp-ecosystem.png" alt="GCP AI/ML Ecosystem" style="max-width: 800px; width: 100%; border-radius: 12px;" />
<p><em>The GCP AI/ML ecosystem: Vertex AI, AutoML, BigQuery ML, pre-trained APIs, and when to use each</em></p>
</div>

<h2 id="gcp-ml-landscape"><strong>1. GCP ML Landscape Overview</strong></h2>

<pre><code class="language-text">GCP ML Capability Spectrum:

LOW CODE ◄────────────────────────────────────► HIGH CONTROL
        │                     │                     │
        ▼                     ▼                     ▼
Pre-trained APIs      Vertex AI AutoML      Custom Training
(Vision, NLP,         (no code needed,      (full control,
 Translation)          you bring data)       you bring code)
        │                     │                     │
No ML expertise       Some domain           ML expertise
needed                expertise             required

BigQuery ML ────── SQL interface for ML on warehouse data
</code></pre>

<h2 id="vertex-ai"><strong>2. Vertex AI — Unified ML Platform</strong></h2>

<p>Vertex AI is GCP's unified platform for the entire ML lifecycle. Knowing its components well is mandatory for the exam.</p>

<table>
<thead><tr><th>Component</th><th>Purpose</th></tr></thead>
<tbody>
<tr><td><strong>Vertex AI Workbench</strong></td><td>Managed Jupyter notebooks for data scientists</td></tr>
<tr><td><strong>Vertex AI Training</strong></td><td>Custom training jobs (CPUs, GPUs, TPUs)</td></tr>
<tr><td><strong>Vertex AI AutoML</strong></td><td>No-code model training (Tabular, Image, Text, Video)</td></tr>
<tr><td><strong>Vertex AI Endpoints</strong></td><td>Deploy models for online prediction</td></tr>
<tr><td><strong>Vertex AI Batch Prediction</strong></td><td>Asynchronous batch scoring</td></tr>
<tr><td><strong>Vertex AI Feature Store</strong></td><td>Serve features consistently across training/serving</td></tr>
<tr><td><strong>Vertex AI Pipelines</strong></td><td>Kubeflow Pipelines-based ML workflow orchestration</td></tr>
<tr><td><strong>Vertex AI Experiments</strong></td><td>Track runs, compare metrics</td></tr>
<tr><td><strong>Vertex AI Model Registry</strong></td><td>Version control for models</td></tr>
<tr><td><strong>Vertex AI Model Monitoring</strong></td><td>Detect feature skew and prediction drift</td></tr>
</tbody>
</table>

<h2 id="automl-vs-custom"><strong>3. AutoML vs. Custom Training</strong></h2>

<table>
<thead><tr><th>Criteria</th><th>AutoML</th><th>Custom Training</th></tr></thead>
<tbody>
<tr><td>ML expertise needed</td><td>Minimal</td><td>Required</td></tr>
<tr><td>Training time</td><td>Hours (automated)</td><td>Variable (you control)</td></tr>
<tr><td>Model interpretability</td><td>Limited</td><td>Full control</td></tr>
<tr><td>Cost</td><td>Higher per model</td><td>Pay per compute used</td></tr>
<tr><td>Best for</td><td>Quick prototypes, standard tasks</td><td>Custom architectures, research</td></tr>
<tr><td>Supported data types</td><td>Tabular, Image, Text, Video</td><td>Any (you write the code)</td></tr>
</tbody>
</table>

<blockquote>
<p><strong>Exam tip:</strong> Questions mentioning "team doesn't have ML expertise" or "fastest time to deployment" → AutoML. Questions mentioning "custom neural architecture" or "full control over the training loop" → Custom Training.</p>
</blockquote>

<h2 id="bigquery-ml"><strong>4. BigQuery ML</strong></h2>

<p>BigQuery ML lets you train and serve ML models with SQL, without exporting data out of BigQuery.</p>

<table>
<thead><tr><th>Model Type</th><th>SQL Keyword</th><th>Use Case</th></tr></thead>
<tbody>
<tr><td>Linear Regression</td><td>LINEAR_REG</td><td>Price prediction</td></tr>
<tr><td>Logistic Regression</td><td>LOGISTIC_REG</td><td>Classification</td></tr>
<tr><td>K-Means Clustering</td><td>KMEANS</td><td>Customer segmentation</td></tr>
<tr><td>XGBoost</td><td>BOOSTED_TREE_CLASSIFIER/REGRESSOR</td><td>Tabular classification/regression</td></tr>
<tr><td>Deep Neural Network</td><td>DNN_CLASSIFIER/DNN_REGRESSOR</td><td>Complex patterns</td></tr>
<tr><td>Matrix Factorization</td><td>MATRIX_FACTORIZATION</td><td>Recommendations</td></tr>
<tr><td>Imported TF models</td><td>TENSORFLOW</td><td>Custom TF models</td></tr>
</tbody>
</table>
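The keywords above plug into the `model_type` option of `CREATE MODEL`. As a sketch only (the `analytics` dataset and the table and column names are hypothetical), a Python client might assemble the training and scoring statements like this:

```python
# Sketch: assembling BigQuery ML statements as strings. Dataset, table, and
# column names are hypothetical; the model_type value comes from the table
# above (LOGISTIC_REG for binary classification such as churn).

def create_model_sql(dataset: str, model: str, label: str, source: str) -> str:
    """Build a CREATE MODEL statement for a logistic-regression model."""
    return (
        f"CREATE OR REPLACE MODEL `{dataset}.{model}` "
        f"OPTIONS(model_type='LOGISTIC_REG', input_label_cols=['{label}']) "
        f"AS SELECT * FROM `{dataset}.{source}`"
    )

def predict_sql(dataset: str, model: str, source: str) -> str:
    """Build an ML.PREDICT query against the trained model."""
    return (
        f"SELECT * FROM ML.PREDICT(MODEL `{dataset}.{model}`, "
        f"(SELECT * FROM `{dataset}.{source}`))"
    )

training_sql = create_model_sql("analytics", "churn_model", "churned", "training_data")
scoring_sql = predict_sql("analytics", "churn_model", "new_customers")
```

In practice these strings would be submitted with the BigQuery client, e.g. `google.cloud.bigquery.Client().query(training_sql)`.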

<h2 id="pre-trained-apis"><strong>5. Pre-trained AI APIs</strong></h2>

<table>
<thead><tr><th>API</th><th>Capabilities</th><th>Use Case</th></tr></thead>
<tbody>
<tr><td><strong>Cloud Vision API</strong></td><td>Labels, OCR, faces, logos, safe search</td><td>Image analysis without training</td></tr>
<tr><td><strong>Cloud Natural Language API</strong></td><td>Entities, sentiment, syntax, categories</td><td>Text analytics</td></tr>
<tr><td><strong>Cloud Translation API</strong></td><td>100+ language pairs</td><td>Multi-language content</td></tr>
<tr><td><strong>Cloud Speech-to-Text</strong></td><td>Transcription, speaker diarization</td><td>Audio processing</td></tr>
<tr><td><strong>Cloud Text-to-Speech</strong></td><td>WaveNet voices, SSML</td><td>Voice UI, accessibility</td></tr>
<tr><td><strong>Document AI</strong></td><td>Form parsing, invoice extraction</td><td>Document automation</td></tr>
<tr><td><strong>Recommendations AI</strong></td><td>Real-time product recommendations</td><td>E-commerce personalization</td></tr>
</tbody>
</table>

<h2 id="decision-tree"><strong>6. Service Selection Decision Tree</strong></h2>

<pre><code class="language-text">WHICH GCP ML SERVICE?

Do you have LABELED DATA?
│
├── NO → Pre-trained API sufficient for your task (Vision, NLP)?
│         YES → Use Pre-trained API
│         NO  → Vertex AI Custom Training (unsupervised)
│
└── YES → Is your data already IN BigQuery?
          │
          ├── YES → BigQuery ML (SQL-based, fast, no export)
          │
          └── NO → Need rapid prototyping, no ML team?
                   │
                   ├── YES → Vertex AI AutoML
                   │
                   └── NO → Vertex AI Custom Training
</code></pre>
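The same branching can be written down as a small function, which makes the heuristic easy to check mechanically. This is an illustrative sketch of the diagram above, not an official selection algorithm:

```python
# Encodes the decision tree above. Each argument mirrors one question in
# the diagram; the return value is the recommended service.

def pick_service(labeled_data: bool, pretrained_api_fits: bool = False,
                 data_in_bigquery: bool = False, rapid_no_ml_team: bool = False) -> str:
    if not labeled_data:
        # No labels: either a pre-trained API covers the task, or you are
        # in unsupervised territory with custom training.
        return "Pre-trained API" if pretrained_api_fits else "Vertex AI Custom Training"
    if data_in_bigquery:
        return "BigQuery ML"          # SQL-based, no export needed
    if rapid_no_ml_team:
        return "Vertex AI AutoML"     # fast prototype, minimal expertise
    return "Vertex AI Custom Training"
```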

<h2 id="practice"><strong>7. Practice Questions</strong></h2>

<p><strong>Q1:</strong> A data analytics team has petabytes of customer transaction data in BigQuery. They want to build a churn prediction model using their existing SQL skills without data exports. Which approach is BEST?</p>
<ul>
<li>A) Export to Cloud Storage, then use Vertex AI Custom Training</li>
<li>B) Use Cloud Natural Language API</li>
<li>C) Use BigQuery ML with CREATE MODEL (model_type='LOGISTIC_REG') ✓</li>
<li>D) Use Vertex AI AutoML Tabular</li>
</ul>
<p><em>Explanation: BigQuery ML allows training classification models directly on BigQuery data using SQL, leveraging existing data infrastructure and skills without exporting data. This is the fastest path when data is already in BigQuery.</em></p>

<p><strong>Q2:</strong> A small startup needs to add sentiment analysis to customer reviews. They have no ML team and no labeled sentiment data. Which solution requires the LEAST effort?</p>
<ul>
<li>A) Vertex AI AutoML Text Sentiment</li>
<li>B) Train a custom BERT model on Vertex AI</li>
<li>C) Cloud Natural Language API sentiment analysis ✓</li>
<li>D) BigQuery ML DNN classifier</li>
</ul>
<p><em>Explanation: Cloud Natural Language API is a pre-trained, fully managed service that requires no training data, no ML expertise, and no infrastructure setup. Just call the API. AutoML requires labeled sentiment examples; custom BERT requires significantly more expertise.</em></p>

<p><strong>Q3:</strong> Which Vertex AI component should a team use to ensure that feature values used during model training are identical to those served at prediction time?</p>
<ul>
<li>A) Vertex AI Experiments</li>
<li>B) Vertex AI Feature Store ✓</li>
<li>C) Vertex AI Model Registry</li>
<li>D) Vertex AI Pipelines</li>
</ul>
<p><em>Explanation: Vertex AI Feature Store provides a centralized repository for storing, serving, and sharing ML features. It ensures training-serving consistency by using the same feature definitions and values for both training and online/batch prediction, preventing training-serving skew.</em></p>
package/content/series/luyen-thi/luyen-thi-gcp-ml-engineer/chapters/02-phan-2-data-engineering/lessons/03-bai-3-data-pipeline.md
@@ -0,0 +1,174 @@
---
id: 019c9619-lt03-l03
title: 'Lesson 3: Data Pipelines — Dataflow, Pub/Sub, Dataproc'
slug: bai-3-data-pipeline
description: >-
  Apache Beam on Dataflow for batch/streaming ETL.
  Pub/Sub for event-driven pipelines. Dataproc for Spark.
  Cloud Composer (Airflow) for orchestration.
duration_minutes: 60
is_free: true
video_url: null
sort_order: 3
section_title: "Part 2: Data Engineering & Feature Engineering"
course:
  id: 019c9619-lt03-7003-c003-lt0300000003
  title: 'Google Cloud Professional Machine Learning Engineer Exam Prep'
  slug: luyen-thi-gcp-ml-engineer
---

<div style="text-align: center; margin: 2rem 0;">
<img src="/storage/uploads/2026/04/gcp-mle-bai3-data-pipeline.png" alt="GCP Data Pipeline Architecture" style="max-width: 800px; width: 100%; border-radius: 12px;" />
<p><em>GCP data pipelines: Pub/Sub, Dataflow, Dataproc, Cloud Composer, and the flow of data into ML</em></p>
</div>

<h2 id="gcp-data-pipeline"><strong>1. GCP Data Pipeline Services</strong></h2>

<table>
<thead><tr><th>Service</th><th>Type</th><th>When to Use</th></tr></thead>
<tbody>
<tr><td><strong>Pub/Sub</strong></td><td>Managed message queue</td><td>Event streaming, decouple producers/consumers</td></tr>
<tr><td><strong>Dataflow</strong></td><td>Managed Apache Beam runner</td><td>Unified batch + streaming ETL</td></tr>
<tr><td><strong>Dataproc</strong></td><td>Managed Spark / Hadoop</td><td>Existing Spark/Hadoop workloads, ML at scale</td></tr>
<tr><td><strong>Cloud Composer</strong></td><td>Managed Apache Airflow</td><td>Orchestrate multi-step ML workflows</td></tr>
<tr><td><strong>Cloud Storage</strong></td><td>Object store</td><td>Raw data landing zone, model artifacts</td></tr>
<tr><td><strong>BigQuery</strong></td><td>Data warehouse</td><td>Structured analysis, BigQuery ML</td></tr>
</tbody>
</table>

<h2 id="pubsub"><strong>2. Pub/Sub — Event Streaming</strong></h2>

<pre><code class="language-text">Pub/Sub Architecture:

Data Source → Publisher → [Topic] → Subscription → Subscriber
(IoT devices,                       (Pull or       (Dataflow,
 web clicks,                         Push)          Cloud Functions,
 logs)                                              BigQuery)

Key concepts:
- Topic: named resource where messages are sent
- Subscription: named resource attached to a topic
- Publisher: sends messages to a topic
- Subscriber: receives messages from a subscription
- At-least-once delivery (not exactly-once by default)
</code></pre>
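Because delivery is at-least-once, a subscriber can receive the same message more than once, so processing has to be idempotent. A stdlib-only sketch of deduplication by message ID (a real subscriber would use the google-cloud-pubsub client and a durable store rather than an in-memory set):

```python
# At-least-once delivery means duplicates are possible; an idempotent
# subscriber performs side effects for each message_id at most once.
# Simplified sketch: no Pub/Sub client, in-memory dedup set.

class IdempotentSubscriber:
    def __init__(self):
        self.seen_ids = set()
        self.processed = []

    def handle(self, message_id: str, payload: str) -> bool:
        """Process a message once; redelivered duplicates are skipped."""
        if message_id in self.seen_ids:
            return False              # duplicate delivery: ack, but no side effects
        self.seen_ids.add(message_id)
        self.processed.append(payload)
        return True

sub = IdempotentSubscriber()
sub.handle("m1", "sensor=42")
sub.handle("m1", "sensor=42")   # redelivered duplicate, ignored
sub.handle("m2", "sensor=43")
```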

<table>
<thead><tr><th>Feature</th><th>Details</th></tr></thead>
<tbody>
<tr><td><strong>Message retention</strong></td><td>7 days default (configurable)</td></tr>
<tr><td><strong>At-least-once delivery</strong></td><td>Idempotent subscribers needed</td></tr>
<tr><td><strong>Exactly-once delivery</strong></td><td>Supported for pull subscriptions within a region (enabled per subscription)</td></tr>
<tr><td><strong>Ordering</strong></td><td>Enable message ordering with an ordering key</td></tr>
</tbody>
</table>

<blockquote>
<p><strong>Exam tip:</strong> Pub/Sub → Dataflow → BigQuery is an extremely common pipeline pattern on the exam. Pub/Sub ingests, Dataflow transforms, BigQuery stores and analyzes.</p>
</blockquote>

<h2 id="dataflow"><strong>3. Cloud Dataflow — Apache Beam</strong></h2>

<p>Dataflow is the managed runner for <strong>Apache Beam</strong>, a framework for unified batch and streaming processing. There are no servers to manage.</p>

<table>
<thead><tr><th>Concept</th><th>Description</th></tr></thead>
<tbody>
<tr><td><strong>Pipeline</strong></td><td>A chain of transform operations</td></tr>
<tr><td><strong>PCollection</strong></td><td>Distributed data collection (bounded or unbounded)</td></tr>
<tr><td><strong>Transform</strong></td><td>ParDo, GroupByKey, Combine, Flatten, Partition</td></tr>
<tr><td><strong>Windowing</strong></td><td>Fixed, Sliding, Session windows for streaming</td></tr>
<tr><td><strong>Watermarks</strong></td><td>Handle late-arriving data in streaming</td></tr>
</tbody>
</table>

<pre><code class="language-text">Dataflow Windowing for Streaming ML:

Event stream: ──●──●──●──────●──●──●──────●──●──

Fixed Window (1 min):
├─── [W1] ──┤├─── [W2] ──┤├─── [W3] ──┤

Sliding Window (1 min, slide 30s):
├── [W1] ────┤
      ├── [W2] ────┤
            ├── [W3] ────┤

Session Window (2 min gap):
├── [S1] ──────────┤      ├── [S2] ──┤
   (user session)          (new session)
</code></pre>
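The window shapes in the diagram can be made precise with a little arithmetic. The following stdlib-only sketch mirrors the semantics of fixed, sliding, and session windows (timestamps in seconds; this is not Beam API code):

```python
# Window-assignment arithmetic behind the diagram above.

def fixed_window(ts: int, size: int) -> int:
    """Start of the fixed window of length `size` containing ts."""
    return ts - ts % size

def sliding_windows(ts: int, size: int, period: int) -> list[int]:
    """Start times of every sliding window (length `size`, every `period`)
    that contains ts."""
    first = fixed_window(ts, period)
    return [start for start in range(first - size + period, first + 1, period)
            if start <= ts < start + size]

def session_windows(timestamps: list[int], gap: int) -> list[list[int]]:
    """Group timestamps into sessions; a silence of >= gap starts a new one."""
    sessions: list[list[int]] = []
    for ts in sorted(timestamps):
        if sessions and ts - sessions[-1][-1] < gap:
            sessions[-1].append(ts)   # still inside the current session
        else:
            sessions.append([ts])     # gap exceeded: new session
    return sessions
```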

<h2 id="dataproc"><strong>4. Cloud Dataproc — Managed Spark/Hadoop</strong></h2>

<table>
<thead><tr><th>Dataproc Feature</th><th>Details</th></tr></thead>
<tbody>
<tr><td><strong>Cluster lifecycle</strong></td><td>Create in 90 seconds, delete after the job — cost efficient</td></tr>
<tr><td><strong>Ephemeral clusters</strong></td><td>Spin up → run job → shut down (per-job pricing)</td></tr>
<tr><td><strong>Preemptible VMs</strong></td><td>Use for worker nodes to reduce cost 60-80%</td></tr>
<tr><td><strong>Component gateway</strong></td><td>Access Jupyter, Zeppelin, Spark UI via browser</td></tr>
<tr><td><strong>ML libraries</strong></td><td>Spark MLlib, TensorFlow on Spark (TFoS)</td></tr>
</tbody>
</table>

<h2 id="composer"><strong>5. Cloud Composer — Workflow Orchestration</strong></h2>

<p>Cloud Composer is managed Apache Airflow. Use it to orchestrate multi-step ML pipelines spanning data ingestion, preprocessing, training, and deployment.</p>

<pre><code class="language-text">Cloud Composer ML Workflow:

[DAG: daily_ml_pipeline]
Task 1: Extract data from BigQuery
   ↓
Task 2: Run Dataflow preprocessing job
   ↓
Task 3: Submit Vertex AI Training Job
   ↓
Task 4: Evaluate model metrics
   ↓ (if metrics pass threshold)
Task 5: Deploy to Vertex AI Endpoint
</code></pre>
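The DAG above boils down to a sequential chain with one conditional gate before deployment. A stdlib-only sketch of that control flow (in Composer each step would be an Airflow task using the corresponding GCP operators; the callables here are hypothetical stand-ins):

```python
# Control-flow sketch of the DAG above: train, evaluate, then promote the
# model only when the evaluation metric clears the threshold.

def run_pipeline(train, evaluate, deploy, accuracy_threshold=0.90):
    """Run train then evaluate, deploying only if accuracy passes the gate."""
    model = train()
    accuracy = evaluate(model)
    if accuracy >= accuracy_threshold:
        deploy(model)
        return "deployed"
    return "skipped"          # gate failed: model is not promoted

result = run_pipeline(
    train=lambda: "model-v2",        # stand-in for a Vertex AI training job
    evaluate=lambda model: 0.93,     # stand-in for a metrics task
    deploy=lambda model: None,       # stand-in for an endpoint deployment task
)
```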

<h2 id="decision-guide"><strong>6. Data Pipeline Service Selection</strong></h2>

<table>
<thead><tr><th>Scenario</th><th>Recommended Service</th></tr></thead>
<tbody>
<tr><td>Real-time event streaming ingestion</td><td>Pub/Sub</td></tr>
<tr><td>Unified batch + streaming ETL (no infra mgmt)</td><td>Dataflow (Apache Beam)</td></tr>
<tr><td>Migrate existing Spark jobs to GCP</td><td>Dataproc</td></tr>
<tr><td>Complex ML DAG orchestration</td><td>Cloud Composer</td></tr>
<tr><td>Stream data into BigQuery</td><td>Pub/Sub → Dataflow → BigQuery</td></tr>
<tr><td>Serverless data processing (SQL)</td><td>BigQuery (ETL via SQL)</td></tr>
</tbody>
</table>

<h2 id="practice"><strong>7. Practice Questions</strong></h2>

<p><strong>Q1:</strong> A company receives millions of IoT sensor events per second from factory equipment. They need to process these events in real time, detect anomalies, and store results in BigQuery. Which pipeline architecture is MOST appropriate?</p>
<ul>
<li>A) Dataproc → Spark Streaming → BigQuery</li>
<li>B) Pub/Sub → Dataflow → BigQuery ✓</li>
<li>C) Cloud Functions → Cloud SQL</li>
<li>D) Batch upload to Cloud Storage → BigQuery import</li>
</ul>
<p><em>Explanation: Pub/Sub ingests high-volume streaming events reliably. Dataflow processes the stream in real time using Apache Beam (windowing, transformations, anomaly detection). BigQuery stores the results for analysis. This is the canonical GCP streaming analytics pattern.</em></p>

<p><strong>Q2:</strong> A data engineering team has an existing Apache Spark job that processes training data for ML models. They want to migrate it to GCP with minimal code changes. Which service should they use?</p>
<ul>
<li>A) Cloud Dataflow</li>
<li>B) Cloud Dataproc ✓</li>
<li>C) BigQuery ETL</li>
<li>D) Cloud Composer</li>
</ul>
<p><em>Explanation: Cloud Dataproc supports Apache Spark natively, allowing teams to run existing Spark jobs on GCP with minimal changes. Dataflow uses Apache Beam (different programming model). Dataproc is the lift-and-shift option for Spark workloads.</em></p>

<p><strong>Q3:</strong> A team needs to orchestrate a daily ML pipeline that includes data extraction from BigQuery, preprocessing, Vertex AI training, and deployment if accuracy exceeds 90%. Which service handles this workflow orchestration?</p>
<ul>
<li>A) Vertex AI Pipelines</li>
<li>B) Cloud Dataflow</li>
<li>C) Cloud Composer ✓</li>
<li>D) Pub/Sub triggers</li>
</ul>
<p><em>Explanation: Cloud Composer (managed Apache Airflow) is designed for complex DAG orchestration across multiple services. It handles scheduling, conditional branching (deploy only if accuracy > 90%), retry logic, and monitoring across heterogeneous services like BigQuery, Dataflow, and Vertex AI.</em></p>
package/content/series/luyen-thi/luyen-thi-gcp-ml-engineer/chapters/02-phan-2-data-engineering/lessons/04-bai-4-feature-engineering.md
@@ -0,0 +1,156 @@
---
id: 019c9619-lt03-l04
title: 'Lesson 4: Feature Engineering & Vertex AI Feature Store'
slug: bai-4-feature-engineering
description: >-
  Feature engineering techniques. BigQuery for feature computation.
  Vertex AI Feature Store: online/offline serving.
  Feature monitoring and training/serving consistency.
duration_minutes: 60
is_free: true
video_url: null
sort_order: 4
section_title: "Part 2: Data Engineering & Feature Engineering"
course:
  id: 019c9619-lt03-7003-c003-lt0300000003
  title: 'Google Cloud Professional Machine Learning Engineer Exam Prep'
  slug: luyen-thi-gcp-ml-engineer
---

<div style="text-align: center; margin: 2rem 0;">
<img src="/storage/uploads/2026/04/gcp-mle-bai4-feature-store.png" alt="Vertex AI Feature Store" style="max-width: 800px; width: 100%; border-radius: 12px;" />
<p><em>Feature engineering & Vertex AI Feature Store: creating, storing, and reusing features for ML</em></p>
</div>

<h2 id="feature-engineering"><strong>1. Feature Engineering Techniques</strong></h2>

<table>
<thead><tr><th>Technique</th><th>When to Use</th><th>Example</th></tr></thead>
<tbody>
<tr><td><strong>Normalization (Min-Max)</strong></td><td>Bounded range required (0-1)</td><td>Image pixels, probabilities</td></tr>
<tr><td><strong>Standardization (Z-score)</strong></td><td>Normal-ish distribution, no bounds</td><td>Customer age, transaction amount</td></tr>
<tr><td><strong>Log Transform</strong></td><td>Skewed distributions (price, salary)</td><td>Log(price) for housing</td></tr>
<tr><td><strong>One-Hot Encoding</strong></td><td>Nominal categorical (no order)</td><td>Country, brand, color</td></tr>
<tr><td><strong>Label Encoding</strong></td><td>Ordinal categorical (has order)</td><td>Low/Medium/High → 0/1/2</td></tr>
<tr><td><strong>Feature Crossing</strong></td><td>Capture interaction between features</td><td>city × day_of_week</td></tr>
<tr><td><strong>Bucketizing</strong></td><td>Convert continuous to categorical</td><td>Age → age_group</td></tr>
<tr><td><strong>Embeddings</strong></td><td>High-cardinality categorical</td><td>UserID, ProductID</td></tr>
</tbody>
</table>
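The numeric and categorical transforms in the table are easy to state exactly. A plain-Python sketch (no sklearn; illustration only):

```python
# Reference implementations of the transforms listed above.
import math

def min_max(values):
    """Min-max normalization to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def z_score(values):
    """Standardization: zero mean, unit variance."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

def log_transform(values):
    """log1p compresses right-skewed features like price or salary."""
    return [math.log1p(v) for v in values]

def one_hot(value, vocabulary):
    """One-hot encoding for nominal categories."""
    return [1 if value == v else 0 for v in vocabulary]

def bucketize(value, boundaries):
    """Index of the bucket a continuous value falls into."""
    return sum(value >= b for b in boundaries)
```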

<h2 id="missing-values"><strong>2. Handling Missing Values</strong></h2>

<table>
<thead><tr><th>Strategy</th><th>When</th></tr></thead>
<tbody>
<tr><td><strong>Mean/Median imputation</strong></td><td>Numerical, low missingness rate</td></tr>
<tr><td><strong>Mode imputation</strong></td><td>Categorical features</td></tr>
<tr><td><strong>Model-based imputation</strong></td><td>High missingness, complex patterns</td></tr>
<tr><td><strong>Indicator variable</strong></td><td>Missingness itself is informative (add is_missing flag)</td></tr>
<tr><td><strong>Drop rows</strong></td><td>Missing target / very few rows affected</td></tr>
<tr><td><strong>Drop column</strong></td><td>&gt;80% missing</td></tr>
</tbody>
</table>
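Mean imputation is often combined with the indicator-variable strategy so the model can still see that a value was missing. A minimal sketch, using `None` to mark missing values:

```python
# Mean imputation plus an is_missing indicator column, combining the two
# strategies from the table above.

def impute_with_indicator(values):
    """Replace None with the mean of observed values; add a missing flag."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    imputed = [v if v is not None else mean for v in values]
    is_missing = [1 if v is None else 0 for v in values]
    return imputed, is_missing

filled, flags = impute_with_indicator([10.0, None, 20.0, None])
```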

<h2 id="training-serving-skew"><strong>3. Training-Serving Skew</strong></h2>

<p><strong>Training-serving skew</strong> is a serious problem: features are computed differently between training and serving, so the model performs poorly in production even though its test metrics look good.</p>

<pre><code class="language-text">Training-Serving Skew Example:

TRAINING TIME:
  avg_purchase_last_30d = mean(all purchases in batch)  ← computed over full period

SERVING TIME:
  avg_purchase_last_30d = mean(last 5 purchases)        ← computed differently!

Result: Feature distribution mismatch → poor predictions

SOLUTION: Vertex AI Feature Store
  Same feature-serving logic used at training AND serving time
</code></pre>
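Skew like this is detectable by comparing a training baseline against serving statistics. Vertex AI Model Monitoring uses distribution distance measures for this; a stdlib-only sketch using a relative shift in the mean keeps the idea concrete:

```python
# Flag feature skew when the serving mean deviates from the training
# baseline by more than a relative tolerance. A deliberately simple stand-in
# for the distribution-distance checks a monitoring service would run.

def mean(xs):
    return sum(xs) / len(xs)

def has_skew(training_values, serving_values, tolerance=0.25):
    """True if the serving mean shifts by more than `tolerance` (relative)."""
    baseline = mean(training_values)
    shift = abs(mean(serving_values) - baseline) / abs(baseline)
    return shift > tolerance

# Same feature computed two ways, as in the example above:
training = [100, 120, 110, 130]   # avg over the full 30-day window
serving = [300, 280, 310]         # avg over only the last 5 purchases
```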

<h2 id="feature-store"><strong>4. Vertex AI Feature Store</strong></h2>

<table>
<thead><tr><th>Component</th><th>Description</th></tr></thead>
<tbody>
<tr><td><strong>Feature Store</strong></td><td>Centralized repository for ML features</td></tr>
<tr><td><strong>Entity Type</strong></td><td>Category of things you track (User, Product)</td></tr>
<tr><td><strong>Feature</strong></td><td>Named attribute of an entity (user.avg_spend)</td></tr>
<tr><td><strong>Online Store</strong></td><td>Low-latency serving (ms) for real-time predictions</td></tr>
<tr><td><strong>Offline Store</strong></td><td>BigQuery-backed, for batch training data retrieval</td></tr>
</tbody>
</table>

<pre><code class="language-text">Vertex AI Feature Store Architecture:

Feature Ingestion (Batch or Streaming)
                ↓
┌──── Feature Store ────────────────┐
│  Offline Store (BigQuery)         │ ← Training data export
│  Online Store (Bigtable-backed)   │ ← Serving (ms latency)
└───────────────────────────────────┘
     ↑   Same features   ↑
  Training           Inference
  Pipeline           Endpoint
</code></pre>

<h2 id="bigquery-features"><strong>5. BigQuery for Feature Engineering</strong></h2>

<p>BigQuery is the best tool on GCP for computing aggregate features from large datasets.</p>

<table>
<thead><tr><th>Feature Pattern</th><th>BigQuery Approach</th></tr></thead>
<tbody>
<tr><td>Rolling window aggregates</td><td>Window functions: AVG() OVER (PARTITION BY ... ORDER BY ... ROWS BETWEEN ...)</td></tr>
<tr><td>User activity counts</td><td>COUNT() GROUP BY user_id</td></tr>
<tr><td>Categorical encoding</td><td>CASE WHEN ... or ML.ONE_HOT_ENCODER()</td></tr>
<tr><td>Hash embedding (high cardinality)</td><td>FARM_FINGERPRINT() mod N</td></tr>
<tr><td>Feature normalization</td><td>ML.STANDARD_SCALER() in BigQuery ML</td></tr>
</tbody>
</table>
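The rolling-window row corresponds to SQL such as `AVG(amount) OVER (PARTITION BY user_id ORDER BY ts ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)`. The same computation in plain Python, for one user's already-ordered values (a sketch; column names are illustrative):

```python
# Rolling average over the current row and the N preceding rows, mirroring
# AVG() OVER (... ROWS BETWEEN n PRECEDING AND CURRENT ROW). The input is
# one partition (one user), already sorted by the window's ORDER BY key.

def rolling_avg(ordered_values, preceding):
    out = []
    for i in range(len(ordered_values)):
        window = ordered_values[max(0, i - preceding): i + 1]
        out.append(sum(window) / len(window))
    return out

# e.g. a user's daily spend, 2-preceding (3-row) rolling average
spend = [10, 20, 30, 40]
features = rolling_avg(spend, preceding=2)
```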

<blockquote>
<p><strong>Exam tip:</strong> When a question mentions "training-serving consistency" or "feature reuse across multiple models" → <strong>Vertex AI Feature Store</strong>. When it mentions "compute features from BigQuery data at scale" → BigQuery window functions plus scheduled queries.</p>
</blockquote>

<h2 id="feature-monitoring"><strong>6. Feature Drift Monitoring</strong></h2>

<table>
<thead><tr><th>Type</th><th>What Changes</th><th>Detection Method</th></tr></thead>
<tbody>
<tr><td><strong>Feature Skew</strong></td><td>Training vs serving feature distribution differs</td><td>Compare training baseline vs serving stats</td></tr>
<tr><td><strong>Feature Drift</strong></td><td>Serving features change over time</td><td>Monitor serving feature distributions daily</td></tr>
<tr><td><strong>Label Drift</strong></td><td>Target variable distribution changes</td><td>Track prediction distribution shifts</td></tr>
</tbody>
</table>

<h2 id="practice"><strong>7. Practice Questions</strong></h2>

<p><strong>Q1:</strong> A team's ML model has excellent accuracy during testing but performs poorly in production. Investigations reveal that the average purchase feature is calculated differently in training (using historical batch data) vs. serving (using real-time lookups). What is this problem called and how should it be solved?</p>
<ul>
<li>A) Model drift — retrain the model more frequently</li>
<li>B) Training-serving skew — use Vertex AI Feature Store ✓</li>
<li>C) Data leakage — remove the purchase feature</li>
<li>D) Overfitting — add dropout layers</li>
</ul>
<p><em>Explanation: Training-serving skew occurs when features are computed differently at training and serving time. Vertex AI Feature Store solves this by providing a single source of truth for feature computation, ensuring the same logic is used for both training data export and online serving.</em></p>

<p><strong>Q2:</strong> A feature has values ranging from $10 to $10,000,000 with a heavily right-skewed distribution. Which transformation is MOST appropriate before using this feature in a linear model?</p>
<ul>
<li>A) One-Hot Encoding</li>
<li>B) Min-Max Normalization</li>
<li>C) Log transformation ✓</li>
<li>D) Label Encoding</li>
</ul>
<p><em>Explanation: Log transformation compresses the scale of highly skewed distributions, making them more normal-like and suitable for linear models. Min-Max normalization would still preserve the skew. One-hot encoding is for categorical data.</em></p>

<p><strong>Q3:</strong> Which Vertex AI Feature Store store type is optimized for serving features to real-time prediction endpoints with millisecond latency?</p>
<ul>
<li>A) Offline Store (BigQuery)</li>
<li>B) Online Store (Bigtable-backed) ✓</li>
<li>C) Feature Catalog</li>
<li>D) Cloud Memorystore</li>
</ul>
<p><em>Explanation: The Online Store in Vertex AI Feature Store is backed by Bigtable and designed for sub-100ms latency lookups, serving fresh feature values to real-time prediction endpoints. The Offline Store uses BigQuery and is for batch training data retrieval.</em></p>