agent-starter-pack 0.0.1b0__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release.
This version of agent-starter-pack might be problematic.
- agent_starter_pack-0.0.1b0.dist-info/METADATA +143 -0
- agent_starter_pack-0.0.1b0.dist-info/RECORD +162 -0
- agent_starter_pack-0.0.1b0.dist-info/WHEEL +4 -0
- agent_starter_pack-0.0.1b0.dist-info/entry_points.txt +2 -0
- agent_starter_pack-0.0.1b0.dist-info/licenses/LICENSE +201 -0
- agents/agentic_rag_vertexai_search/README.md +22 -0
- agents/agentic_rag_vertexai_search/app/agent.py +145 -0
- agents/agentic_rag_vertexai_search/app/retrievers.py +79 -0
- agents/agentic_rag_vertexai_search/app/templates.py +53 -0
- agents/agentic_rag_vertexai_search/notebooks/evaluating_langgraph_agent.ipynb +1561 -0
- agents/agentic_rag_vertexai_search/template/.templateconfig.yaml +14 -0
- agents/agentic_rag_vertexai_search/tests/integration/test_agent.py +57 -0
- agents/crewai_coding_crew/README.md +34 -0
- agents/crewai_coding_crew/app/agent.py +86 -0
- agents/crewai_coding_crew/app/crew/config/agents.yaml +39 -0
- agents/crewai_coding_crew/app/crew/config/tasks.yaml +37 -0
- agents/crewai_coding_crew/app/crew/crew.py +71 -0
- agents/crewai_coding_crew/notebooks/evaluating_crewai_agent.ipynb +1571 -0
- agents/crewai_coding_crew/notebooks/evaluating_langgraph_agent.ipynb +1561 -0
- agents/crewai_coding_crew/template/.templateconfig.yaml +12 -0
- agents/crewai_coding_crew/tests/integration/test_agent.py +47 -0
- agents/langgraph_base_react/README.md +9 -0
- agents/langgraph_base_react/app/agent.py +73 -0
- agents/langgraph_base_react/notebooks/evaluating_langgraph_agent.ipynb +1561 -0
- agents/langgraph_base_react/template/.templateconfig.yaml +13 -0
- agents/langgraph_base_react/tests/integration/test_agent.py +48 -0
- agents/multimodal_live_api/README.md +50 -0
- agents/multimodal_live_api/app/agent.py +86 -0
- agents/multimodal_live_api/app/server.py +193 -0
- agents/multimodal_live_api/app/templates.py +51 -0
- agents/multimodal_live_api/app/vector_store.py +55 -0
- agents/multimodal_live_api/template/.templateconfig.yaml +15 -0
- agents/multimodal_live_api/tests/integration/test_server_e2e.py +254 -0
- agents/multimodal_live_api/tests/load_test/load_test.py +40 -0
- agents/multimodal_live_api/tests/unit/test_server.py +143 -0
- src/base_template/.gitignore +197 -0
- src/base_template/Makefile +37 -0
- src/base_template/README.md +91 -0
- src/base_template/app/utils/tracing.py +143 -0
- src/base_template/app/utils/typing.py +115 -0
- src/base_template/deployment/README.md +123 -0
- src/base_template/deployment/cd/deploy-to-prod.yaml +98 -0
- src/base_template/deployment/cd/staging.yaml +215 -0
- src/base_template/deployment/ci/pr_checks.yaml +51 -0
- src/base_template/deployment/terraform/apis.tf +34 -0
- src/base_template/deployment/terraform/build_triggers.tf +122 -0
- src/base_template/deployment/terraform/dev/apis.tf +42 -0
- src/base_template/deployment/terraform/dev/iam.tf +90 -0
- src/base_template/deployment/terraform/dev/log_sinks.tf +66 -0
- src/base_template/deployment/terraform/dev/providers.tf +29 -0
- src/base_template/deployment/terraform/dev/storage.tf +76 -0
- src/base_template/deployment/terraform/dev/variables.tf +126 -0
- src/base_template/deployment/terraform/dev/vars/env.tfvars +21 -0
- src/base_template/deployment/terraform/iam.tf +130 -0
- src/base_template/deployment/terraform/locals.tf +50 -0
- src/base_template/deployment/terraform/log_sinks.tf +72 -0
- src/base_template/deployment/terraform/providers.tf +35 -0
- src/base_template/deployment/terraform/service_accounts.tf +42 -0
- src/base_template/deployment/terraform/storage.tf +100 -0
- src/base_template/deployment/terraform/variables.tf +202 -0
- src/base_template/deployment/terraform/vars/env.tfvars +43 -0
- src/base_template/pyproject.toml +113 -0
- src/base_template/tests/unit/test_utils/test_tracing_exporter.py +140 -0
- src/cli/commands/create.py +534 -0
- src/cli/commands/setup_cicd.py +730 -0
- src/cli/main.py +35 -0
- src/cli/utils/__init__.py +35 -0
- src/cli/utils/cicd.py +662 -0
- src/cli/utils/gcp.py +120 -0
- src/cli/utils/logging.py +51 -0
- src/cli/utils/template.py +644 -0
- src/data_ingestion/README.md +79 -0
- src/data_ingestion/data_ingestion_pipeline/components/ingest_data.py +175 -0
- src/data_ingestion/data_ingestion_pipeline/components/process_data.py +321 -0
- src/data_ingestion/data_ingestion_pipeline/pipeline.py +58 -0
- src/data_ingestion/data_ingestion_pipeline/submit_pipeline.py +184 -0
- src/data_ingestion/pyproject.toml +17 -0
- src/data_ingestion/uv.lock +999 -0
- src/deployment_targets/agent_engine/app/agent_engine_app.py +238 -0
- src/deployment_targets/agent_engine/app/utils/gcs.py +42 -0
- src/deployment_targets/agent_engine/deployment_metadata.json +4 -0
- src/deployment_targets/agent_engine/notebooks/intro_reasoning_engine.ipynb +869 -0
- src/deployment_targets/agent_engine/tests/integration/test_agent_engine_app.py +120 -0
- src/deployment_targets/agent_engine/tests/load_test/.results/.placeholder +0 -0
- src/deployment_targets/agent_engine/tests/load_test/.results/report.html +264 -0
- src/deployment_targets/agent_engine/tests/load_test/.results/results_exceptions.csv +1 -0
- src/deployment_targets/agent_engine/tests/load_test/.results/results_failures.csv +1 -0
- src/deployment_targets/agent_engine/tests/load_test/.results/results_stats.csv +3 -0
- src/deployment_targets/agent_engine/tests/load_test/.results/results_stats_history.csv +22 -0
- src/deployment_targets/agent_engine/tests/load_test/README.md +42 -0
- src/deployment_targets/agent_engine/tests/load_test/load_test.py +100 -0
- src/deployment_targets/agent_engine/tests/unit/test_dummy.py +22 -0
- src/deployment_targets/cloud_run/Dockerfile +29 -0
- src/deployment_targets/cloud_run/app/server.py +128 -0
- src/deployment_targets/cloud_run/deployment/terraform/artifact_registry.tf +22 -0
- src/deployment_targets/cloud_run/deployment/terraform/dev/service_accounts.tf +20 -0
- src/deployment_targets/cloud_run/tests/integration/test_server_e2e.py +192 -0
- src/deployment_targets/cloud_run/tests/load_test/.results/.placeholder +0 -0
- src/deployment_targets/cloud_run/tests/load_test/README.md +79 -0
- src/deployment_targets/cloud_run/tests/load_test/load_test.py +85 -0
- src/deployment_targets/cloud_run/tests/unit/test_server.py +142 -0
- src/deployment_targets/cloud_run/uv.lock +6952 -0
- src/frontends/live_api_react/frontend/package-lock.json +19405 -0
- src/frontends/live_api_react/frontend/package.json +56 -0
- src/frontends/live_api_react/frontend/public/favicon.ico +0 -0
- src/frontends/live_api_react/frontend/public/index.html +62 -0
- src/frontends/live_api_react/frontend/public/robots.txt +3 -0
- src/frontends/live_api_react/frontend/src/App.scss +189 -0
- src/frontends/live_api_react/frontend/src/App.test.tsx +25 -0
- src/frontends/live_api_react/frontend/src/App.tsx +205 -0
- src/frontends/live_api_react/frontend/src/components/audio-pulse/AudioPulse.tsx +64 -0
- src/frontends/live_api_react/frontend/src/components/audio-pulse/audio-pulse.scss +68 -0
- src/frontends/live_api_react/frontend/src/components/control-tray/ControlTray.tsx +217 -0
- src/frontends/live_api_react/frontend/src/components/control-tray/control-tray.scss +201 -0
- src/frontends/live_api_react/frontend/src/components/logger/Logger.tsx +241 -0
- src/frontends/live_api_react/frontend/src/components/logger/logger.scss +133 -0
- src/frontends/live_api_react/frontend/src/components/logger/mock-logs.ts +151 -0
- src/frontends/live_api_react/frontend/src/components/side-panel/SidePanel.tsx +161 -0
- src/frontends/live_api_react/frontend/src/components/side-panel/side-panel.scss +285 -0
- src/frontends/live_api_react/frontend/src/contexts/LiveAPIContext.tsx +48 -0
- src/frontends/live_api_react/frontend/src/hooks/use-live-api.ts +115 -0
- src/frontends/live_api_react/frontend/src/hooks/use-media-stream-mux.ts +23 -0
- src/frontends/live_api_react/frontend/src/hooks/use-screen-capture.ts +72 -0
- src/frontends/live_api_react/frontend/src/hooks/use-webcam.ts +69 -0
- src/frontends/live_api_react/frontend/src/index.css +28 -0
- src/frontends/live_api_react/frontend/src/index.tsx +35 -0
- src/frontends/live_api_react/frontend/src/multimodal-live-types.ts +242 -0
- src/frontends/live_api_react/frontend/src/react-app-env.d.ts +17 -0
- src/frontends/live_api_react/frontend/src/reportWebVitals.ts +31 -0
- src/frontends/live_api_react/frontend/src/setupTests.ts +21 -0
- src/frontends/live_api_react/frontend/src/utils/audio-recorder.ts +111 -0
- src/frontends/live_api_react/frontend/src/utils/audio-streamer.ts +270 -0
- src/frontends/live_api_react/frontend/src/utils/audioworklet-registry.ts +43 -0
- src/frontends/live_api_react/frontend/src/utils/multimodal-live-client.ts +329 -0
- src/frontends/live_api_react/frontend/src/utils/store-logger.ts +64 -0
- src/frontends/live_api_react/frontend/src/utils/utils.ts +86 -0
- src/frontends/live_api_react/frontend/src/utils/worklets/audio-processing.ts +73 -0
- src/frontends/live_api_react/frontend/src/utils/worklets/vol-meter.ts +65 -0
- src/frontends/live_api_react/frontend/tsconfig.json +25 -0
- src/frontends/streamlit/frontend/side_bar.py +213 -0
- src/frontends/streamlit/frontend/streamlit_app.py +263 -0
- src/frontends/streamlit/frontend/style/app_markdown.py +37 -0
- src/frontends/streamlit/frontend/utils/chat_utils.py +67 -0
- src/frontends/streamlit/frontend/utils/local_chat_history.py +125 -0
- src/frontends/streamlit/frontend/utils/message_editing.py +59 -0
- src/frontends/streamlit/frontend/utils/multimodal_utils.py +217 -0
- src/frontends/streamlit/frontend/utils/stream_handler.py +282 -0
- src/frontends/streamlit/frontend/utils/title_summary.py +77 -0
- src/resources/containers/data_processing/Dockerfile +25 -0
- src/resources/locks/uv-agentic_rag_vertexai_search-agent_engine.lock +4684 -0
- src/resources/locks/uv-agentic_rag_vertexai_search-cloud_run.lock +5799 -0
- src/resources/locks/uv-crewai_coding_crew-agent_engine.lock +5509 -0
- src/resources/locks/uv-crewai_coding_crew-cloud_run.lock +6688 -0
- src/resources/locks/uv-langgraph_base_react-agent_engine.lock +4595 -0
- src/resources/locks/uv-langgraph_base_react-cloud_run.lock +5710 -0
- src/resources/locks/uv-multimodal_live_api-cloud_run.lock +5665 -0
- src/resources/setup_cicd/cicd_variables.tf +36 -0
- src/resources/setup_cicd/github.tf +85 -0
- src/resources/setup_cicd/providers.tf +39 -0
- src/utils/generate_locks.py +135 -0
- src/utils/lock_utils.py +82 -0
- src/utils/watch_and_rebuild.py +190 -0
- src/deployment_targets/agent_engine/tests/load_test/.results/results_exceptions.csv +1 -0
@@ -0,0 +1 @@
Count,Message,Traceback,Nodes
- src/deployment_targets/agent_engine/tests/load_test/.results/results_failures.csv +1 -0
@@ -0,0 +1 @@
Method,Name,Error,Occurrences
- src/deployment_targets/agent_engine/tests/load_test/.results/results_stats.csv +3 -0
@@ -0,0 +1,3 @@
Type,Name,Request Count,Failure Count,Median Response Time,Average Response Time,Min Response Time,Max Response Time,Average Content Size,Requests/s,Failures/s,50%,66%,75%,80%,90%,95%,98%,99%,99.9%,99.99%,100%
STREAM_END,reasoning_engine_stream_end,18,0,2400.0,2360.90800497267,1843.3630466461182,2849.168300628662,1650.2222222222222,1.101276838322333,0.0,2400,2400,2500,2500,2600,2800,2800,2800,2800,2800,2800
,Aggregated,18,0,2400.0,2360.90800497267,1843.3630466461182,2849.168300628662,1650.2222222222222,1.101276838322333,0.0,2400,2400,2500,2500,2600,2800,2800,2800,2800,2800,2800
- src/deployment_targets/agent_engine/tests/load_test/.results/results_stats_history.csv +22 -0
@@ -0,0 +1,22 @@
Timestamp,User Count,Type,Name,Requests/s,Failures/s,50%,66%,75%,80%,90%,95%,98%,99%,99.9%,99.99%,100%,Total Request Count,Total Failure Count,Total Median Response Time,Total Average Response Time,Total Min Response Time,Total Max Response Time,Total Average Content Size
1737391419,0,,Aggregated,0.000000,0.000000,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,0,0,0,0.0,0,0,0
1737391420,1,,Aggregated,0.000000,0.000000,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,0,0,0,0.0,0,0,0
1737391421,2,,Aggregated,0.000000,0.000000,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,0,0,0,0.0,0,0,0
1737391422,2,,Aggregated,0.000000,0.000000,2400,2400,2400,2400,2400,2400,2400,2400,2400,2400,2400,1,0,2390.5889987945557,2390.5889987945557,2390.5889987945557,2390.5889987945557,1637.0
1737391423,3,,Aggregated,0.000000,0.000000,2400,2400,2400,2400,2400,2400,2400,2400,2400,2400,2400,2,0,1900.0,2129.990339279175,1869.391679763794,2390.5889987945557,1641.5
1737391424,3,,Aggregated,0.000000,0.000000,2400,2400,2400,2400,2400,2400,2400,2400,2400,2400,2400,2,0,1900.0,2129.990339279175,1869.391679763794,2390.5889987945557,1641.5
1737391425,4,,Aggregated,0.000000,0.000000,2400,2400,2400,2400,2400,2400,2400,2400,2400,2400,2400,2,0,1900.0,2129.990339279175,1869.391679763794,2390.5889987945557,1641.5
1737391426,4,,Aggregated,0.400000,0.000000,2400,2400,2400,2400,2400,2400,2400,2400,2400,2400,2400,4,0,2400.0,2269.8291540145874,1869.391679763794,2444.993019104004,1643.75
1737391427,5,,Aggregated,0.400000,0.000000,2400,2400,2400,2400,2400,2400,2400,2400,2400,2400,2400,4,0,2400.0,2269.8291540145874,1869.391679763794,2444.993019104004,1643.75
1737391428,5,,Aggregated,0.285714,0.000000,2400,2400,2400,2400,2500,2500,2500,2500,2500,2500,2500,6,0,2400.0,2324.6165911356607,1869.391679763794,2451.6820907592773,1644.5
1737391429,6,,Aggregated,0.285714,0.000000,2400,2400,2400,2400,2500,2500,2500,2500,2500,2500,2500,6,0,2400.0,2324.6165911356607,1869.391679763794,2451.6820907592773,1644.5
1737391430,6,,Aggregated,0.444444,0.000000,2400,2400,2400,2400,2500,2500,2500,2500,2500,2500,2500,9,0,2400.0,2345.083819495307,1869.391679763794,2451.6820907592773,1645.0
1737391431,7,,Aggregated,0.444444,0.000000,2400,2400,2400,2400,2500,2500,2500,2500,2500,2500,2500,9,0,2400.0,2345.083819495307,1869.391679763794,2451.6820907592773,1645.0
1737391432,7,,Aggregated,0.600000,0.000000,2400,2400,2400,2400,2500,2500,2500,2500,2500,2500,2500,12,0,2400.0,2364.6987676620483,1869.391679763794,2516.0341262817383,1646.8333333333333
1737391433,8,,Aggregated,0.600000,0.000000,2400,2400,2400,2400,2500,2500,2500,2500,2500,2500,2500,12,0,2400.0,2364.6987676620483,1869.391679763794,2516.0341262817383,1646.8333333333333
1737391434,8,,Aggregated,0.900000,0.000000,2400,2400,2500,2500,2500,2800,2800,2800,2800,2800,2800,14,0,2400.0,2411.8646723883494,1869.391679763794,2849.168300628662,1651.4285714285713
1737391435,9,,Aggregated,1.100000,0.000000,2400,2400,2500,2500,2500,2800,2800,2800,2800,2800,2800,16,0,2400.0,2379.8188269138336,1869.391679763794,2849.168300628662,1650.75
1737391437,9,,Aggregated,1.000000,0.000000,2400,2400,2500,2500,2600,2800,2800,2800,2800,2800,2800,18,0,2400.0,2360.90800497267,1843.3630466461182,2849.168300628662,1650.2222222222222
1737391438,10,,Aggregated,1.000000,0.000000,2400,2400,2500,2500,2600,2800,2800,2800,2800,2800,2800,18,0,2400.0,2360.90800497267,1843.3630466461182,2849.168300628662,1650.2222222222222
1737391439,10,,Aggregated,1.000000,0.000000,2400,2400,2500,2500,2600,2800,2800,2800,2800,2800,2800,18,0,2400.0,2360.90800497267,1843.3630466461182,2849.168300628662,1650.2222222222222
1737391440,10,,Aggregated,1.000000,0.000000,2400,2400,2500,2500,2600,2800,2800,2800,2800,2800,2800,18,0,2400.0,2360.90800497267,1843.3630466461182,2849.168300628662,1650.2222222222222
- src/deployment_targets/agent_engine/tests/load_test/README.md +42 -0
@@ -0,0 +1,42 @@
# Robust Load Testing for Generative AI Applications

This directory provides a comprehensive load testing framework for your Generative AI application, leveraging the power of [Locust](http://locust.io), a leading open-source load testing tool.

## Load Testing

Before running load tests, ensure you have deployed the backend remotely.

Follow these steps to execute load tests:

**1. Deploy the Backend Remotely:**
```bash
gcloud config set project <your-dev-project-id>
make backend
```

**2. Create a Virtual Environment for Locust:**
It's recommended to use a separate terminal tab and create a virtual environment for Locust to avoid conflicts with your application's Python environment.

```bash
# Create and activate virtual environment
python3 -m venv locust_env
source locust_env/bin/activate

# Install required packages
pip install locust==2.31.1 "google-cloud-aiplatform[langchain,reasoningengine]>=1.77.0"
```

**3. Execute the Load Test:**
Trigger the Locust load test with the following command:

```bash
export _AUTH_TOKEN=$(gcloud auth print-access-token -q)
locust -f tests/load_test/load_test.py \
  --headless \
  -t 30s -u 5 -r 2 \
  --csv=tests/load_test/.results/results \
  --html=tests/load_test/.results/report.html
```

This command initiates a 30-second load test, simulating 2 users spawning per second, reaching a maximum of 10 concurrent users.
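As an aside (not one of the packaged files), the `--csv` prefix above produces the stats CSVs shown earlier in this diff. A minimal sketch of pulling the aggregate latency figures out of that output with Python's standard `csv` module, assuming the test was run from the project root with the `--csv=tests/load_test/.results/results` prefix used in the command above:

```python
import csv

# Assumption: path follows the --csv prefix passed to locust above.
STATS_CSV = "tests/load_test/.results/results_stats.csv"

with open(STATS_CSV, newline="") as f:
    for row in csv.DictReader(f):
        # Locust writes one row per request name plus a final "Aggregated" summary row.
        if row["Name"] == "Aggregated":
            print(
                f"requests={row['Request Count']} "
                f"failures={row['Failure Count']} "
                f"median_ms={row['Median Response Time']} "
                f"p95_ms={row['95%']}"
            )
```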
- src/deployment_targets/agent_engine/tests/load_test/load_test.py +100 -0
@@ -0,0 +1,100 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json
import logging
import os
import time

from locust import HttpUser, between, task

# Configure logging
logging.basicConfig(
    level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger(__name__)

# Initialize Vertex AI and load agent config
with open("deployment_metadata.json") as f:
    remote_agent_engine_id = json.load(f)["remote_agent_engine_id"]

parts = remote_agent_engine_id.split("/")
project_id = parts[1]
location = parts[3]
engine_id = parts[5]

# Convert remote agent engine ID to streaming URL.
base_url = f"https://{location}-aiplatform.googleapis.com"
url_path = f"/v1beta1/projects/{project_id}/locations/{location}/reasoningEngines/{engine_id}:streamQuery"

logger.info("Using remote agent engine ID: %s", remote_agent_engine_id)
logger.info("Using base URL: %s", base_url)
logger.info("Using URL path: %s", url_path)


class ChatStreamUser(HttpUser):
    """Simulates a user interacting with the chat stream API."""

    wait_time = between(1, 3)  # Wait 1-3 seconds between tasks
    host = base_url  # Set the base host URL for Locust

    @task
    def chat_stream(self) -> None:
        """Simulates a chat stream interaction."""
        headers = {"Content-Type": "application/json"}
        headers["Authorization"] = f"Bearer {os.environ['_AUTH_TOKEN']}"

        data = {
            "input": {
                "input": {
                    "messages": [
                        {"type": "human", "content": "Hello, AI!"},
                        {"type": "ai", "content": "Hello!"},
                        {"type": "human", "content": "How are you?"},
                    ]
                },
                "config": {
                    "metadata": {"user_id": "test-user", "session_id": "test-session"}
                },
            }
        }

        start_time = time.time()
        with self.client.post(
            url_path,
            headers=headers,
            json=data,
            catch_response=True,
            name="/stream_messages first message",
            stream=True,
            params={"alt": "sse"},
        ) as response:
            if response.status_code == 200:
                events = []
                for line in response.iter_lines():
                    if line:
                        event = json.loads(line)
                        events.append(event)
                end_time = time.time()
                total_time = end_time - start_time
                self.environment.events.request.fire(
                    request_type="POST",
                    name="/stream_messages end",
                    response_time=total_time * 1000,  # Convert to milliseconds
                    response_length=len(json.dumps(events)),
                    response=response,
                    context={},
                )
            else:
                response.failure(f"Unexpected status code: {response.status_code}")
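The Locust user above streams from the reasoning engine's `:streamQuery` endpoint and reports end-to-end latency as a custom event. As a hedged aside (not part of the package), a single-request smoke test built from the same pieces could look like the following; it reuses `deployment_metadata.json` and the `_AUTH_TOKEN` variable exported in the README above, and keeps the same URL construction and payload shape.

```python
import json
import os

import requests

# Same engine ID parsing as the Locust script above.
with open("deployment_metadata.json") as f:
    engine_resource = json.load(f)["remote_agent_engine_id"]
_, project, _, location, _, engine = engine_resource.split("/")

url = (
    f"https://{location}-aiplatform.googleapis.com/v1beta1/projects/{project}"
    f"/locations/{location}/reasoningEngines/{engine}:streamQuery"
)
payload = {"input": {"input": {"messages": [{"type": "human", "content": "Hello, AI!"}]}}}

resp = requests.post(
    url,
    params={"alt": "sse"},
    headers={"Authorization": f"Bearer {os.environ['_AUTH_TOKEN']}"},
    json=payload,
    stream=True,
    timeout=60,
)
resp.raise_for_status()
for line in resp.iter_lines():
    if line:
        # Mirrors the per-line JSON decoding done by the Locust task above.
        print(json.loads(line))
```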
- src/deployment_targets/agent_engine/tests/unit/test_dummy.py +22 -0
@@ -0,0 +1,22 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
You can add your unit tests here.
"""


def test_dummy() -> None:
    """Placeholder - replace with real tests."""
    assert 1 == 1
- src/deployment_targets/cloud_run/Dockerfile +29 -0
@@ -0,0 +1,29 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

FROM python:3.11-slim

RUN pip install --no-cache-dir uv

WORKDIR /code

COPY ./pyproject.toml ./README.md ./uv.lock* ./

COPY ./app ./app

RUN uv sync --frozen

EXPOSE 8080

CMD ["uv", "run", "uvicorn", "app.server:app", "--host", "0.0.0.0", "--port", "8080"]
- src/deployment_targets/cloud_run/app/server.py +128 -0
@@ -0,0 +1,128 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import os
from collections.abc import Generator

from fastapi import FastAPI
from fastapi.responses import RedirectResponse, StreamingResponse
from google.cloud import logging as google_cloud_logging
from langchain_core.runnables import RunnableConfig
from traceloop.sdk import Instruments, Traceloop

from app.agent import agent
from app.utils.tracing import CloudTraceLoggingSpanExporter
from app.utils.typing import Feedback, InputChat, Request, dumps, ensure_valid_config

# Initialize FastAPI app and logging
app = FastAPI(
    title="{{cookiecutter.project_name}}",
    description="API for interacting with the Agent {{cookiecutter.project_name}}",
)
logging_client = google_cloud_logging.Client()
logger = logging_client.logger(__name__)

# Initialize Telemetry
try:
    Traceloop.init(
        app_name=app.title,
        disable_batch=False,
        exporter=CloudTraceLoggingSpanExporter(),
        instruments={% raw %}{{% endraw %}{%- for instrumentation in cookiecutter.otel_instrumentations %}{{ instrumentation }}{% if not loop.last %}, {% endif %}{%- endfor %}{% raw %}}{% endraw %},
    )
except Exception as e:
    logging.error("Failed to initialize Telemetry: %s", str(e))


def set_tracing_properties(config: RunnableConfig) -> None:
    """Sets tracing association properties for the current request.

    Args:
        config: Optional RunnableConfig containing request metadata
    """
    Traceloop.set_association_properties(
        {
            "log_type": "tracing",
            "run_id": str(config.get("run_id", "None")),
            "user_id": config["metadata"].pop("user_id", "None"),
            "session_id": config["metadata"].pop("session_id", "None"),
            "commit_sha": os.environ.get("COMMIT_SHA", "None"),
        }
    )


def stream_messages(
    input: InputChat,
    config: RunnableConfig | None = None,
) -> Generator[str, None, None]:
    """Stream events in response to an input chat.

    Args:
        input: The input chat messages
        config: Optional configuration for the runnable

    Yields:
        JSON serialized event data
    """
    config = ensure_valid_config(config=config)
    set_tracing_properties(config)
    input_dict = input.model_dump()

    for data in agent.stream(input_dict, config=config, stream_mode="messages"):
        yield dumps(data) + "\n"


# Routes
@app.get("/", response_class=RedirectResponse)
def redirect_root_to_docs() -> RedirectResponse:
    """Redirect the root URL to the API documentation."""
    return RedirectResponse(url="/docs")


@app.post("/feedback")
def collect_feedback(feedback: Feedback) -> dict[str, str]:
    """Collect and log feedback.

    Args:
        feedback: The feedback data to log

    Returns:
        Success message
    """
    logger.log_struct(feedback.model_dump(), severity="INFO")
    return {"status": "success"}


@app.post("/stream_messages")
def stream_chat_events(request: Request) -> StreamingResponse:
    """Stream chat events in response to an input request.

    Args:
        request: The chat request containing input and config

    Returns:
        Streaming response of chat events
    """
    return StreamingResponse(
        stream_messages(input=request.input, config=request.config),
        media_type="text/event-stream",
    )


# Main execution
if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
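The FastAPI app above streams newline-delimited JSON events from `/stream_messages` and logs structured feedback on `/feedback`. As a small illustrative sketch (not one of the packaged files), calling the streaming route locally with `requests`, assuming the server is running on port 8000 and using the payload shape exercised by the integration tests later in this diff:

```python
import json

import requests

# Payload shape mirrors test_server_e2e.py further below.
data = {
    "input": {"messages": [{"type": "human", "content": "Hello, AI!"}]},
    "config": {"metadata": {"user_id": "test-user", "session_id": "test-session"}},
}

with requests.post(
    "http://127.0.0.1:8000/stream_messages", json=data, stream=True, timeout=60
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:
            # Each streamed event is a [message, metadata] pair, per the integration test.
            message, metadata = json.loads(line)
            print(message.get("kwargs", {}).get("content", ""))
```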
- src/deployment_targets/cloud_run/deployment/terraform/artifact_registry.tf +22 -0
@@ -0,0 +1,22 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

resource "google_artifact_registry_repository" "repo-artifacts-genai" {
  location      = var.region
  repository_id = var.artifact_registry_repo_name
  description   = "Repo for Generative AI applications"
  format        = "DOCKER"
  project       = var.cicd_runner_project_id
  depends_on    = [resource.google_project_service.cicd_services, resource.google_project_service.shared_services]
}
- src/deployment_targets/cloud_run/deployment/terraform/dev/service_accounts.tf +20 -0
@@ -0,0 +1,20 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

resource "google_service_account" "cloud_run_app_sa" {
  account_id   = var.cloud_run_app_sa_name
  display_name = "Cloud Run Generative AI app SA"
  project      = var.dev_project_id
  depends_on   = [resource.google_project_service.services]
}
- src/deployment_targets/cloud_run/tests/integration/test_server_e2e.py +192 -0
@@ -0,0 +1,192 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json
import logging
import os
import subprocess
import sys
import threading
import time
import uuid
from collections.abc import Iterator
from typing import Any

import pytest
import requests
from requests.exceptions import RequestException

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

BASE_URL = "http://127.0.0.1:8000/"
STREAM_URL = BASE_URL + "stream_messages"
FEEDBACK_URL = BASE_URL + "feedback"

HEADERS = {"Content-Type": "application/json"}


def log_output(pipe: Any, log_func: Any) -> None:
    """Log the output from the given pipe."""
    for line in iter(pipe.readline, ""):
        log_func(line.strip())


def start_server() -> subprocess.Popen[str]:
    """Start the FastAPI server using subprocess and log its output."""
    command = [
        sys.executable,
        "-m",
        "uvicorn",
        "app.server:app",
        "--host",
        "0.0.0.0",
        "--port",
        "8000",
    ]
    env = os.environ.copy()
    env["INTEGRATION_TEST"] = "TRUE"
    process = subprocess.Popen(
        command,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
        bufsize=1,
        env=env,
    )

    # Start threads to log stdout and stderr in real-time
    threading.Thread(
        target=log_output, args=(process.stdout, logger.info), daemon=True
    ).start()
    threading.Thread(
        target=log_output, args=(process.stderr, logger.error), daemon=True
    ).start()

    return process


def wait_for_server(timeout: int = 60, interval: int = 1) -> bool:
    """Wait for the server to be ready."""
    start_time = time.time()
    while time.time() - start_time < timeout:
        try:
            response = requests.get("http://127.0.0.1:8000/docs", timeout=10)
            if response.status_code == 200:
                logger.info("Server is ready")
                return True
        except RequestException:
            pass
        time.sleep(interval)
    logger.error(f"Server did not become ready within {timeout} seconds")
    return False


@pytest.fixture(scope="session")
def server_fixture(request: Any) -> Iterator[subprocess.Popen[str]]:
    """Pytest fixture to start and stop the server for testing."""
    logger.info("Starting server process")
    server_process = start_server()
    if not wait_for_server():
        pytest.fail("Server failed to start")
    logger.info("Server process started")

    def stop_server() -> None:
        logger.info("Stopping server process")
        server_process.terminate()
        server_process.wait()
        logger.info("Server process stopped")

    request.addfinalizer(stop_server)
    yield server_process


def test_chat_stream(server_fixture: subprocess.Popen[str]) -> None:
    """Test the chat stream functionality."""
    logger.info("Starting chat stream test")

    data = {
        "input": {
            "messages": [
                {"type": "human", "content": "Hello, AI!"},
                {"type": "ai", "content": "Hello!"},
                {"type": "human", "content": "What is the weather in NY?"},
            ]
        },
        "config": {"metadata": {"user_id": "test-user", "session_id": "test-session"}},
    }

    response = requests.post(
        STREAM_URL, headers=HEADERS, json=data, stream=True, timeout=10
    )
    assert response.status_code == 200

    events = [json.loads(line) for line in response.iter_lines() if line]
    assert events, "No events received from stream"

    # Verify each event is a tuple of message and metadata
    for event in events:
        assert isinstance(event, list), "Event should be a list"
        assert len(event) == 2, "Event should contain message and metadata"
        message, _ = event

        # Verify message structure
        assert isinstance(message, dict), "Message should be a dictionary"
        assert message["type"] == "constructor"
        assert "kwargs" in message, "Constructor message should have kwargs"

    # Verify at least one message has content
    has_content = False
    for event in events:
        message = event[0]
        if message.get("type") == "constructor" and "content" in message["kwargs"]:
            has_content = True
            break
    assert has_content, "At least one message should have content"


def test_chat_stream_error_handling(server_fixture: subprocess.Popen[str]) -> None:
    """Test the chat stream error handling."""
    logger.info("Starting chat stream error handling test")

    data = {
        "input": {"messages": [{"type": "invalid_type", "content": "Cause an error"}]}
    }
    response = requests.post(
        STREAM_URL, headers=HEADERS, json=data, stream=True, timeout=10
    )

    assert response.status_code == 422, (
        f"Expected status code 422, got {response.status_code}"
    )
    logger.info("Error handling test completed successfully")


def test_collect_feedback(server_fixture: subprocess.Popen[str]) -> None:
    """
    Test the feedback collection endpoint (/feedback) to ensure it properly
    logs the received feedback.
    """
    # Create sample feedback data
    feedback_data = {
        "score": 4,
        "run_id": str(uuid.uuid4()),
        "text": "Great response!",
    }

    response = requests.post(
        FEEDBACK_URL, json=feedback_data, headers=HEADERS, timeout=10
    )
    assert response.status_code == 200
- src/deployment_targets/cloud_run/tests/load_test/.results/.placeholder +0 -0
File without changes
- src/deployment_targets/cloud_run/tests/load_test/README.md +79 -0
@@ -0,0 +1,79 @@
# Robust Load Testing for Generative AI Applications

This directory provides a comprehensive load testing framework for your Generative AI application, leveraging the power of [Locust](http://locust.io), a leading open-source load testing tool.

## Local Load Testing

Follow these steps to execute load tests on your local machine:

**1. Start the FastAPI Server:**

Launch the FastAPI server in a separate terminal:

```bash
poetry run uvicorn app.server:app --host 0.0.0.0 --port 8000 --reload
```

**2. (In another tab) Create a Virtual Environment with Locust:**
Using another terminal tab is suggested to avoid conflicts with the existing application's Python environment.

```commandline
python3 -m venv locust_env && source locust_env/bin/activate && pip install locust==2.31.1
```

**3. Execute the Load Test:**
Trigger the Locust load test with the following command:

```bash
locust -f tests/load_test/load_test.py \
  -H http://127.0.0.1:8000 \
  --headless \
  -t 30s -u 60 -r 2 \
  --csv=tests/load_test/.results/results \
  --html=tests/load_test/.results/report.html
```

This command initiates a 30-second load test, simulating 2 users spawning per second, reaching a maximum of 60 concurrent users.

**Results:**

Comprehensive CSV and HTML reports detailing the load test performance will be generated and saved in the `tests/load_test/.results` directory.

## Remote Load Testing (Targeting Cloud Run)

This framework also supports load testing against remote targets, such as a staging Cloud Run instance. This process is seamlessly integrated into the Continuous Delivery pipeline via Cloud Build, as defined in the [pipeline file](cicd/cd/staging.yaml).

**Prerequisites:**

- **Dependencies:** Ensure your environment has the same dependencies required for local testing.
- **Cloud Run Invoker Role:** You'll need the `roles/run.invoker` role to invoke the Cloud Run service.

**Steps:**

**1. Obtain Cloud Run Service URL:**

Navigate to the Cloud Run console, select your service, and copy the URL displayed at the top. Set this URL as an environment variable:

```bash
export RUN_SERVICE_URL=https://your-cloud-run-service-url.run.app
```

**2. Obtain ID Token:**

Retrieve the ID token required for authentication:

```bash
export _ID_TOKEN=$(gcloud auth print-identity-token -q)
```

**3. Execute the Load Test:**
The following command executes the same load test parameters as the local test but targets your remote Cloud Run instance.

```bash
poetry run locust -f tests/load_test/load_test.py \
  -H $RUN_SERVICE_URL \
  --headless \
  -t 30s -u 60 -r 2 \
  --csv=tests/load_test/.results/results \
  --html=tests/load_test/.results/report.html
```
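As a final aside (not one of the packaged files), before launching a full Locust run against the remote target it can help to confirm that the URL and token work for a single request. A hedged Python sketch that reuses the `RUN_SERVICE_URL` and `_ID_TOKEN` variables exported in the steps above, with the same payload shape used by the package's tests:

```python
import json
import os

import requests

# Assumption: RUN_SERVICE_URL and _ID_TOKEN were exported as shown in the README steps.
url = os.environ["RUN_SERVICE_URL"].rstrip("/") + "/stream_messages"
headers = {"Authorization": f"Bearer {os.environ['_ID_TOKEN']}"}
data = {
    "input": {"messages": [{"type": "human", "content": "Hello, AI!"}]},
    "config": {"metadata": {"user_id": "test-user", "session_id": "test-session"}},
}

with requests.post(url, headers=headers, json=data, stream=True, timeout=60) as resp:
    print("status:", resp.status_code)
    for line in resp.iter_lines():
        if line:
            print(json.loads(line))
```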