loki-mode 6.71.1 → 6.72.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +9 -1
- package/SKILL.md +2 -2
- package/VERSION +1 -1
- package/autonomy/hooks/migration-hooks.sh +26 -0
- package/autonomy/loki +429 -92
- package/autonomy/run.sh +219 -38
- package/dashboard/__init__.py +1 -1
- package/dashboard/server.py +101 -19
- package/docs/INSTALLATION.md +20 -11
- package/docs/bug-fixes/agent-01-cli-fixes.md +101 -0
- package/docs/bug-fixes/agent-02-purplelab-fixes.md +88 -0
- package/docs/bug-fixes/agent-03-dashboard-fixes.md +119 -0
- package/docs/bug-fixes/agent-04-memory-fixes.md +105 -0
- package/docs/bug-fixes/agent-05-provider-fixes.md +86 -0
- package/docs/bug-fixes/agent-06-integration-fixes.md +101 -0
- package/docs/bug-fixes/agent-07-dash-run-fixes.md +101 -0
- package/docs/bug-fixes/agent-08-docker-fixes.md +164 -0
- package/docs/bug-fixes/agent-09-e2e-build-fixes.md +69 -0
- package/docs/bug-fixes/agent-10-e2e-fullstack-fixes.md +102 -0
- package/docs/bug-fixes/agent-11-e2e-session-fixes.md +70 -0
- package/docs/bug-fixes/agent-12-scenario-fixes.md +120 -0
- package/docs/bug-fixes/agent-13-enterprise-fixes.md +143 -0
- package/docs/bug-fixes/agent-14-uat-newuser-fixes.md +88 -0
- package/docs/bug-fixes/agent-15-uat-poweruser-fixes.md +132 -0
- package/docs/bug-fixes/agent-19-code-review.md +316 -0
- package/docs/bug-fixes/agent-20-architecture-review.md +331 -0
- package/docs/competitive/bolt-new-analysis.md +579 -0
- package/docs/competitive/emergence-others-analysis.md +605 -0
- package/docs/competitive/replit-lovable-analysis.md +622 -0
- package/docs/test-scenarios/edge-cases.md +813 -0
- package/docs/test-scenarios/enterprise-scenarios.md +732 -0
- package/mcp/__init__.py +1 -1
- package/mcp/server.py +49 -5
- package/memory/consolidation.py +33 -0
- package/memory/embeddings.py +10 -1
- package/memory/engine.py +83 -38
- package/memory/retrieval.py +36 -0
- package/memory/storage.py +56 -4
- package/memory/token_economics.py +14 -2
- package/memory/vector_index.py +36 -7
- package/package.json +1 -1
- package/providers/gemini.sh +89 -2
- package/templates/README.md +1 -1
- package/templates/cli-tool.md +30 -0
- package/templates/dashboard.md +4 -0
- package/templates/data-pipeline.md +4 -0
- package/templates/discord-bot.md +47 -0
- package/templates/game.md +4 -0
- package/templates/microservice.md +4 -0
- package/templates/npm-library.md +4 -0
- package/templates/rest-api-auth.md +50 -20
- package/templates/rest-api.md +15 -0
- package/templates/saas-starter.md +1 -1
- package/templates/slack-bot.md +36 -0
- package/templates/static-landing-page.md +9 -1
- package/templates/web-scraper.md +4 -0
- package/web-app/dist/assets/Badge-CeBkFjo6.js +1 -0
- package/web-app/dist/assets/Button-yuhqo8Fq.js +1 -0
- package/web-app/dist/assets/{Card-B1bV4syB.js → Card-BG17vsX0.js} +1 -1
- package/web-app/dist/assets/{HomePage-CZTV6Nea.js → HomePage-BMSQ7Apj.js} +3 -3
- package/web-app/dist/assets/{LoginPage-D4UdURJc.js → LoginPage-aH_6iolg.js} +1 -1
- package/web-app/dist/assets/{NotFoundPage-CCLSeL6j.js → NotFoundPage-Di8cNtB1.js} +1 -1
- package/web-app/dist/assets/ProjectPage-BtRssmw9.js +285 -0
- package/web-app/dist/assets/ProjectsPage-B-FTFagc.js +6 -0
- package/web-app/dist/assets/{SettingsPage-Xuv8EfAg.js → SettingsPage-DIJPBla4.js} +1 -1
- package/web-app/dist/assets/TeamsPage--19fNX7w.js +36 -0
- package/web-app/dist/assets/TemplatesPage-ChUQNOOv.js +11 -0
- package/web-app/dist/assets/TerminalOutput-Dwrzecyl.js +31 -0
- package/web-app/dist/assets/activity-BNRWeu9N.js +6 -0
- package/web-app/dist/assets/{arrow-left-CaGtolHc.js → arrow-left-Ce6g1_YE.js} +1 -1
- package/web-app/dist/assets/circle-alert-LIndawHL.js +11 -0
- package/web-app/dist/assets/clock-Bpj4VPlP.js +6 -0
- package/web-app/dist/assets/{external-link-CazyUyav.js → external-link-BhhdF0iQ.js} +1 -1
- package/web-app/dist/assets/folder-open-CM2LgfxI.js +11 -0
- package/web-app/dist/assets/index-8-KpWWq7.css +1 -0
- package/web-app/dist/assets/index-kPDW4e_b.js +236 -0
- package/web-app/dist/assets/lock-sAk3Xe54.js +16 -0
- package/web-app/dist/assets/search-CR-2i9by.js +6 -0
- package/web-app/dist/assets/server-DuFh4ymA.js +26 -0
- package/web-app/dist/assets/trash-2-BmkkT8V_.js +11 -0
- package/web-app/dist/index.html +2 -2
- package/web-app/server.py +1321 -53
- package/web-app/dist/assets/Badge-CBUx2PjL.js +0 -6
- package/web-app/dist/assets/Button-DsRiznlh.js +0 -21
- package/web-app/dist/assets/ProjectPage-D0w_X9tG.js +0 -237
- package/web-app/dist/assets/ProjectsPage-ByYxDlKC.js +0 -16
- package/web-app/dist/assets/TemplatesPage-BKWN07mc.js +0 -1
- package/web-app/dist/assets/TerminalOutput-Dj98V8Z-.js +0 -51
- package/web-app/dist/assets/clock-C_CDmobx.js +0 -11
- package/web-app/dist/assets/index-D452pFGl.css +0 -1
- package/web-app/dist/assets/index-Df4_kgLY.js +0 -196
@@ -0,0 +1,732 @@

# Enterprise Scenarios - Agent 13

50 enterprise-grade scenarios covering team workflows, git integration, CI/CD,
security/compliance, and production deployment. Each scenario references actual
codebase paths and infrastructure definitions verified during authoring.

---

## 1. Team Workflow Scenarios

### TW-01: Team Lead Assigns Project via Dashboard API

```
GIVEN the dashboard is running with LOKI_ENTERPRISE_AUTH=true
AND a team lead has a token with role "admin" (scopes: ["*"])
AND a developer has a token with role "operator" (scopes: ["control","read","write"])
WHEN the team lead POSTs to /api/projects with a PRD and assigns the developer
THEN the project is created (HTTP 201)
AND the developer can view it via GET /api/projects (scope "read")
AND the developer can start a build via POST /api/control/start (scope "control")
AND the project appears in the developer dashboard within 5 seconds
```
Source: `dashboard/server.py:781`, `dashboard/auth.py:42-47`

### TW-02: Multiple Developers Iterating via Concurrent Sessions

```
GIVEN two developers (Dev-A, Dev-B) each have operator tokens
AND both have cloned the same repository
WHEN Dev-A runs "loki start --session dev-a ./feature-a.md"
AND Dev-B runs "loki start --session dev-b ./feature-b.md"
THEN both sessions appear in GET /api/status under the "sessions" list
AND each session has a unique PID under .loki/sessions/<id>/loki.pid
AND Dev-A pause/stop only affects session "dev-a"
AND Dev-B pause/stop only affects session "dev-b"
```
Source: `dashboard/server.py:638-683` (concurrent session discovery)
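
The per-session PID layout in TW-02 lends itself to a simple discovery loop. A minimal sketch (the `discover_sessions` helper is hypothetical; the real discovery code lives in `dashboard/server.py:638-683`):

```python
import json
import tempfile
from pathlib import Path

def discover_sessions(loki_dir: Path) -> list[dict]:
    """Collect one entry per .loki/sessions/<id>/loki.pid file."""
    sessions = []
    for pid_file in sorted(loki_dir.glob("sessions/*/loki.pid")):
        sessions.append({
            "session_id": pid_file.parent.name,   # directory name is the session id
            "pid": int(pid_file.read_text().strip()),
        })
    return sessions

# Simulate two concurrent sessions (dev-a, dev-b) in a throwaway .loki dir.
root = Path(tempfile.mkdtemp()) / ".loki"
for name, pid in [("dev-a", 1111), ("dev-b", 2222)]:
    d = root / "sessions" / name
    d.mkdir(parents=True)
    (d / "loki.pid").write_text(str(pid))

found = discover_sessions(root)
print(json.dumps(found))
```

Because each session owns its own PID file, stopping one session (killing one PID) cannot affect the other, which is the isolation property the scenario asserts.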

### TW-03: Code Review Within Loki Quality Gates

```
GIVEN a PR is opened on a repository with loki-ci-example.yml workflow
AND the PR changes 5 files across 2 modules
WHEN GitHub Actions triggers the "Loki CI Quality Gate" job
THEN "loki ci --pr --fail-on critical,high --format markdown" executes
AND critical/high severity issues block the PR merge
AND a markdown report is posted as a PR comment
AND the CI exits non-zero if any blocking issue is found
```
Source: `.github/workflows/loki-ci-example.yml:41-44`

### TW-04: Handoff Between Human and AI Coding

```
GIVEN a loki autonomous session is running (iteration 5 of 20)
AND the human developer notices a design issue
WHEN the developer creates the .loki/PAUSE signal file
THEN the session pauses after the current RARV iteration completes
AND status shows "paused" via GET /api/status
WHEN the developer modifies code manually and removes .loki/PAUSE
THEN the session resumes from the next iteration
AND the modified files are picked up in the next build_prompt() call
```
Source: `autonomy/run.sh:7897` (check_human_intervention)

### TW-05: Project Template Sharing Across Team

```
GIVEN the team uses loki templates (13 types: saas, cli, discord-bot, etc.)
AND the team lead wants to standardize on a custom template
WHEN the lead creates a PRD template at templates/custom-team.md
AND commits it to the shared repository
THEN all team members can run "loki start --template custom-team ./my-prd.md"
AND the template is listed via "loki template list"
AND the template variables are interpolated correctly
```
Source: `templates/` directory (13 templates)

### TW-06: Team-Wide Quality Gate Configuration

```
GIVEN the team configures quality gates via .loki/config/quality-gates.json
AND the config sets: { "coverage_threshold": 85, "max_critical": 0 }
WHEN any team member runs "loki audit test"
THEN coverage below 85% fails the gate (exit 1)
AND zero critical issues are tolerated
AND the 3-reviewer parallel system is invoked for code changes
AND anti-sycophancy checks activate on unanimous approval
```
Source: `skills/quality-gates.md`, `autonomy/run.sh:4935` (run_code_review)
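
A minimal sketch of how a gate config like the one in TW-06 could be evaluated (the `evaluate_gates` helper is illustrative only, not the actual `loki audit test` implementation):

```python
import json

def evaluate_gates(config: dict, coverage: float, critical_issues: int) -> bool:
    """Pass only if coverage meets the threshold and critical issues stay under the cap."""
    if coverage < config["coverage_threshold"]:
        return False
    if critical_issues > config["max_critical"]:
        return False
    return True

# The exact JSON fragment quoted in the scenario.
config = json.loads('{ "coverage_threshold": 85, "max_critical": 0 }')

low = evaluate_gates(config, coverage=84.9, critical_issues=0)    # coverage below 85 -> fail
high = evaluate_gates(config, coverage=91.0, critical_issues=0)   # passes both checks
print(low, high)
```

With `max_critical` set to 0, a single critical finding fails the gate regardless of coverage, matching "zero critical issues are tolerated."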

### TW-07: Shared Provider API Key Management via K8s Secrets

```
GIVEN the team deploys Autonomi via Helm chart
AND API keys are stored in an existing Kubernetes Secret "team-api-keys"
WHEN the Helm chart is installed with:
  secrets.existingSecret=team-api-keys
THEN the controlplane and worker deployments mount "team-api-keys" as envFrom
AND no Secret resource is created by the chart (template is skipped)
AND all pods receive ANTHROPIC_API_KEY, OPENAI_API_KEY, GOOGLE_API_KEY
```
Source: `deploy/helm/autonomi/templates/secret.yaml:1`, `deploy/helm/autonomi/values.yaml:172`

### TW-08: Team Metrics Dashboard Usage

```
GIVEN the dashboard is running with Prometheus scrape annotations
AND the production values enable:
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "57374"
    prometheus.io/path: "/metrics"
WHEN Prometheus scrapes /metrics on port 57374
THEN the response includes RARV iteration counts, task completion rates, and API latency
AND Grafana dashboards can visualize team velocity
```
Source: `deploy/helm/autonomi/values-production.yaml:16-19`

### TW-09: Onboarding New Team Member to Existing Project

```
GIVEN an admin has a running Autonomi deployment
AND a new developer needs read-only access
WHEN the admin calls POST /api/enterprise/tokens with:
  { "name": "new-dev", "role": "viewer" }
THEN a token is generated with scopes: ["read"]
AND the new developer can GET /api/status, GET /api/projects
AND POST /api/control/pause returns HTTP 403 (insufficient permissions)
AND POST /api/projects returns HTTP 403 (requires "control" scope)
```
Source: `dashboard/auth.py:42-47`, `dashboard/server.py:1719`

### TW-10: Team Lead Monitoring Multiple Active Builds

```
GIVEN the team lead has admin access to the dashboard
AND three sessions are running across two worktrees
WHEN the lead calls GET /api/status
THEN the response includes sessions: [{session_id, pid, started_at, worktree}]
AND the active_sessions count equals 3
AND each session shows its current RARV phase and iteration number
WHEN the lead calls POST /api/control/stop with session_id="session-2"
THEN only session-2 is stopped; sessions 1 and 3 continue
```
Source: `dashboard/server.py:638-683`

### TW-11: Role-Based Access Enforcement (Admin/Operator/Viewer/Auditor)

```
GIVEN four tokens exist with roles: admin, operator, viewer, auditor
WHEN the viewer token calls POST /api/control/pause
THEN HTTP 403 is returned (viewer has only "read" scope)
WHEN the operator token calls POST /api/control/pause
THEN HTTP 200 is returned (operator has "control" scope)
WHEN the auditor token calls GET /api/enterprise/audit
THEN HTTP 200 is returned (auditor has "audit" scope)
WHEN the auditor token calls POST /api/enterprise/tokens
THEN HTTP 403 is returned (auditor lacks "admin" scope)
WHEN the admin token calls POST /api/enterprise/tokens
THEN HTTP 200 is returned (admin has "*" scope, which grants all)
```
Source: `dashboard/auth.py:42-57` (ROLES and _SCOPE_HIERARCHY)
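
The role/scope behavior exercised in TW-01, TW-09, and TW-11 can be modeled in a few lines. A sketch assuming the scope sets quoted in those scenarios (the auditor's "read" scope is an assumption here; the ROLES table in `dashboard/auth.py` is authoritative):

```python
# Role -> scope mapping as described in the scenarios; "*" grants every scope.
ROLES = {
    "admin": ["*"],
    "operator": ["control", "read", "write"],
    "viewer": ["read"],
    "auditor": ["read", "audit"],   # "read" assumed; "audit" is stated in TW-11
}

def has_scope(role: str, required: str) -> bool:
    scopes = ROLES.get(role, [])
    return "*" in scopes or required in scopes

viewer_pause = has_scope("viewer", "control")     # -> HTTP 403 in TW-11
operator_pause = has_scope("operator", "control") # -> HTTP 200
auditor_audit = has_scope("auditor", "audit")     # -> HTTP 200
auditor_admin = has_scope("auditor", "admin")     # -> HTTP 403
admin_admin = has_scope("admin", "admin")         # "*" grants all -> HTTP 200
print(viewer_pause, operator_pause, auditor_audit, auditor_admin, admin_admin)
```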

### TW-12: Team Notification Preferences via Slack/Teams

```
GIVEN the deployment has LOKI_SLACK_BOT_TOKEN configured
AND LOKI_TEAMS_WEBHOOK_URL is also configured
WHEN a build completes successfully
THEN a Slack notification is sent via the bot token
AND a Teams notification is sent via the webhook URL
WHEN a team member calls PUT /api/notifications/triggers with custom thresholds
THEN future notifications respect the updated trigger configuration
```
Source: `deploy/docker-compose/docker-compose.yml:34-38`, `dashboard/server.py:3387`

---

## 2. Git Integration Scenarios

### GI-01: Branch Creation from Loki Parallel Workflows

```
GIVEN a PRD specifies 3 parallel work streams
AND the repository has a clean main branch
WHEN "loki start --parallel" is invoked
THEN 3 git worktrees are created under .claude/worktrees/
AND each worktree has a branch named loki/stream-<n>
AND branches do not conflict with each other
AND "git worktree list" shows all 3 worktrees
```
Source: `skills/parallel-workflows.md`

### GI-02: Meaningful AI Commit Messages

```
GIVEN a loki session has completed a RARV iteration
AND the iteration modified 4 files
WHEN the autonomous system commits changes
THEN the commit message follows the format: "<type>: <description>"
AND the type is one of: feat, fix, refactor, test, docs, chore
AND the message does not contain emojis (hard rule from CLAUDE.md)
AND the commit is signed with the configured user identity
```
Source: `CLAUDE.md` (Git Commit Workflow section)
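
The GI-02 commit-message rule is easy to check mechanically. A hedged sketch (the emoji check below is a cheap ASCII test for illustration, not the exact rule from CLAUDE.md):

```python
import re

ALLOWED_TYPES = {"feat", "fix", "refactor", "test", "docs", "chore"}

def valid_commit_message(msg: str) -> bool:
    """Check "<type>: <description>" with an allowed type and no emojis."""
    m = re.match(r"^([a-z]+): (.+)$", msg)
    if not m or m.group(1) not in ALLOWED_TYPES:
        return False
    # Reject any non-ASCII character as a crude emoji guard.
    return msg.isascii()

ok = valid_commit_message("fix: handle expired tokens in validate_token")
no_type = valid_commit_message("Fixed some stuff")
emoji = valid_commit_message("feat: add dashboard \U0001F680")
print(ok, no_type, emoji)
```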

### GI-03: PR Creation with Description via GitHub Integration

```
GIVEN the loki-enterprise.yml workflow is triggered by an issue labeled "loki-mode"
AND the issue contains a feature request
WHEN Loki Mode completes execution
THEN the report job posts results back to the issue
AND a PR is created with a description summarizing the changes
AND the PR references the original issue number
```
Source: `.github/workflows/loki-enterprise.yml:165-226`

### GI-04: Merge Conflict Detection During Parallel Streams

```
GIVEN two parallel worktrees (stream-1, stream-2) both modify src/index.ts
WHEN the auto-merge phase runs after both streams complete
THEN a merge conflict is detected
AND the system attempts 3-way merge resolution
AND if resolution fails, the conflict is flagged for human review
AND the human receives a notification via the configured channel
```
Source: `skills/parallel-workflows.md`

### GI-05: Git Hooks Interaction with Loki CI

```
GIVEN the repository has a pre-commit hook running ESLint
AND Loki Mode generates code with a lint warning
WHEN Loki Mode attempts to commit
THEN the pre-commit hook runs and may fail
AND Loki Mode detects the hook failure (non-zero exit)
AND Loki Mode fixes the lint issue in the next RARV iteration
AND a NEW commit is created (never --amend on hook failure)
```
Source: `CLAUDE.md` (Git Operations section)

### GI-06: Monorepo Support with Multiple Package Detection

```
GIVEN a monorepo with packages/frontend and packages/backend
AND the PRD requests changes to both packages
WHEN "loki start ./prd.md" runs detect_complexity()
THEN the complexity is detected as "complex" (multi-package)
AND the system creates separate task queue entries per package
AND quality gates run independently per package
AND the final commit includes changes from all packages
```
Source: `autonomy/run.sh:1182` (detect_complexity)

### GI-07: Submodule Handling During Autonomous Execution

```
GIVEN the repository has git submodules defined in .gitmodules
WHEN Loki Mode clones/initializes the workspace
THEN submodules are initialized with "git submodule update --init"
AND the submodule code is available for analysis
AND Loki Mode does NOT modify submodule contents
AND commits do not accidentally include submodule pointer changes
```

### GI-08: Large File (LFS) Awareness

```
GIVEN the repository uses Git LFS for *.bin and *.model files
AND the PRD generates a new ML model file
WHEN Loki Mode creates the file
THEN LFS tracking is respected (file matches .gitattributes patterns)
AND "git lfs ls-files" shows the new file
AND the commit does not store the binary in the git object store
```

### GI-09: Branch Protection Rules Interaction

```
GIVEN the main branch has protection rules:
  - Require PR reviews (1 reviewer)
  - Require status checks to pass
WHEN Loki Mode completes and attempts to push to main
THEN the push is rejected (branch protection blocks direct push)
AND Loki Mode creates a PR from the working branch instead
AND the CI status checks run on the PR
```

### GI-10: Fork Safety in Enterprise Workflow

```
GIVEN the loki-enterprise.yml workflow is triggered by a PR review
AND the PR originates from a forked repository
WHEN the "Verify actor trust for fork PRs" step runs
THEN the workflow exits with error code 1
AND the message "Skipping execution: pull_request_review from a fork repository is not trusted" is logged
AND no Loki Mode execution occurs (prevents untrusted code execution)
```
Source: `.github/workflows/loki-enterprise.yml:107-120`

---

## 3. CI/CD Scenarios

### CI-01: GitHub Actions Triggered by Loki Push

```
GIVEN the repository has the release.yml workflow
AND a version bump is committed to the VERSION file on main
WHEN the push triggers the Release workflow
THEN a git tag "v<VERSION>" is created
AND a GitHub Release is created with 5 artifacts:
  - loki-mode-<ver>.zip (Claude.ai upload)
  - loki-mode-<ver>.skill (skill file)
  - loki-mode-api-<ver>.zip (API package)
  - loki-mode-claude-code-<ver>.zip (full package)
  - loki-mode-claude-code-<ver>.tar.gz (full package)
AND downstream jobs trigger: npm, Docker, VSCode, Homebrew, Slack
```
Source: `.github/workflows/release.yml:1-148`

### CI-02: Test Results Across Multiple Runtimes

```
GIVEN the test.yml workflow triggers on push to main or PR
WHEN the workflow runs
THEN Node.js tests execute on versions 18, 20, 22 (matrix strategy)
AND Python tests execute on versions 3.10, 3.11, 3.12, 3.13
AND shell tests run via tests/run-all-tests.sh
AND Helm chart linting passes via "helm lint deploy/helm/autonomi"
AND dashboard frontend build verification completes (output > 100KB)
AND all jobs use fail-fast: false (all versions tested even if one fails)
```
Source: `.github/workflows/test.yml`

### CI-03: Docker Build and Multi-Tag Push

```
GIVEN the release workflow's publish-docker job runs
AND DOCKERHUB_USERNAME and DOCKERHUB_TOKEN secrets are configured
WHEN the job builds the Docker image
THEN the dashboard frontend is rebuilt first (npm ci && npm run build:all)
AND dashboard/static/index.html existence is verified
AND the image is pushed with three tags:
  - asklokesh/loki-mode:v<VERSION>
  - asklokesh/loki-mode:<VERSION>
  - asklokesh/loki-mode:latest
AND the Docker Hub description is updated from DOCKER_README.md
```
Source: `.github/workflows/release.yml:333-378`

### CI-04: Rollback on Failed Deployment

```
GIVEN a Helm-deployed production environment with values-production.yaml
AND the current deployment is healthy (readiness probe passing)
WHEN a new version is deployed with "helm upgrade autonomi ./deploy/helm/autonomi"
AND the new version fails readiness probes (3 consecutive failures)
THEN the rollout halts and the previous ReplicaSet keeps serving traffic
AND the PodDisruptionBudget ensures minAvailable: 1 is maintained
AND the failed deployment is visible in "helm history autonomi"
```
Source: `deploy/helm/autonomi/values-ha.yaml:61-63`, `deploy/helm/autonomi/templates/pdb.yaml`

### CI-05: Staging vs Production Environment Configuration

```
GIVEN two Helm value files: values.yaml (staging) and values-production.yaml
WHEN staging is deployed with default values
THEN controlplane.replicas=1, worker.replicas=1, autoscaling.enabled=false
AND logLevel=INFO, maxWorkers=4, networkPolicy.enabled=false
WHEN production is deployed with values-production.yaml
THEN controlplane.replicas=2, worker.replicas=3, autoscaling.enabled=true
AND logLevel=WARNING, maxWorkers=8, networkPolicy.enabled=true
AND Prometheus scrape annotations are present on pods
```
Source: `deploy/helm/autonomi/values.yaml`, `deploy/helm/autonomi/values-production.yaml`

### CI-06: Environment Variable Management via ConfigMap

```
GIVEN the Helm chart creates a ConfigMap with config values
WHEN the ConfigMap is updated (e.g., LOKI_LOG_LEVEL changed from INFO to DEBUG)
THEN the deployment template detects the change via:
  checksum/config annotation (sha256 of configmap.yaml)
AND a rolling restart is triggered automatically
AND no manual pod deletion is needed
```
Source: `deploy/helm/autonomi/templates/deployment-controlplane.yaml:16`
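
The checksum-annotation technique in CI-06 works because changing the annotation changes the pod template, which is what makes Kubernetes roll the pods. The underlying idea, sketched:

```python
import hashlib

def config_checksum(rendered_configmap: str) -> str:
    """The checksum/config pod annotation: sha256 hex digest of the rendered ConfigMap."""
    return hashlib.sha256(rendered_configmap.encode()).hexdigest()

# Any config change produces a different annotation value, so the pod
# template differs and a rolling restart happens with no manual pod deletion.
before = config_checksum("LOKI_LOG_LEVEL: INFO\n")
after = config_checksum("LOKI_LOG_LEVEL: DEBUG\n")
print(before != after)
```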

### CI-07: Secret Management with External Secrets

```
GIVEN the team uses an external secret manager (Vault, AWS Secrets Manager)
AND an ExternalSecret CRD syncs secrets to K8s Secret "team-api-keys"
WHEN the Helm chart is installed with:
  secrets.existingSecret=team-api-keys
THEN the chart does NOT create its own Secret
AND both controlplane and worker reference "team-api-keys" via envFrom
AND API keys are never stored in Helm values or version control
```
Source: `deploy/helm/autonomi/templates/secret.yaml:1` (conditional on existingSecret)

### CI-08: Build Caching in CI Pipeline

```
GIVEN the test.yml workflow has a dashboard-build job
AND it uses actions/setup-node with cache: npm
WHEN the job runs
THEN npm dependencies are cached via cache-dependency-path: dashboard-ui/package-lock.json
AND subsequent runs skip the "npm ci" download phase
AND build verification checks output size > 100KB
```
Source: `.github/workflows/test.yml:116-117`

### CI-09: Multi-Channel Release Pipeline

```
GIVEN the release.yml workflow detects a new version
WHEN all release jobs execute
THEN the following channels are updated in parallel:
  - npm: publish with NODE_AUTH_TOKEN
  - Docker: build, push, update description
  - VSCode: compile, package .vsix, publish to marketplace
  - Homebrew: compute SHA256 of tarball, update formula via API
  - Python SDK: sync version, build wheel, publish to PyPI
  - TypeScript SDK: sync version, build, publish to npm
  - Slack: post release notification with changelog
AND the Homebrew update waits for npm and Docker (dependency chain)
```
Source: `.github/workflows/release.yml:238-496`

### CI-10: Pipeline Status via WebSocket Updates

```
GIVEN the dashboard WebSocket endpoint is available at /ws
AND a CI pipeline is running a loki session
WHEN the WebSocket client connects
THEN real-time events are streamed including:
  - iteration_start / iteration_complete
  - task_queued / task_completed
  - quality_gate_passed / quality_gate_failed
  - build_progress (cost, ETA, phase data)
AND the client receives JSON-formatted events
```
Source: `dashboard/server.py` (WebSocket endpoint)
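
The CI-10 event stream can be illustrated with a small JSON envelope. The field names below (`type`, `ts`, `data`) are assumptions for illustration, not the dashboard's actual wire format:

```python
import json
import time

def make_event(event_type: str, **data) -> str:
    """Serialize one hypothetical dashboard event as a JSON string."""
    return json.dumps({"type": event_type, "ts": int(time.time()), "data": data})

# A build_progress event carrying the cost/ETA/phase data named in the scenario.
msg = make_event("build_progress", cost_usd=0.42, eta_seconds=180, phase="verify")
decoded = json.loads(msg)
print(decoded["type"], decoded["data"]["phase"])
```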

---

## 4. Security and Compliance Scenarios

### SC-01: API Token Generation and Rotation

```
GIVEN enterprise auth is enabled (LOKI_ENTERPRISE_AUTH=true)
AND an admin token exists
WHEN the admin calls POST /api/enterprise/tokens with:
  { "name": "ci-token", "role": "operator", "expires_days": 90 }
THEN a token "loki_<urlsafe_base64>" is generated
AND the token hash is stored with a random 16-byte salt
AND the raw token is returned only once (never stored in plaintext)
AND after 90 days, token validation returns None (expired)
WHEN the admin calls DELETE /api/enterprise/tokens/ci-token
THEN the token is permanently deleted from tokens.json
AND subsequent requests with the token return HTTP 401
```
Source: `dashboard/auth.py:173-256`
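
The SC-01 lifecycle (prefixed urlsafe token, salted hash, expiry check) can be sketched as follows. Function names are illustrative; `dashboard/auth.py:173-256` is the authoritative implementation:

```python
import hashlib
import secrets
from datetime import datetime, timedelta, timezone

def generate_token() -> str:
    return "loki_" + secrets.token_urlsafe(32)

def hash_token(raw: str, salt: bytes) -> str:
    return hashlib.sha256(salt + raw.encode()).hexdigest()

raw = generate_token()
salt = secrets.token_bytes(16)   # random 16-byte salt, stored alongside the hash
record = {
    "hash": hash_token(raw, salt),
    "salt": salt.hex(),
    "expires_at": datetime.now(timezone.utc) + timedelta(days=90),
}

def validate(candidate: str, record: dict, now: datetime):
    if now >= record["expires_at"]:
        return None                     # expired -> HTTP 401
    if hash_token(candidate, bytes.fromhex(record["salt"])) != record["hash"]:
        return None                     # wrong token -> HTTP 401
    return record

valid = validate(raw, record, datetime.now(timezone.utc))
expired = validate(raw, record, datetime.now(timezone.utc) + timedelta(days=91))
print(valid is not None, expired)
```

Only the salted hash is persisted, so the raw token can be shown exactly once at creation time.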

### SC-02: Session Timeout via Token Expiration

```
GIVEN a token was created with expires_days=1
AND 25 hours have elapsed since creation
WHEN a request is made with the expired token
THEN validate_token() checks expires_at against current UTC time
AND returns None (token expired)
AND the API responds with HTTP 401: "Invalid, expired, or revoked token"
```
Source: `dashboard/auth.py:376-379`

### SC-03: Audit Log for All AI Actions

```
GIVEN LOKI_AUDIT_ENABLED=true in the deployment
AND audit logs are persisted to /data/audit/audit.log (via PVC)
WHEN any control action is performed (start, pause, stop, force-review)
THEN the action is logged with timestamp, actor (token name), and action details
AND auditors can query via GET /api/enterprise/audit (requires "audit" scope)
AND GET /api/enterprise/audit/summary provides aggregated statistics
```
Source: `dashboard/server.py:1816-1848`, `deploy/helm/autonomi/values.yaml:152-154`

### SC-04: Data Retention via Persistent Volume Claims

```
GIVEN the Helm chart creates PVCs for checkpoints and audit logs
AND production values set: checkpoints=50Gi, auditLogs=100Gi
WHEN the deployment runs for 6 months
THEN checkpoint data is stored on the checkpoints PVC (/data/checkpoints)
AND audit logs are stored on the audit-logs PVC (/data/audit)
AND PVCs persist across pod restarts and upgrades
AND cleanup policies are managed externally (not by the chart)
```
Source: `deploy/helm/autonomi/templates/pvc-checkpoints.yaml`, `deploy/helm/autonomi/templates/pvc-audit.yaml`

### SC-05: Sensitive Data Detection in Generated Code

```
GIVEN Loki Mode generates code that includes a hardcoded API key
WHEN the quality gate system runs static analysis
THEN the sensitive data detector flags the hardcoded secret
AND the quality gate returns severity "critical"
AND the build is blocked (critical issues block by default)
AND the next RARV iteration is instructed to extract the key to environment variables
```
Source: `skills/quality-gates.md`
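
A naive version of the SC-05 detector can be a couple of regexes. The patterns below are illustrative only; the real gate rules in `skills/quality-gates.md` are broader:

```python
import re

# Two toy patterns: an OpenAI-style key prefix, and an api_key = "..." assignment.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),
    re.compile(r"""(?i)api[_-]?key\s*=\s*['"][^'"]+['"]"""),
]

def scan(source: str) -> list[str]:
    """Return one critical finding per line that matches a secret pattern."""
    findings = []
    for i, line in enumerate(source.splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append(f"line {i}: critical: hardcoded secret")
    return findings

code = 'API_KEY = "sk-abcdefghijklmnopqrstuvwx"\nprint("hello")\n'
findings = scan(code)
print(findings)
```

A single critical finding is enough to block the build, since critical issues block by default.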
|
|
521
|
+
### SC-06: Network Isolation via NetworkPolicy
|
|
522
|
+
|
|
523
|
+
```
|
|
524
|
+
GIVEN the Helm chart is deployed with security.networkPolicy.enabled=true
|
|
525
|
+
WHEN the NetworkPolicies are applied
|
|
526
|
+
THEN the controlplane accepts ingress only on port 57374 (from any source in namespace)
|
|
527
|
+
AND the controlplane allows all egress (needs to call LLM APIs)
|
|
528
|
+
AND workers accept ingress ONLY from controlplane pods
|
|
529
|
+
AND workers allow all egress (needs to call LLM APIs)
|
|
530
|
+
AND inter-worker communication is blocked by default
|
|
531
|
+
```
|
|
532
|
+
Source: `deploy/helm/autonomi/templates/networkpolicy.yaml`
|
|
533
|
+
|
|
534
|
+
### SC-07: Rate Limiting Per Endpoint Category
|
|
535
|
+
|
|
536
|
+
```
|
|
537
|
+
GIVEN the dashboard has two rate limiters:
|
|
538
|
+
- _control_limiter: max_calls=10, window_seconds=60
|
|
539
|
+
- _read_limiter: max_calls=60, window_seconds=60
|
|
540
|
+
WHEN a client sends 11 control requests (POST /api/control/pause) within 60 seconds
|
|
541
|
+
THEN the 11th request is rejected with HTTP 429 (Too Many Requests)
|
|
542
|
+
WHEN a client sends 61 read requests (GET /api/status) within 60 seconds
|
|
543
|
+
THEN the 61st request is rejected with HTTP 429
|
|
544
|
+
AND the rate limit resets after the 60-second window
|
|
545
|
+
```
|
|
546
|
+
Source: `dashboard/server.py:101-141`
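
The behavior above can be sketched with a minimal sliding-window limiter. This is an illustrative reconstruction, not the code in `dashboard/server.py:101-141`; the class and method names are assumptions:

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Admit at most max_calls within any trailing window_seconds interval."""

    def __init__(self, max_calls: int, window_seconds: float):
        self.max_calls = max_calls
        self.window_seconds = window_seconds
        self._calls = deque()  # timestamps of admitted calls

    def allow(self, now=None) -> bool:
        """Return True if admitted; False means the caller should send HTTP 429."""
        now = time.monotonic() if now is None else now
        # Evict timestamps that have aged out of the window.
        while self._calls and now - self._calls[0] >= self.window_seconds:
            self._calls.popleft()
        if len(self._calls) >= self.max_calls:
            return False
        self._calls.append(now)
        return True

control_limiter = SlidingWindowLimiter(max_calls=10, window_seconds=60)
results = [control_limiter.allow(now=float(i)) for i in range(11)]
assert results == [True] * 10 + [False]  # the 11th call inside the window is rejected
assert control_limiter.allow(now=61.0)   # the window has rolled; calls are admitted again
```

A per-category setup would simply hold two instances with the parameters quoted in the scenario.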

### SC-08: Input Sanitization for PRD Content

```
GIVEN the loki-enterprise.yml workflow receives PRD content from user input
WHEN the workflow processes the input
THEN PRD content is written via environment variable (not shell interpolation):
  printf '%s' "$PRD_CONTENT" > "$PRD_FILE"
AND this prevents shell injection attacks in the PRD content
AND the temp file is cleaned up after execution
```
Source: `.github/workflows/loki-enterprise.yml:133-135`
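
The same pattern can be exercised outside CI. The sketch below (hypothetical; not the workflow's actual code) drives the quoted `printf` command from Python, passing hostile content through the environment so the shell never parses it:

```python
import os
import subprocess
import tempfile

# Untrusted input containing shell metacharacters that must NOT be executed.
prd_content = 'Title: demo"; rm -rf /tmp/should-not-run; echo "'

with tempfile.TemporaryDirectory() as tmp:
    prd_file = os.path.join(tmp, "prd.md")
    # The command string is fixed; the untrusted value travels via the
    # environment and is expanded only inside double quotes.
    subprocess.run(
        ["bash", "-c", 'printf \'%s\' "$PRD_CONTENT" > "$PRD_FILE"'],
        env={**os.environ, "PRD_CONTENT": prd_content, "PRD_FILE": prd_file},
        check=True,
    )
    with open(prd_file) as fh:
        written = fh.read()

assert written == prd_content  # hostile input arrives verbatim, unexecuted
```

Interpolating `prd_content` directly into the command string would instead hand the `; rm -rf ...` fragment to the shell.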

### SC-09: CORS Configuration for Dashboard

```
GIVEN the dashboard defaults CORS to localhost only:
  "http://localhost:57374,http://127.0.0.1:57374"
WHEN LOKI_DASHBOARD_CORS is set to "https://internal.company.com"
THEN only requests from https://internal.company.com are allowed
AND requests from other origins receive no CORS headers
WHEN LOKI_DASHBOARD_CORS is set to "*"
THEN a warning is logged: "all origins are allowed"
AND all origins are accepted (insecure, not recommended for production)
```
Source: `dashboard/server.py:445-461`
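
A minimal sketch of this origin-resolution logic (the function name and exact parsing are assumptions; the actual code is at `dashboard/server.py:445-461`):

```python
import warnings

# Default mirrors the scenario: localhost-only origins on the dashboard port.
DEFAULT_CORS = "http://localhost:57374,http://127.0.0.1:57374"

def resolve_allowed_origins(env: dict) -> list:
    """Parse LOKI_DASHBOARD_CORS into an allow-list, warning on '*'."""
    raw = env.get("LOKI_DASHBOARD_CORS", DEFAULT_CORS)
    if raw.strip() == "*":
        warnings.warn("LOKI_DASHBOARD_CORS='*': all origins are allowed")
        return ["*"]
    return [origin.strip() for origin in raw.split(",") if origin.strip()]

assert resolve_allowed_origins({}) == [
    "http://localhost:57374", "http://127.0.0.1:57374"]
assert resolve_allowed_origins(
    {"LOKI_DASHBOARD_CORS": "https://internal.company.com"}
) == ["https://internal.company.com"]
```

A request whose Origin header is absent from the returned list would simply receive no CORS headers in the response.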

### SC-10: Container Security Hardening

```
GIVEN the Dockerfile.sandbox is used for production deployment
WHEN the container starts
THEN it runs as non-root user (UID 1000)
AND SUID/SGID binaries have been stripped
AND shell history is disabled (HISTFILE=/dev/null)
AND the health check verifies: loki CLI, workspace writable, node available
AND operators are advised to use:
  --security-opt=no-new-privileges:true
  --cap-drop=ALL --cap-add=CHOWN,SETUID,SETGID,DAC_OVERRIDE
  --cpus=2 --memory=4g --pids-limit=256
```
Source: `Dockerfile.sandbox:141-268`

---

## 5. Production Deployment Scenarios

### PD-01: Self-Hosted K8s Deployment via Helm

```
GIVEN a Kubernetes cluster with Helm 3 installed
WHEN the operator runs:
  helm install autonomi ./deploy/helm/autonomi \
    -f deploy/helm/autonomi/values-production.yaml \
    --set secrets.existingSecret=my-api-keys
THEN the following resources are created:
  - ServiceAccount (automountServiceAccountToken: false at SA level)
  - ConfigMap with runtime configuration
  - Controlplane Deployment (2 replicas, 500m-2 CPU, 1-2Gi memory)
  - Worker Deployment (3 replicas with HPA, 1-4 CPU, 2-4Gi memory)
  - HPA for workers (minReplicas: 3, maxReplicas: 10)
  - Services (ClusterIP for controlplane, headless for workers)
  - PVCs for checkpoints (50Gi) and audit logs (100Gi)
  - NetworkPolicy (restricts worker ingress to controlplane only)
  - RBAC Role+RoleBinding (pods, logs, configmaps, events)
```
Source: `deploy/helm/autonomi/` (full chart)

### PD-02: Helm Chart Value Override Chain

```
GIVEN three value files exist: values.yaml, values-production.yaml, values-ha.yaml
WHEN the operator deploys HA production:
  helm install autonomi ./deploy/helm/autonomi \
    -f values-production.yaml \
    -f values-ha.yaml
THEN values-ha.yaml overrides production settings:
  controlplane.replicas: 3 (was 2)
  worker.replicas: 5 (was 3)
  worker.autoscaling.maxReplicas: 20 (was 10)
  PDB enabled with minAvailable: 1
AND anti-affinity ensures controlplane pods spread across nodes
```
Source: `deploy/helm/autonomi/values-ha.yaml`
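
Helm merges later `-f` files over earlier ones key by key. A simplified sketch of that merge rule, using the replica values from this scenario (real Helm also handles lists, null-deletion, and `--set` precedence):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Later values win key-by-key; nested dicts are merged recursively."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Values mirroring values-production.yaml and values-ha.yaml from the scenario.
production = {"controlplane": {"replicas": 2},
              "worker": {"replicas": 3, "autoscaling": {"maxReplicas": 10}}}
ha = {"controlplane": {"replicas": 3},
      "worker": {"replicas": 5, "autoscaling": {"maxReplicas": 20}}}

effective = deep_merge(production, ha)
assert effective["controlplane"]["replicas"] == 3
assert effective["worker"]["autoscaling"]["maxReplicas"] == 20
```

Keys absent from the later file (none in this toy example) would fall through to the production values.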

### PD-03: Horizontal Scaling via HPA

```
GIVEN the worker HPA is enabled with:
  minReplicas: 3, maxReplicas: 10
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 80
WHEN worker CPU utilization exceeds 70%
THEN the HPA scales up worker replicas (up to 10)
AND new workers join the RARV execution pool
WHEN CPU utilization drops below 70%
THEN the HPA scales down (minimum 3 replicas maintained)
AND the scale-down is gradual (default stabilization window)
```
Source: `deploy/helm/autonomi/templates/hpa-worker.yaml`
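
The scaling decision follows the standard Kubernetes HPA formula, desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric), clamped to the replica bounds. A sketch using this chart's defaults (the bounds 3/10 come from the scenario above):

```python
import math

def hpa_desired_replicas(current_replicas: int, current_utilization: float,
                         target_utilization: float,
                         min_replicas: int = 3, max_replicas: int = 10) -> int:
    """Kubernetes HPA rule: ceil(current * metric / target), clamped to bounds."""
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, desired))

# 3 workers at 95% CPU against a 70% target scale out to 5.
assert hpa_desired_replicas(3, 95, 70) == 5
# A quiet pool never drops below minReplicas.
assert hpa_desired_replicas(5, 10, 70) == 3
# Sustained load saturates at maxReplicas.
assert hpa_desired_replicas(10, 95, 70) == 10
```

The real controller additionally applies a tolerance band and the stabilization window mentioned above before acting on the computed value.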

### PD-04: Health Check and Readiness Probes

```
GIVEN the controlplane deployment has readiness and liveness probes
WHEN the controlplane starts
THEN the readiness probe checks GET /health on port 57374 after a 10s initial delay
AND the readiness probe runs every 15s with a 5s timeout and a failure threshold of 3
AND the liveness probe checks GET /health on port 57374 after a 15s initial delay
AND the liveness probe runs every 20s with a 5s timeout and a failure threshold of 5
WHEN the worker starts
THEN the worker liveness probe checks: pgrep -f "loki" (process alive)
AND the worker readiness probe checks: test -f /tmp/loki-ready (ready file)
AND Helm test pods verify connectivity via curl to /health and /api/status
```
Source: `deploy/helm/autonomi/templates/deployment-controlplane.yaml:49-64`, `deploy/helm/autonomi/templates/deployment-worker.yaml:49-68`

### PD-05: Log Aggregation via OTEL Collector

```
GIVEN the docker-compose deployment includes the observability profile
WHEN started with "docker compose --profile observability up -d"
THEN an OTEL collector (v0.96.0) receives OTLP traces on ports 4317 (gRPC) and 4318 (HTTP)
AND traces are batched (timeout: 5s, batch_size: 1024)
AND traces are forwarded to Jaeger (port 4317, insecure transport)
AND Jaeger UI is available at http://localhost:16686
AND traces include RARV iteration spans, API request spans, and LLM call spans
```
Source: `deploy/docker-compose/docker-compose.yml:57-79`, `deploy/docker-compose/otel-config.yaml`

### PD-06: Prometheus Metrics Collection

```
GIVEN the production values include Prometheus scrape annotations
AND the Helm chart supports ServiceMonitor (observability.serviceMonitor)
WHEN Prometheus is configured to scrape the namespace
THEN pods are discovered via annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "57374"
  prometheus.io/path: "/metrics"
AND metrics include request latency, active sessions, task completion counts
AND ServiceMonitor can be enabled for prometheus-operator setups
```
Source: `deploy/helm/autonomi/values-production.yaml:16-19`, `deploy/helm/autonomi/values.yaml:186-191`

### PD-07: Backup and Disaster Recovery via PVCs

```
GIVEN the deployment uses PVCs for checkpoints and audit logs
AND the HA values set checkpoints to 100Gi with ReadWriteMany access
WHEN a disaster recovery scenario requires data restoration
THEN the checkpoints PVC (/data/checkpoints) contains all session state snapshots
AND the audit-logs PVC (/data/audit) contains the complete audit trail
AND PVCs can be backed up using standard K8s backup tools (Velero, etc.)
AND restoring PVCs and redeploying recovers the full system state
```
Source: `deploy/helm/autonomi/values-ha.yaml:44-52`

### PD-08: Zero-Downtime Upgrade Path

```
GIVEN a production deployment with PDB enabled (minAvailable: 1)
AND controlplane has 3 replicas with pod anti-affinity
WHEN "helm upgrade autonomi" is executed with a new image tag
THEN the rolling update proceeds one pod at a time
AND the PDB prevents more than (replicas - 1) pods from being unavailable
AND the configmap/secret checksum annotations trigger rolling restarts
AND if the new version fails probes, the rollout pauses automatically
AND "helm rollback autonomi 1" restores the previous version
```
Source: `deploy/helm/autonomi/templates/pdb.yaml`, `deploy/helm/autonomi/values-ha.yaml:7-19`

---

## Appendix: Bug Cross-References

The following bugs were discovered during scenario authoring and are documented in full in `docs/bug-fixes/agent-13-enterprise-fixes.md`:

| ID | Severity | Summary |
|----|----------|---------|
| BUG-E01 | Medium | Helm Chart appVersion "5.52.0" vs actual version 6.71.1 |
| BUG-E02 | Low | automountServiceAccountToken conflict between SA and Deployment |
| BUG-E03 | Low | Agent card reports "sso": false but OIDC/SSO is implemented |
| BUG-E04 | Low | Worker deployment missing audit-logs volume mount |
| BUG-E05 | Medium | Helm test test-health.yaml expects python3 in curl image |