methodology-m 0.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (38)
  1. package/bin/m.mjs +76 -0
  2. package/dist-m/CHANGELOG.md +45 -0
  3. package/dist-m/capabilities/bootstrap-root-repo/SKILL.md +138 -0
  4. package/dist-m/capabilities/decompose-story/SKILL.md +299 -0
  5. package/dist-m/capabilities/generate-acceptance-tests/SKILL.md +305 -0
  6. package/dist-m/capabilities/generate-pats/SKILL.md +131 -0
  7. package/dist-m/capabilities/scaffold-repo/SKILL.md +641 -0
  8. package/dist-m/capabilities/setup-workspace/SKILL.md +70 -0
  9. package/dist-m/capabilities/tag-release/SKILL.md +121 -0
  10. package/dist-m/capabilities/wire-orchestration/SKILL.md +351 -0
  11. package/dist-m/m.md +126 -0
  12. package/dist-m/providers/provider-interface.md +191 -0
  13. package/dist-m/providers/scm/gitlab.md +377 -0
  14. package/dist-m/schemas/pat.schema.json +161 -0
  15. package/dist-m/schemas/project.schema.json +177 -0
  16. package/package.json +27 -0
  17. package/src/commands/changelog.mjs +58 -0
  18. package/src/commands/clone.mjs +199 -0
  19. package/src/commands/diff.mjs +29 -0
  20. package/src/commands/init.mjs +51 -0
  21. package/src/commands/update.mjs +41 -0
  22. package/src/commands/version.mjs +43 -0
  23. package/src/lib/copy.mjs +20 -0
  24. package/src/lib/detect-agent.mjs +25 -0
  25. package/src/lib/diff-trees.mjs +95 -0
  26. package/src/lib/topology.mjs +62 -0
  27. package/src/lib/version-file.mjs +25 -0
  28. package/src/lib/workspace.mjs +40 -0
  29. package/src/lib/wrappers/claude.mjs +54 -0
  30. package/templates/claude/skills/bootstrap-root-repo/SKILL.md +13 -0
  31. package/templates/claude/skills/decompose-story/SKILL.md +13 -0
  32. package/templates/claude/skills/generate-acceptance-tests/SKILL.md +13 -0
  33. package/templates/claude/skills/generate-pats/SKILL.md +13 -0
  34. package/templates/claude/skills/scaffold-repo/SKILL.md +13 -0
  35. package/templates/claude/skills/setup-workspace/SKILL.md +13 -0
  36. package/templates/claude/skills/tag-release/SKILL.md +13 -0
  37. package/templates/claude/skills/wire-orchestration/SKILL.md +13 -0
  38. package/templates/claude/steering/m-steering.md +3 -0
@@ -0,0 +1,70 @@
# Setup Workspace

**Capability:** Create a new M-type project workspace

## What it does

Creates a group (or subgroup) on the SCM platform that serves as the container for an M-type project. The group is where all repos (root, managed) live.

## Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `project-name` | string | Yes | Name of the project (becomes group name) |
| `parent-path` | string | No | Parent group path (e.g., `methodology-m`). Creates subgroup if provided; creates top-level group if omitted. |
| `visibility` | string | No | `public` or `private` (default: `public`) |
| `description` | string | No | Group description |

## Execution

### Top-level group

```
scm.create_group(
  name: <project-name>,
  path: <project-name>,
  visibility: <visibility>,
  description: <description>
)
```

### Subgroup under parent

```
parent_id = scm.resolve_group_id(<parent-path>)

scm.create_group(
  name: <project-name>,
  path: <project-name>,
  parent_id: <parent_id>,
  visibility: <visibility>,
  description: <description>
)
```

## Examples

**Create top-level group:**
```
setup-workspace
  project-name: "todo-m-workshop"
  visibility: "public"
  description: "Reference implementation for Methodology M"
```

**Create subgroup under parent:**
```
setup-workspace
  project-name: "todo-m-workshop"
  parent-path: "methodology-m"
  visibility: "public"
  description: "Reference implementation for Methodology M"
```

## Outcome

Group created at `<scm-platform>/<namespace>/<project-name>`. Ready for Story Zero.

## Notes

- Some SCM platforms restrict top-level group creation via API — check your provider docs.
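
As a concrete illustration, here is a minimal sketch of what `scm.create_group` resolves to against the GitLab provider (the reference implementation in `providers/scm/gitlab.md`): GitLab's endpoint is `POST /api/v4/groups`, and `parent_id` turns the group into a subgroup. The helper only builds the request; its name and argument handling are illustrative, not part of the provider contract.

```python
# Sketch: the GitLab API request that scm.create_group maps to.
# Actually sending the request (requests, curl, ...) is left out.

def build_create_group_request(project_name, token,
                               parent_id=None,
                               visibility="public",
                               description="",
                               base_url="https://gitlab.com"):
    payload = {
        "name": project_name,
        "path": project_name,      # group path mirrors the project name
        "visibility": visibility,
        "description": description,
    }
    if parent_id is not None:
        payload["parent_id"] = parent_id   # makes it a subgroup
    url = f"{base_url}/api/v4/groups"
    headers = {"PRIVATE-TOKEN": token}     # GitLab personal/group token header
    return url, headers, payload
```

Omitting `parent_id` yields the top-level-group variant; passing it yields the subgroup variant, mirroring the two execution paths above.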
@@ -0,0 +1,121 @@
# tag-release

**Capability:** Tag a managed repo at a version and update the root repo topology

## What it does

Tags a managed repo with a semver version (e.g. `v0.1.0`), then updates the root repo's `project.yaml` to pin that component to the new version. After this step, the topology manifest reflects the released state and the component is reproducibly referenceable.

## Parameters

| Parameter | Type | Required | Description |
|----------------|--------|----------|----------------------------------------------------|
| sub-task-id | string | Yes | Sub-task identifier (e.g. TODOM-000c) |
| version | string | Yes | Semver version to tag (e.g. v0.1.0) |
| repo-path | path | No | Local path to the managed repo (inferred from CWD if omitted) |
| root-repo-path | path | No | Local path to the root repo (inferred from sibling directory if omitted) |

## Prerequisites

- Managed repo has a passing implementation and acceptance tests
- All tests pass (`npm test` exits 0)
- The managed repo's `main` branch is up to date
- Root repo exists locally with `project.yaml`

## Execution

### Step 1 — Verify readiness

Before tagging, confirm:
1. Working directory is clean (no uncommitted changes)
2. On `main` branch (or the MR has been merged to main)
3. `npm test` passes

If any check fails, stop and report. Do not tag broken code.

### Step 2 — Determine component name

Read `jira/<sub-task-id>.md` to extract the component name. This is used to find the correct entry in `project.yaml`.

Alternatively, read the `package.json` `name` field — the repo name follows the pattern `<project>-<component>` (e.g. `todo-m-api-read` → component `api-read`).
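
The fallback derivation amounts to stripping the project prefix; a sketch (the `todo-m` prefix in the usage note is taken from the example above, not a fixed convention):

```python
def derive_component(repo_name, project_prefix):
    """Strip the '<project>-' prefix from a repo name to get the component name."""
    prefix = project_prefix + "-"
    if not repo_name.startswith(prefix):
        raise ValueError(f"{repo_name!r} does not start with {prefix!r}")
    return repo_name[len(prefix):]
```

For example, `derive_component("todo-m-api-read", "todo-m")` returns `api-read`.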

### Step 3 — Create annotated tag

Create an annotated git tag on the current HEAD:

```
git tag -a <version> -m "Release <version> — <sub-task-id>"
```

Use annotated tags (not lightweight) because:
- They carry metadata (tagger, date, message)
- They're the convention for release tags
- `git describe` works with annotated tags

### Step 4 — Push the tag

```
git push origin <version>
```

This pushes only the tag, not any branch changes.

### Step 5 — Update project.yaml in root repo

In the root repo's `project.yaml`, find the component entry and update its `tag` field from `~` (null) to the new version:

```
# Before
- name: api-read
  type: referenced
  location: methodology-m/todo-m-workshop/todo-m-api-read
  tag: ~

# After
- name: api-read
  type: referenced
  location: methodology-m/todo-m-workshop/todo-m-api-read
  tag: v0.1.0
```

**Important:** Only update the specific component's tag. Do not touch other components' entries.
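
In code, the targeted update might look like this sketch, operating on the component list parsed from `project.yaml` (e.g. with a YAML library); the list shape follows the Before/After example above, and the helper name is ours:

```python
def pin_component_tag(components, name, version):
    """Set tag=<version> on exactly one component entry; leave all others untouched."""
    matches = [c for c in components if c.get("name") == name]
    if len(matches) != 1:
        raise ValueError(
            f"expected exactly one component named {name!r}, found {len(matches)}"
        )
    matches[0]["tag"] = version   # only the matched entry is modified
    return components
```

Raising on zero or multiple matches guards against silently editing the wrong entry.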

### Step 6 — Report

Output:
- Tag created: `<version>` on `<repo-name>`
- Tag pushed to origin
- `project.yaml` updated: `<component>` now pinned to `<version>`
- Reminder: commit and push the root repo `project.yaml` change (this capability updates the file but does not auto-commit the root repo — the presenter controls when that happens)

## Notes

- This capability works locally. It creates and pushes a git tag on the managed repo, and edits `project.yaml` in the local root repo.
- The root repo change (updated `project.yaml`) is NOT auto-committed. In the workshop flow, the presenter commits this as part of a topology update or as a standalone commit. In a real M workflow, this would be part of a topology MR.
- Version `v0.1.0` is the convention for Story Zero's first release. It signals "the component exists and meets its contract" — not production-ready, but validated.
- The tag is on the managed repo's `main` branch HEAD. If the implementation was done on a feature branch, it must be merged to main first.
- For the workshop fast-forward flow, this capability is called once per managed repo after implementation and tests are in place.
- This capability does NOT update the readiness tracker (`stories/<story-id>.yaml`). That's a separate concern tracked at the story level, not the component level.
- If the tag already exists (re-run scenario), report it and skip. Do not force-push tags.
@@ -0,0 +1,351 @@
# wire-orchestration

**Capability:** Connect managed repos to the root repo's orchestration layer

## What it does

After all managed repos are scaffolded, this capability wires them to the root repo. It installs webhooks, pipeline triggers, CI variables, and the root repo's CI pipeline. After this step, the full Methodology M orchestration is operational — raising an MR on a managed repo triggers shadow integration on the root repo automatically.

## Parameters

| Parameter | Type | Required | Description |
|----------------|--------|----------|------------------------------------------------------|
| project-yaml | path | Yes | Path to project.yaml (source of truth for topology) |
| root-project-id| string | No | SCM project ID or path for root repo (derived from project.yaml if omitted) |
| access-tokens | map | Yes | Map of component name → access token value (from scaffold-repo output) |

## Prerequisites

- Root repo exists (created by `bootstrap-root-repo`)
- All managed repos exist and are scaffolded (created by `scaffold-repo`)
- Project access tokens for each managed repo are available
- Each managed repo has a `Dockerfile` at its root
- Each API repo exposes a `GET /health` endpoint (convention for compose health checks)
- Root repo has a `docker-compose.yml` defining all services with build contexts pointing to sibling directories

## Execution

### Step 1 — Read topology

Read project.yaml. For each `type: referenced` component, extract:
- Component name
- Project path (from `location:` field)

```
for each component:
  project_id = scm.resolve_project_id(<location>)
```

### Step 2 — Create pipeline trigger on root repo

```
trigger_token = scm.create_pipeline_trigger(
  repo: <root-repo>,
  description: "m-shadow-integration-trigger"
)
```

This token is what managed repo webhooks use to kick off shadow integration pipelines on the root repo.

Store the trigger token value for webhook configuration.

### Step 3 — Install webhooks on managed repos

For each managed repo:

```
scm.create_webhook(
  repo: <managed-repo>,
  url: <trigger-url-with-token>,
  events: { merge_request: true, pipeline: true, push: false },
  ssl_verify: true
)
```

**Important:** Push events MUST be explicitly disabled to avoid triggering the root repo pipeline on every push to managed repos.

Pipeline events (`pipeline: true`) are required so that when a managed repo's own pipeline fails, the root repo is notified and can propagate the failure to all story MRs. Without this, sibling MRs retain stale green status until the next MR event.
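
Against the GitLab provider, the webhook attributes might be assembled like this sketch (GitLab's project-hooks API uses `push_events`, `merge_requests_events`, `pipeline_events`, and `enable_ssl_verification`; the helper name is illustrative):

```python
def build_webhook_attrs(trigger_url):
    """Webhook attributes for one managed repo, per the event flags above."""
    return {
        "url": trigger_url,
        "merge_requests_events": True,   # MR opened/updated -> shadow integration
        "pipeline_events": True,         # propagate managed-repo pipeline failures
        "push_events": False,            # MUST be off: no root run per push
        "enable_ssl_verification": True,
    }
```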

### Step 4 — Store access tokens as CI secrets

For each managed repo:

```
scm.store_ci_secret(
  repo: <root-repo>,
  key: "M_TOKEN_<COMPONENT_NAME_UPPER>",  # e.g. M_TOKEN_API_READ
  value: <access-token-from-scaffold-repo>,
  protected: true,
  masked: true
)
```

These tokens are used by the merge transaction pipeline to merge MRs on managed repos via the SCM API.
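
The key-naming convention can be pinned down in one line (a sketch; the helper name is ours):

```python
def ci_secret_key(component):
    """Map a component name to its CI variable key, e.g. api-read -> M_TOKEN_API_READ."""
    return "M_TOKEN_" + component.upper().replace("-", "_")
```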

### Step 5 — Push root repo CI pipeline

Push `.gitlab-ci.yml` to the root repo. The pipeline has three concerns:

1. **Shell lifecycle** — install/build/test for the embedded shell component
2. **Shadow integration** — triggered by managed repo webhooks
3. **Post-merge validation** — compose + integration-test on MR events and main

#### Pipeline structure

The root repo CI uses Docker-in-Docker (DinD) for the compose stages. This is the **reference implementation** using GitLab CI + Docker Compose. Other compose strategies (k8s, serverless, etc.) would replace this section while preserving the same lifecycle contract.

**Stages:** install → build → test → compose → integration-test → report-status → merge-transaction

**Shell lifecycle jobs** (run on MR events and main pushes, skip triggers):
- `install` — `npm ci` for root and shell package
- `build` — build the shell
- `test` — shell unit tests

**Shadow integration jobs** (run only on trigger events from managed repo webhooks):
- `detect-trigger` — determines event type: `aot_integration`, `cascade_merge`, or `pipeline_failure`
- `shadow:invalidate-status` — immediately pushes `pending` to ALL story MRs (including root) to block merge during the AOT window
- `shadow:integration` — clone siblings, Docker build, start, health check, run tests. Skips compose for `cascade_merge` and `pipeline_failure` (exits with failure for `pipeline_failure` to trigger report-failure)
- `shadow:report-status` — push `success` commit status to ALL story MRs (including root)
- `shadow:report-failure` — push `failed` commit status to ALL story MRs (including root)

**Validation jobs** (run on MR events and main pushes, skip triggers):
- `validate:integration` — compose, health check, run story-level tests. On MR pipelines, the `after_script` extracts the story ID from the branch name and fans out the result (success/failure) to all story MRs via `report-shadow-status.sh`. This ensures that a root MR pipeline failure makes all sibling MRs go red.

**Merge transaction** (manual trigger):
- `merge-transaction` — merge managed MRs atomically, update topology

#### Compose job contract (methodology)

The compose stage must:
1. Clone all sibling repos at the appropriate ref
2. Build the full system (all components)
3. Start all services
4. Verify each service is healthy (health check with timeout)
5. Tear down on failure or after completion

This contract is methodology — it applies regardless of compose strategy.

#### Reference implementation: Docker Compose on GitLab DinD

The reference implementation uses Docker-in-Docker on GitLab CI shared runners. Key configuration:

```
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_TLS_CERTDIR: "/certs"
  GROUP: <gitlab-group-path>
before_script:
  - apk add --no-cache git curl
```

**Clone sibling repos** using `CI_JOB_TOKEN` for authentication:
```
git clone --depth 1 --branch main \
  "https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com/${GROUP}/<repo>.git" \
  ../<repo>
```

**Build and start** via Docker Compose:
```
docker compose build
docker compose up -d
```

**Health check loop** with configurable timeout:
```
DOCKER_GATEWAY="${DOCKER_GATEWAY:-docker}"
TIMEOUT=60
ENDPOINTS="http://${DOCKER_GATEWAY}:<port>/health ..."
```

**Teardown** in `after_script` (runs even on failure):
```
docker compose down 2>/dev/null || true
```

#### DinD networking gotcha

In GitLab's DinD setup with `DOCKER_TLS_CERTDIR`, the Docker daemon runs in the `docker:dind` service container. Published ports bind on that daemon's network interface, reachable from the CI script via the hostname `docker` — NOT `localhost`.

Health check URLs must use `docker:<port>`, not `localhost:<port>`. The `DOCKER_GATEWAY` variable defaults to `docker` but can be overridden for local testing (set to `localhost`).
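
The loop behind the `TIMEOUT`/`ENDPOINTS` variables is left implicit; one way to flesh it out, sketched in Python with the probe injected so the retry logic stays testable (the function name and signature are ours, not part of the reference pipeline):

```python
import time

def wait_until_healthy(probe, endpoints, timeout=60, interval=2,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll every endpoint until all are healthy or `timeout` seconds pass.

    `probe(url)` returns True for a healthy endpoint (e.g. an HTTP GET that
    comes back 2xx). Injecting probe/clock/sleep keeps the loop testable.
    """
    deadline = clock() + timeout
    pending = list(endpoints)
    while pending:
        pending = [url for url in pending if not probe(url)]
        if not pending:
            break
        if clock() >= deadline:
            raise TimeoutError(f"unhealthy after {timeout}s: {pending}")
        sleep(interval)
    return True
```

In CI the endpoints would be built from `DOCKER_GATEWAY` (`http://docker:<port>/health`), matching the networking note above.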

#### Integration test gate

Shadow integration runs story-level integration tests after health checks pass. Four failure modes ensure no story can slip through:

- **Structural failure:** the integration test framework detects that a story is in-flight (open MRs with the story ID in branch names) but no integration test script exists at `scripts/integration-tests/<story-id>.sh`. This fails immediately with a clear message telling the team to add tests.

- **Completeness failure:** the integrity check verifies that ALL repos in the topology have open MRs for the story — managed repos AND the root repo. The root repo delivers story-level integration tests; without its MR the gate is meaningless. The repo list must be derived from `project.yaml` components, not hardcoded.

- **Logical failure:** the integration test script exists but fails because not all components have implemented their part yet. The failure output shows exactly which checks failed, so devs know who to talk to.

- **Mergeability failure:** a constituent MR exists but is not mergeable (e.g. failing pipeline, blocked by discussions, needs approval). An unmergeable MR is functionally equivalent to a missing MR — the story cannot proceed. The integrity check must verify `detailed_merge_status` for each MR, not just its existence. Acceptable statuses are `mergeable`, `checking` (pipeline in progress), `approvable`, and `approved`. Anything else (e.g. `ci_must_pass`, `blocked_status`, `not_approved`) means the MR is not ready and the story is incomplete.
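
The mergeability rule reduces to a small predicate over `detailed_merge_status` values (a sketch; the status strings are those listed above):

```python
# Statuses that count as "ready" for the story integrity gate.
ACCEPTABLE_MERGE_STATUSES = {"mergeable", "checking", "approvable", "approved"}

def mr_ready(detailed_merge_status):
    """True if an MR with this detailed_merge_status passes the integrity gate."""
    return detailed_merge_status in ACCEPTABLE_MERGE_STATUSES
```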

The root repo always has a story branch for every story — even when the shell code doesn't change — because the integration tests are the root repo's contribution. The `decompose-story` capability enforces this by always generating a root repo sub-task.

The shadow pipeline bootstraps from the root repo's story branch (resolved via `resolve-story-branches.sh`) to pick up story-specific integration tests, compose config, and Cypress specs.

#### Health endpoint convention

All API components must expose `GET /health` returning a 2xx response. This is the standard health check endpoint used by compose jobs.

Frontend components (nginx-based) are checked via their served content:
- Shell: `http://<host>:3000` (serves index.html)
- MFE: `http://<host>:3001/remoteEntry.js` (Module Federation entry)

#### MR pipeline rules

The `validate:compose` and `validate:integration-test` jobs must include `merge_request_event` in their rules, not just the `main` branch. Without this, root repo MRs have no pipeline and are blocked by the `only_allow_merge_if_pipeline_succeeds` setting:

```
rules:
  - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  - if: $CI_COMMIT_BRANCH == "main" && $CI_PIPELINE_SOURCE != "trigger"
```

#### Shadow status reporting

The shadow pipeline reports results to ALL repos with open story MRs — managed repos AND the root repo. This is fan-out, not point-to-point. When shadow integration passes or fails, every constituent MR gets the same status simultaneously. This requires a group-level token with API scope (`M_GROUP_TOKEN`) as a CI secret on the root repo.

Two jobs handle this:
- `shadow:report-status` (on success) — pushes `state=success` to all story MRs
- `shadow:report-failure` (on failure) — pushes `state=failed` to all story MRs

For each repo with an open story MR, both call:
```
scm.post_commit_status(
  repo: <story-repo>,
  sha: <commit-sha>,
  state: success | failed,
  name: "shadow-integration",
  description: <human-readable message>,
  target_url: <link to root pipeline>
)
```
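
The fan-out can be sketched as a loop yielding one commit-status call per constituent repo (the repo-to-SHA map and helper name are illustrative; the payload shape follows the call above):

```python
def fan_out_statuses(story_mrs, state, pipeline_url):
    """One commit-status payload per repo with an open story MR (root included).

    `story_mrs` maps repo path -> head commit SHA of that repo's story MR.
    """
    return [
        {
            "repo": repo,
            "sha": sha,
            "state": state,                  # "success" or "failed"
            "name": "shadow-integration",
            "target_url": pipeline_url,      # link back to the root pipeline
        }
        for repo, sha in story_mrs.items()
    ]
```

Every constituent MR, including the root repo's, receives the same state in the same pass, which is the point of the fan-out.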

### Step 6 — Protect root repo main branch

```
scm.protect_branch(
  repo: <root-repo>,
  branch: "main",
  push: none,
  merge: maintainer,
  force_push: false
)
```

### Step 7 — Report

Output:
- Pipeline trigger token (masked, for reference)
- Webhook status per managed repo (created / already exists)
- CI variables installed on root repo
- Root repo pipeline status
- Branch protection status

## Webhook Design Notes

### GitLab free tier constraints

GitLab free tier supports project webhooks and pipeline triggers. The webhook-to-trigger pattern works without any premium features.

### Trigger URL format

```
https://gitlab.com/api/v4/projects/<root-project-id>/trigger/pipeline
  ?token=<trigger-token>
  &ref=main
```

### Alternative: GitLab CI `trigger` keyword

For projects on GitLab Premium+, managed repo pipelines can use the `trigger` keyword to directly trigger downstream root repo pipelines. The webhook approach works on all tiers.

## Critical — Repo Lists Must Include Root

Every script that iterates repos for a story — `integration-test.sh`, `report-shadow-status.sh`, `invalidate-story-status.sh` — MUST include the root repo alongside managed repos. The root repo is a constituent of every story (it delivers integration tests). Excluding it from any repo list creates a gap where the root MR can have stale status or bypass the integrity gate. Use a single `REPOS` variable derived from the topology, not a hardcoded list of managed repos.

## Notes

- This capability is idempotent — running it again updates existing webhooks and variables rather than creating duplicates
- The root repo pipeline uses `resource_group: distributed_merge` to serialise merge transactions
- CI variables are protected and masked
- `wire-orchestration` does NOT create managed repos — that's `scaffold-repo`
- `wire-orchestration` does NOT create the root repo — that's `bootstrap-root-repo`
- The compose strategy (Docker Compose + DinD) is the reference implementation. The methodology defines the contract (clone, build, start, health-check, teardown); the strategy implements it. See I-018 for the plugin boundary design.
- The `shadow:compose` and `validate:compose` jobs currently duplicate the compose logic. See I-017 for the DRY refactor plan (extract to a shared script or YAML anchors).
- Workflow rules prevent duplicate pipelines — trigger, MR, and main branch events are handled distinctly
- The embedded shell follows the same lifecycle as managed repos (install/build/test) plus inline compose validation. See I-012.
package/dist-m/m.md ADDED
@@ -0,0 +1,126 @@
# Methodology M — Steering

**Version:** 0.3.1

Methodology M is an AI-driven delivery method for distributed software systems in which a single user story spans multiple repos and multiple deployable units, and can only be verified in an integrated environment. The full specification lives in `methodology-m.md` at the repo root.

This directory (`.m/`) is the agent-neutral, machine-readable expression of the methodology. Any AI agent that can read markdown and call platform APIs can execute these capabilities. Agent-specific adapters (`.kiro/`, `.claude/`, etc.) provide thin wrappers for discovery — the intelligence lives here.

**Path convention:** All paths in M documents are relative to the repo root, not to the file they appear in.

## Key Files

- `methodology-m.md` — the full methodology specification (the paper)
- `.m/m.md` — this file: steering, capabilities, execution protocol
- `.m/capabilities/` — standalone playbooks for each M capability
- `.m/schemas/pat.schema.json` — JSON Schema for PAT.yaml files (story and sub-task)
- `.m/schemas/project.schema.json` — JSON Schema for the project.yaml topology manifest
- `.m/providers/provider-interface.md` — namespace contracts and resolution protocol
- `.m/providers/scm/gitlab.md` — GitLab SCM provider (reference implementation)

## When Working in This Repo

This is the methodology repo itself — not an M-type project. You are editing the methodology, its capabilities, and its provider implementations.

When the user asks you to:
- **Execute a capability** (e.g. "scaffold a repo") — read the capability file and provider file, then execute
- **Edit a capability** — modify `.m/capabilities/<name>/SKILL.md`
- **Edit a provider** — modify `.m/providers/scm/<provider>.md`
- **Discuss methodology concepts** — refer to `methodology-m.md`

## Capabilities

Each capability follows the [Agent Skills](https://agentskills.io) standard (`<name>/SKILL.md`). Read the full file before executing — the summaries below are for orientation only.

| Capability | File | Description |
|---|---|---|
| `setup-workspace` | [SKILL.md](capabilities/setup-workspace/SKILL.md) | Create a project workspace (group/org) on the SCM platform |
| `bootstrap-root-repo` | [SKILL.md](capabilities/bootstrap-root-repo/SKILL.md) | Create and seed the root repo with topology manifest and Story Zero |
| `generate-pats` | [SKILL.md](capabilities/generate-pats/SKILL.md) | Transform story acceptance criteria into PAT.yaml format |
| `decompose-story` | [SKILL.md](capabilities/decompose-story/SKILL.md) | Map story-level PATs to components, generate sub-tasks and readiness tracker |
| `scaffold-repo` | [SKILL.md](capabilities/scaffold-repo/SKILL.md) | Create and configure a managed repo from a sub-task file |
| `wire-orchestration` | [SKILL.md](capabilities/wire-orchestration/SKILL.md) | Connect managed repos to root repo orchestration (webhooks, CI, tokens) |
| `generate-acceptance-tests` | [SKILL.md](capabilities/generate-acceptance-tests/SKILL.md) | Compile PAT stubs into executable acceptance tests (CATs) |
| `tag-release` | [SKILL.md](capabilities/tag-release/SKILL.md) | Tag a managed repo at a version and update root repo topology |

## Execution Protocol

1. User requests a capability (by name or by describing the intent)
2. Agent reads the corresponding `.m/capabilities/<name>/SKILL.md` — the FULL file
3. For any namespaced function call (e.g. `scm.create_repo(...)`):
   a. Check `project.yaml` → `providers` to find the active provider
   b. Read `.m/providers/<category>/<provider>.md` for the concrete implementation
   c. Execute the platform-specific operation
4. Agent produces the report described in the capability file

No shortcuts. The capability files contain critical sequences (branch protection ordering, webhook flags, CI image requirements) that cannot be skipped.

## Configuration Resolution

Capabilities that generate artefacts (CI configs, test setups, templates) resolve their choices through a four-tier hierarchy:

```
Repo-level override → Project-level (project.yaml) → Org Config → M Core Config
```

See `methodology-m.md` Section 6a for the full strategy/plugin architecture. The capabilities define the *what*. The configuration hierarchy determines the *with what tools*.
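
A first-match-wins resolver over the four tiers can be sketched like so (the tier dicts and the `ci_image` key in the usage note are illustrative):

```python
def resolve_config(key, repo=None, project=None, org=None, core=None):
    """Return the first tier that defines `key`: repo -> project -> org -> core."""
    for tier in (repo or {}, project or {}, org or {}, core or {}):
        if key in tier:
            return tier[key]
    raise KeyError(f"{key!r} not defined in any configuration tier")
```

A repo-level override always wins; M Core Config is the fallback of last resort.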

## Provider Namespaces

Capabilities call platform operations via namespaced functions. Each namespace is backed by a provider selected in `project.yaml`.

| Namespace | Category | Provider interface |
|---|---|---|
| `scm.*` | Source code management | [provider-interface.md](providers/provider-interface.md) |

New namespaces (e.g. `test.cat.*`, `compose.*`) are added as the methodology evolves. See the provider interface doc for the full function contracts and the protocol for adding new namespaces/providers.

## Directory Structure

```
.m/
  m.md                      ← this file
  capabilities/
    <name>/SKILL.md         ← one per capability (Agent Skills standard)
  schemas/
    pat.schema.json         ← JSON Schema for PAT.yaml files
    project.schema.json     ← JSON Schema for project.yaml
  providers/
    provider-interface.md   ← namespace contracts and resolution protocol
    scm/gitlab.md           ← GitLab SCM provider (reference implementation)
```

Agent-specific steering (how to behave) lives in agent directories (`.claude/steering/`, `.kiro/steering/`). Steering templates for managed repos are embedded in the `scaffold-repo` capability doc.

## Schemas — Mandatory Validation

The `.m/schemas/` directory contains JSON Schema definitions for M artefacts. These are the formal contracts — capability docs describe *when* and *why* to generate an artefact, schemas define *what it must look like*.

| Schema | Validates | Used by |
|---|---|---|
| `pat.schema.json` | `*.pat.yaml` files (story-level and sub-task) | generate-pats, decompose-story, generate-acceptance-tests |
| `project.schema.json` | `project.yaml` topology manifest | bootstrap-root-repo, scaffold-repo, wire-orchestration |

**Agent rule:** When generating or modifying a PAT.yaml or project.yaml file, read the corresponding schema FIRST. Validate your output against the schema before committing. If a field is marked `required` in the schema, it must be present. If a value has an `enum` constraint or `pattern`, use only valid values. Do not invent fields not in the schema.
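
The three rules in the agent rule (required fields, enum values, no invented fields) can be sketched as a minimal pre-commit check; a real workflow would run a full JSON Schema validator, and the schema fragment in the usage note is a made-up example, not the real `project.schema.json`:

```python
def check_against_schema(doc, schema):
    """Minimal check of the agent rule on a flat object schema.

    Covers only `required`, per-property `enum`, and unknown fields;
    not a substitute for a full JSON Schema validator.
    """
    errors = []
    for field in schema.get("required", []):
        if field not in doc:
            errors.append(f"missing required field: {field}")
    props = schema.get("properties", {})
    for field, spec in props.items():
        if field in doc and "enum" in spec and doc[field] not in spec["enum"]:
            errors.append(f"{field}: {doc[field]!r} not in {spec['enum']}")
    unknown = sorted(set(doc) - set(props))
    if unknown:
        errors.append(f"fields not in schema: {unknown}")
    return errors
```

For example, against `{"required": ["name"], "properties": {"name": {}, "type": {"enum": ["referenced", "embedded"]}}}` (illustrative values), a valid component yields an empty error list and anything else yields one message per violation.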

## Relationship to Agent Runtimes

This directory is the canonical source. Agent-specific directories contain thin wrappers:

- **Kiro** (`.kiro/powers/m-power/`) — a Power that bundles these capabilities with MCP servers and hooks
- **Claude** (`.claude/skills/<name>/SKILL.md`) — thin skill wrappers pointing to `.m/capabilities/<name>/SKILL.md`
- **Any SKILL.md-compatible agent** — can consume these files directly or via lightweight adapters