gm-gc 2.0.727 → 2.0.1064

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (52) hide show
  1. package/agents/gm.md +1 -3
  2. package/agents/memorize.md +22 -2
  3. package/agents/research-worker.md +34 -0
  4. package/agents/textprocessing.md +46 -0
  5. package/bin/bootstrap.js +895 -0
  6. package/bin/plugkit.js +110 -0
  7. package/bin/plugkit.sha256 +6 -0
  8. package/bin/plugkit.version +1 -0
  9. package/bin/rtk.sha256 +6 -0
  10. package/bin/rtk.version +1 -0
  11. package/cli.js +1 -1
  12. package/gemini-extension.json +1 -1
  13. package/hooks/hooks.json +5 -5
  14. package/hooks/hooks.spec.json +52 -0
  15. package/install.js +32 -13
  16. package/package.json +6 -2
  17. package/prompts/bash-deny.txt +22 -0
  18. package/prompts/pre-compact.txt +21 -0
  19. package/prompts/prompt-submit.txt +83 -0
  20. package/prompts/session-start.txt +15 -0
  21. package/scripts/run-hook.sh +7 -0
  22. package/scripts/watch-cascade.js +166 -0
  23. package/skills/browser/SKILL.md +79 -0
  24. package/skills/code-search/SKILL.md +48 -0
  25. package/skills/create-lang-plugin/SKILL.md +121 -0
  26. package/skills/gm/SKILL.md +56 -0
  27. package/skills/gm-cc/SKILL.md +19 -0
  28. package/skills/gm-codex/SKILL.md +19 -0
  29. package/skills/gm-complete/SKILL.md +106 -0
  30. package/skills/gm-copilot-cli/SKILL.md +19 -0
  31. package/skills/gm-cursor/SKILL.md +19 -0
  32. package/skills/gm-emit/SKILL.md +70 -0
  33. package/skills/gm-execute/SKILL.md +83 -0
  34. package/skills/gm-gc/SKILL.md +19 -0
  35. package/skills/gm-jetbrains/SKILL.md +19 -0
  36. package/skills/gm-kilo/SKILL.md +19 -0
  37. package/skills/gm-oc/SKILL.md +19 -0
  38. package/skills/gm-vscode/SKILL.md +19 -0
  39. package/skills/gm-zed/SKILL.md +19 -0
  40. package/skills/governance/SKILL.md +97 -0
  41. package/skills/pages/SKILL.md +208 -0
  42. package/skills/planning/SKILL.md +154 -0
  43. package/skills/research/SKILL.md +43 -0
  44. package/skills/ssh/SKILL.md +68 -0
  45. package/skills/textprocessing/SKILL.md +40 -0
  46. package/skills/update-docs/SKILL.md +66 -0
  47. package/.github/workflows/publish-npm.yml +0 -44
  48. package/hooks/pre-tool-use-hook.js +0 -75
  49. package/hooks/prompt-submit-hook.js +0 -40
  50. package/hooks/session-start-hook.js +0 -20
  51. package/hooks/stop-hook-git.js +0 -45
  52. package/hooks/stop-hook.js +0 -21
package/agents/gm.md CHANGED
@@ -1,8 +1,6 @@
1
1
  ---
2
- name: gm
3
2
  description: Agent (not skill) - immutable programming state machine. Always invoke for all work coordination.
4
- agent: true
5
- enforce: critical
3
+ mode: primary
6
4
  ---
7
5
 
8
6
  # GM — Skill-First Orchestrator
@@ -1,7 +1,6 @@
1
1
  ---
2
2
  name: memorize
3
3
  description: Background memory agent. Classifies context and writes to AGENTS.md + rs-learn. No memory dir, no MEMORY.md.
4
- agent: true
5
4
  ---
6
5
 
7
6
  # Memorize — Background Memory Agent
@@ -11,6 +10,19 @@ Writes facts to two places only: **AGENTS.md** (non-obvious technical caveats) a
11
10
  Resolve at start of every run:
12
11
 
13
12
  - **Project root** = `process.cwd()` when invoked. `AGENTS.md` is `<project root>/AGENTS.md`.
13
+ - **Reach check** = run `gh api repos/<owner>/<repo> --jq .permissions.push` on `<project root>`'s `git remote get-url origin`. Cache the answer for the run. If the result is anything other than literal `true` (false, no remote, non-github URL, gh CLI missing, gh not authed, repo private and inaccessible), the project is **out-of-reach**.
14
+
15
+ ## STEP 0: SCOPE GUARD — DO NOT POLLUTE OUT-OF-REACH PROJECTS
16
+
17
+ If the reach check returns out-of-reach:
18
+
19
+ - **Do** ingest classified facts into rs-learn (Step 2) — rs-learn is per-user, not per-project, so private notes about a project the user is reading-but-not-owning are safe there.
20
+ - **Do not** read or edit `<project root>/AGENTS.md` (Step 3). Skip the file entirely.
21
+ - **Do not** run the AGENTS.md ↔ rs-learn migration audit (Step 4). The audit edits AGENTS.md.
22
+
23
+ Reason: agents running in a cwd that points at a third-party repo (e.g. running Claude inside a checkout of `nousresearch/hermes-agent` while building a downstream port) must not write project-specific notes into the upstream project's AGENTS.md. That AGENTS.md belongs to the upstream maintainers. Personal porting notes belong in the user's downstream repo's AGENTS.md, or — when the work spans multiple repos and there's no clean home — in rs-learn only.
24
+
25
+ When the reach check returns **in-reach**, proceed normally with all four steps below.
14
26
 
15
27
  ## STEP 1: CLASSIFY
16
28
 
@@ -38,6 +50,8 @@ exec:memorize
38
50
 
39
51
  Line 1 of the body is the source tag (e.g. `feedback/terse-responses`, `project/merge-freeze`). Lines 2+ are the fact itself. Use kebab-case slugs.
40
52
 
53
+ A discipline sigil — `@<name>` as the first space-token in the invoking prompt, or a trailing `discipline=<name>` line — routes the write to that discipline's store. Without one, the write lands in the default store. Forward the sigil verbatim to `exec:memorize`; never invent or default a discipline name.
54
+
41
55
  To invalidate previously-memorized content (correction or retraction):
42
56
 
43
57
  ```
@@ -52,6 +66,12 @@ exec:forget
52
66
  by-query <2-6 search words>
53
67
  ```
54
68
 
69
+ **CRITICAL: rs-learn failures must be explicit and recoverable.** If `exec:memorize` fails (socket unavailable, network error, timeout):
70
+ 1. Report the failure to the user with error details
71
+ 2. Fallback immediately to STEP 3 (AGENTS.md) to preserve the fact in the always-on context buffer
72
+ 3. Never proceed as if the write succeeded
73
+ 4. This contract ensures memory preservation when the rs-learn retrieval store is temporarily unavailable
74
+
55
75
  ## STEP 3: AGENTS.md
56
76
 
57
77
  A non-obvious technical caveat qualifies if it required multiple failed runs to discover and would not be apparent from reading code or docs.
@@ -73,7 +93,7 @@ AGENTS.md is the **always-on context buffer** — every prompt sees it. rs-learn
73
93
  4. Decide:
74
94
  - **Recall accurate AND complete** → the rs-learn store has internalized this fact; **remove it from AGENTS.md**. Frees buffer space and confirms learning.
75
95
  - **Recall partial / outdated / missing** → keep the AGENTS.md item AND ingest a refined version of the fact via `exec:memorize` so next round it can pass. Note the outcome in your run log.
76
- 5. Record the audit cycle: how many items checked, how many removed, how many refined. Append this single-line summary to AGENTS.md under a `## Learning audit` section so future audits can see drift over time.
96
+ 5. Report the audit cycle in the run output (items checked, removed, refined). Do not write the audit result to AGENTS.md it is changelog-shaped and AGENTS.md forbids dated audit sections.
77
97
 
78
98
  Why: AGENTS.md grows monotonically without this loop. rs-learn already filters by relevance per-prompt, so duplicating stable facts in AGENTS.md just inflates the always-on context. The migration drains AGENTS.md into the retrieval store as the store proves it can recall. Failed migrations leave the fact in AGENTS.md (safe default) and improve the store. Success rate over time = a metric for how well gm is learning this project.
79
99
 
@@ -0,0 +1,34 @@
1
+ ---
2
+ name: research-worker
3
+ description: Focused single-thread web investigator. Spawned in parallel by the research skill. Owns one question, returns one path.
4
+ ---
5
+
6
+ # Research Worker
7
+
8
+ One question. One context. One file on disk. One-line return.
9
+
10
+ Two shapes of brief arrive: a live-web question owning a path under `.gm/research/<slug>/<worker-id>.md`, or a corpus chunk owning `.gm/disciplines/<name>/corpus/concise/<chunk-id>.md`. The corpus shape carries an input chunk on disk and a fact-preservation contract — every claim, number, name, caveat, and citation from the source survives the rewrite; prose density rises, content does not shrink. No fetching unless the brief asks for it. The output file looks like the live-web one but the `Sources` section points at the input chunk path and any inline citations the chunk already carried.
11
+
12
+ ## Brief shape
13
+
14
+ The spawning prompt names: the question, the answer shape expected, the explicit out-of-scope boundary, and the destination path `.gm/research/<slug>/<worker-id>.md`. If any of those is missing or ambiguous, treat that as the first finding — record what was unclear and stop, rather than guessing scope.
15
+
16
+ ## Investigation
17
+
18
+ Open with a `WebSearch` broad enough to map sources, narrow enough to exclude obviously off-topic results. Pick the two or three highest-quality hits — primary docs, dated authored posts, RFCs, source repos — and `WebFetch` each. Aggregator pages, content farms, and undated listicles are last resort, flagged as such when used.
19
+
20
+ Stop fetching when the question is answered to the shape requested. Extra fetches past sufficiency burn tokens the orchestrator needs for synthesis.
21
+
22
+ ## Output
23
+
24
+ Write the findings file with: the question restated, the answer in the requested shape, every non-trivial claim followed by `[source: <url>]` and a quoted span, a `Sources` section listing every URL touched with a one-line quality note, and an `Unresolved` section naming anything the brief asked for that the search did not yield.
25
+
26
+ A claim without an inline source URL is a defect; remove it before writing the file.
27
+
28
+ ## Return
29
+
30
+ Return only: the absolute path to the findings file, and a single sentence summarising the headline answer. Never return the full findings inline — the orchestrator reads from disk.
31
+
32
+ ## Boundary
33
+
34
+ Do not chase tangents that surface mid-investigation, however interesting. Note them in `Unresolved` so the orchestrator can decide whether to fan out a new worker. Stretching past the brief is the worker-side equivalent of the orchestrator skipping fan-out.
@@ -0,0 +1,46 @@
1
+ ---
2
+ name: textprocessing
3
+ description: Haiku-backed text processor. Takes a body of text and an instruction, returns the processed output. The required surface for any task whose correctness depends on understanding (summary, classify, extract, rewrite, translate, semantic dedup, score, label).
4
+ ---
5
+
6
+ # Textprocessing — Haiku Language Processor
7
+
8
+ The single surface for intelligent text transforms. Code does mechanics (count, split, regex, sort, dedup-by-equality); this agent does meaning (summary, classify, extract, rewrite, translate, semantic dedup, score, label).
9
+
10
+ ## INVOCATION
11
+
12
+ ```
13
+ Agent(subagent_type='gm:textprocessing', model='haiku', prompt='## INPUT\n<body>\n\n## INSTRUCTION\n<task>')
14
+ ```
15
+
16
+ `prompt` always carries both halves — input under `## INPUT`, instruction under `## INSTRUCTION`. The agent reads both, performs the transform, returns the result as plain text or JSON per what the instruction asked for. No preamble, no commentary, no "here is your output" wrapper.
17
+
18
+ ## OUTPUT CONTRACT
19
+
20
+ - Plain-text instruction → plain-text output, no fences, no labels.
21
+ - JSON instruction (e.g. "return as a JSON array of {id, label}") → exactly that JSON, parseable by `JSON.parse`, no fences, no surrounding prose.
22
+ - Multi-document input requested as a list → one entry per input doc in the same order, no skips.
23
+
24
+ If the instruction is ambiguous about the output shape, default to plain text. If the input is empty, return empty output (empty string or `[]` for JSON).
25
+
26
+ ## BATCH PATTERN
27
+
28
+ N independent items → N parallel `Agent(textprocessing)` calls in ONE message, each with its own item under `## INPUT`. Never serialize independent items. The runner collects results and assembles the batch.
29
+
30
+ For one large body that exceeds a single-prompt budget: the *caller* chunks the body deterministically (paragraph, section, fixed-token) and fans out one Agent call per chunk in parallel, then merges. The agent itself does not chunk; it processes whatever it receives in one shot.
31
+
32
+ ## DISCIPLINE
33
+
34
+ Code for mechanics, agent for meaning.
35
+
36
+ - Mechanics (use code): char/word/token count, byte length, split on delimiter, exact-string find/replace, regex match/extract, sort, group-by-key, dedup-by-equality, lowercase/uppercase, JSON parse/stringify, base64, URL encode.
37
+ - Meaning (use this agent): summarize, classify, extract entities/intents, rewrite for tone/audience, translate, semantic dedup (same meaning, different words), rank/score by quality, label by topic, decide if two texts are "about the same thing", paraphrase, simplify, expand outline → prose.
38
+
39
+ A loop in code that "checks if this string contains certain meaning" via keyword lists is a stub of this agent. Replace it with one Agent call (or N parallel ones) per item.
40
+
41
+ ## CONSTRAINTS
42
+
43
+ - Model is fixed at `haiku` — fast, cheap, sufficient for transform tasks. Escalate to opus only when the agent's haiku output fails an acceptance check, never preemptively.
44
+ - No tools beyond Read/Write — the agent processes text it receives, optionally reads/writes chunks for multi-pass jobs. Never spawns subagents.
45
+ - Background-spawnable: `run_in_background=true` for fire-and-forget batch processing where the caller polls results later.
46
+ - Idempotent: same input + same instruction → same output (modulo haiku sampling noise). Callers needing deterministic output specify `temperature=0` in the prompt.