gm-qwen 2.0.885 → 2.0.887

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,47 @@
+ ---
+ name: textprocessing
+ description: Haiku-backed text processor. Takes a body of text and an instruction, returns the processed output. The required surface for any task whose correctness depends on understanding (summary, classify, extract, rewrite, translate, semantic dedup, score, label).
+ agent: true
+ ---
+
+ # Textprocessing — Haiku Language Processor
+
+ The single surface for intelligent text transforms. Code does mechanics (count, split, regex, sort, dedup-by-equality); this agent does meaning (summary, classify, extract, rewrite, translate, semantic dedup, score, label).
+
+ ## INVOCATION
+
+ ```
+ Agent(subagent_type='gm:textprocessing', model='haiku', prompt='## INPUT\n<body>\n\n## INSTRUCTION\n<task>')
+ ```
+
+ `prompt` always carries both halves — input under `## INPUT`, instruction under `## INSTRUCTION`. The agent reads both, performs the transform, and returns the result as plain text or JSON per what the instruction asked for. No preamble, no commentary, no "here is your output" wrapper.
+
+ ## OUTPUT CONTRACT
+
+ - Plain-text instruction → plain-text output, no fences, no labels.
+ - JSON instruction (e.g. "return as a JSON array of {id, label}") → exactly that JSON, parseable by `JSON.parse`, no fences, no surrounding prose.
+ - Multi-document input requested as a list → one entry per input doc in the same order, no skips.
+
+ If the instruction is ambiguous about the output shape, default to plain text. If the input is empty, return empty output (empty string or `[]` for JSON).
+
+ ## BATCH PATTERN
+
+ N independent items → N parallel `Agent(textprocessing)` calls in ONE message, each with its own item under `## INPUT`. Never serialize independent items. The runner collects results and assembles the batch.
+
+ For one large body that exceeds a single-prompt budget: the *caller* chunks the body deterministically (paragraph, section, fixed-token) and fans out one Agent call per chunk in parallel, then merges. The agent itself does not chunk; it processes whatever it receives in one shot.
+
+ ## DISCIPLINE
+
+ Code for mechanics, agent for meaning.
+
+ - Mechanics (use code): char/word/token count, byte length, split on delimiter, exact-string find/replace, regex match/extract, sort, group-by-key, dedup-by-equality, lowercase/uppercase, JSON parse/stringify, base64, URL encode.
+ - Meaning (use this agent): summarize, classify, extract entities/intents, rewrite for tone/audience, translate, semantic dedup (same meaning, different words), rank/score by quality, label by topic, decide if two texts are "about the same thing", paraphrase, simplify, expand outline → prose.
+
+ A loop in code that "checks if this string contains certain meaning" via keyword lists is a stub of this agent. Replace it with one Agent call (or N parallel ones) per item.
+
+ ## CONSTRAINTS
+
+ - Model is fixed at `haiku` — fast, cheap, sufficient for transform tasks. Escalate to opus only when the agent's haiku output fails an acceptance check, never preemptively.
+ - No tools beyond Read/Write — the agent processes text it receives, optionally reads/writes chunks for multi-pass jobs. Never spawns subagents.
+ - Background-spawnable: `run_in_background=true` for fire-and-forget batch processing where the caller polls results later.
+ - Idempotent: same input + same instruction → same output (modulo haiku sampling noise). Callers needing deterministic output specify `temperature=0` in the prompt.
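The caller-side chunking described under BATCH PATTERN above can be sketched as follows. This is an illustrative sketch only, not part of the package diff: `processChunk` is a hypothetical stand-in for the real `Agent(subagent_type='gm:textprocessing', ...)` dispatch, mocked here so the fan-out and merge logic runs standalone.

```javascript
// Hypothetical stand-in for one Agent(textprocessing) call per chunk.
async function processChunk(chunk, instruction) {
  // A real implementation would dispatch to the Haiku-backed agent.
  return `[${instruction}] ${chunk.trim()}`;
}

// Deterministic paragraph chunking: the caller splits, the agent never does.
function chunkByParagraph(body) {
  return body.split(/\n{2,}/).filter((p) => p.trim().length > 0);
}

async function processLargeBody(body, instruction) {
  const chunks = chunkByParagraph(body);
  // Fan out one call per chunk in parallel, then merge in chunk order.
  const results = await Promise.all(
    chunks.map((c) => processChunk(c, instruction))
  );
  return results.join("\n\n");
}
```

The merge step here is a plain join; per the skill's own guidance, a cross-chunk synthesis would instead go through one final reducer call.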
@@ -1,6 +1,6 @@
- f0a217d24261d6fee31b3f09347da19e2eda76f9b558adf607240cefcc131d01 plugkit-win32-x64.exe
- 24bfc958f0b0682c56f5531a8e8c42f047f228bf9592405ff6802a0371620379 plugkit-win32-arm64.exe
- 004d0ea64c0d20e0fc15005be2934995a9f21b5faf2d6f3e46815f1f56a3ea2f plugkit-darwin-x64
- 8281cc89e37d683cb798de88ede8214946b4d87d08209a06f5af3d00aff3ab37 plugkit-darwin-arm64
- 60c0f7d606bdb7fc38c8d62919204eacc736782223d5775a3329c3a788e50a67 plugkit-linux-x64
- 07bfac610e89a4923aa43479ce14e2d060bf95762d2144a32787ee3698aa902d plugkit-linux-arm64
+ dd7686c9422a2960d73cfc6f2804ae13fd2773cc7f2e78358627e4c1f5d901a0 plugkit-win32-x64.exe
+ 5b42234b47c73fc249475e02dd5fde296f891350193508d7c5d14ab8f6f7e6c6 plugkit-win32-arm64.exe
+ eb8079eeebd1371b26ee95a57aec4497cac0225357e48ae71b2a26d1ce0a51ba plugkit-darwin-x64
+ 89dc9f1c32c4ea07a609bcae08049b34b63942f83000c89f417e4e18a466cadb plugkit-darwin-arm64
+ 479610bfe5b77b83e669c4ef09912feb089fcf455af087f1c75607393ea7536b plugkit-linux-x64
+ f8576bd7bd9e4aec25576925da3e63382f6d8da347f0a211e730b17d3dcc9d7c plugkit-linux-arm64
@@ -1 +1 @@
- 0.1.294
+ 0.1.295
package/bin/rtk.sha256 CHANGED
@@ -1,5 +1,5 @@
- 45001a4384331c752b20937d2918867717655e1f836a7da626cfd813abd6b828 rtk-win32-x64.exe
- 043a0438b9b28e50db56187d0e2e8e1833be674f45db24398cf3892f48f26004 rtk-win32-arm64.exe
+ f574e25f64f93bc59906afaa50ef8f4182668ac971553437e4aeb8001c3e91e6 rtk-win32-x64.exe
+ 54b255f0d43cfe0c8f22573439a19de9300877c2c7bb3cd1028b1feadcab1743 rtk-win32-arm64.exe
  e89fdf402c28796b510587a8b0fe046438b5b24d49533d1a2339a48aecae35e9 rtk-darwin-x64
  2b203fd380f5782b5489eb016e34e3dbf848272a7fadf36b39bce6cfd9a3005c rtk-darwin-arm64
  0da9950b859c7a2693aaf6c169f05f9b8965508ba1f23f1547e63d5fa988749e rtk-linux-x64
package/gm.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "gm",
- "version": "2.0.885",
+ "version": "2.0.887",
  "description": "State machine agent with hooks, skills, and automated git enforcement",
  "author": "AnEntrypoint",
  "license": "MIT",
@@ -23,5 +23,5 @@
  "publishConfig": {
  "access": "public"
  },
- "plugkitVersion": "0.1.294"
+ "plugkitVersion": "0.1.295"
  }
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "gm-qwen",
- "version": "2.0.885",
+ "version": "2.0.887",
  "description": "State machine agent with hooks, skills, and automated git enforcement",
  "author": "AnEntrypoint",
  "license": "MIT",
@@ -53,6 +53,8 @@ Asking is permitted only as a last resort, when the next action is destructive-i
 
  The size of the task, the cost of context, and the duration of CI are never grounds to ask.
 
+ **MEANING THROUGH TEXTPROCESSING — HARD RULE.** Code does mechanics; the textprocessing skill (Haiku-backed) does meaning. Anything whose correctness depends on *understanding* the text — summarize, classify, extract intent/entities, rewrite, translate, semantic dedup, rank by quality, label by topic, decide if two texts mean the same thing, paraphrase — routes through `Agent(subagent_type='gm:textprocessing', model='haiku', prompt='## INPUT\n<body>\n\n## INSTRUCTION\n<task>')`. N independent items → N parallel Agent calls in ONE message. Mechanics — count, split, regex, sort, dedup-by-equality, JSON parse, hash — stay in code. The bar: would a human have to *read and understand* the text to do this correctly? Yes → textprocessing skill. No → code. A keyword-list / regex-on-meaning-phrases / similarity-ratio loop in code that decides semantic questions is a stub of this skill — replace it with the Agent call.
+
  **MAXIMIZE THE SCOPE — HARD RULE.** When the request admits a deeper, broader, or more thorough shape that is reachable from this session, take it. The question "are you sure you want to do more work?" is the agent's question to itself, not the user's — and the answer is always yes when the deeper shape matches the spirit of the ask. "Deeply integrate", "do it right", "all of them", "every X", "across the whole Y", and silence between options all read as *go wider*, not narrower. Pick the maximum scope the obvious read of the request supports, declare the read in one line so the user can interrupt, and execute. Restraint that ships a smaller-than-necessary slice is the same forced closure as refusal — the user got less than they asked for and now waits for a follow-up turn to ask for the rest. The bar for *not* expanding is the same destructive-irreversible-and-not-covered bar that gates asking; below that bar, larger always beats smaller.
 
  ## MAXIMAL COVER — HARD RULE
@@ -0,0 +1,48 @@
+ ---
+ name: textprocessing
+ description: The required surface for any text task whose correctness depends on understanding. Code does mechanics; this skill does meaning. Invoked via Agent(subagent_type='gm:textprocessing', model='haiku', ...).
+ allowed-tools: Agent, Read, Write, Edit, Bash, Skill
+ ---
+
+ # Textprocessing — Meaning Goes Through Haiku
+
+ ## INVOCATION
+
+ ```
+ Agent(subagent_type='gm:textprocessing', model='haiku', prompt='## INPUT\n<body>\n\n## INSTRUCTION\n<task>')
+ ```
+
+ Background fire-and-forget: add `run_in_background=true`. Batch: N parallel `Agent` calls in ONE message, one per item.
+
+ ## WHEN TO USE — HARD RULE
+
+ Mechanics in code. Meaning through this skill.
+
+ **Mechanics (code, not skill)**: char/word/token/line count, byte length, split on delimiter, exact-string find/replace, regex match/extract, sort, group-by-key, dedup-by-equality, lowercase/uppercase, JSON parse/stringify, base64, URL encode, hash, diff, format/pretty-print.
+
+ **Meaning (skill, not code)**: summarize, classify, extract entities/intents, rewrite for tone/audience, translate, semantic dedup (same meaning, different words), rank/score by quality, label by topic, decide if two texts are "about the same thing", paraphrase, simplify, expand outline → prose, headline-from-body, body-from-headline, fact-from-passage, sentiment, toxicity, relevance, similarity-by-meaning.
+
+ A code loop that decides meaning by keyword lists, regex on phrases like "important", or string-similarity ratios *is a stub of this skill*. Replace it with one (or N parallel) `Agent(textprocessing)` calls.
+
+ The bar: would a human have to *read and understand* the text to do this correctly? Yes → skill. No → code.
+
+ ## BATCH PATTERN
+
+ Independent items run in parallel — one `Agent` call per item, all in ONE message. The runner collects them with `Promise.allSettled`. Sequential calls are wasteful and forbidden when items don't depend on each other.
+
+ For one large body exceeding a single-prompt budget: the *caller* chunks deterministically (by paragraph, section, fixed token count) and fans out one Agent per chunk, merges with a final reducer Agent if cross-chunk synthesis is needed. The agent itself never chunks — it processes whatever it receives in one shot.
+
+ ## OUTPUT CONTRACT
+
+ Plain-text instruction → plain-text output, no fences, no labels.
+ JSON instruction (e.g. "return as a JSON array of `{id, label}`") → exactly that JSON, parseable by `JSON.parse`.
+ Multi-document input requested as a list → one entry per input doc in the same order.
+
+ If the prompt was ambiguous about output shape, the agent defaults to plain text. Empty input → empty output.
+
+ ## CONSTRAINTS
+
+ - Model fixed at `haiku`. Cheap and fast. Escalate to opus only when haiku output fails an acceptance check, never preemptively.
+ - One transform per call. Don't ask one prompt to "summarize AND classify AND translate" — three calls in parallel is faster and the outputs are cleaner.
+ - Idempotent: same input + same instruction → same output (modulo sampling). Callers needing strict determinism specify `temperature=0` in the prompt.
+ - Never wraps output in commentary, explanation, or "here is your output". The output IS the deliverable.
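The batch pattern and ordering guarantee described in the skill above can be sketched as follows. This is an illustrative sketch, not part of the package diff: `callTextprocessing` is a hypothetical wrapper around the real `Agent(subagent_type='gm:textprocessing', ...)` call, mocked here so the fan-out and order preservation can run standalone.

```javascript
// Hypothetical stand-in for one Agent(textprocessing) call per item.
async function callTextprocessing(body, instruction) {
  if (body.length === 0) return ""; // empty input -> empty output, per contract
  return `${instruction}:${body}`;
}

async function batchLabel(items, instruction) {
  // N independent items -> N parallel calls in one batch. Promise.allSettled
  // preserves input order, so the result array has one entry per input doc,
  // no skips, matching the skill's multi-document output contract.
  const settled = await Promise.allSettled(
    items.map((item) => callTextprocessing(item, instruction))
  );
  return settled.map((r) => (r.status === "fulfilled" ? r.value : null));
}
```

`allSettled` rather than `all` keeps a single failed item from dropping the rest of the batch; the failed slot surfaces as `null` in its original position.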