@femtomc/mu-agent 26.3.4 → 26.3.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -45,7 +45,7 @@ They are organized as category meta-skills plus subskills:
  - `setup-discord`
  - `setup-telegram`
  - `setup-neovim`
- - `writing`
+ - `technical-writing`
 
  Starter skills are version-synced by CLI bootstrap. Initial bootstrap seeds missing
  skills; bundled-version changes refresh installed starter skill files.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@femtomc/mu-agent",
- "version": "26.3.4",
+ "version": "26.3.5",
  "description": "Shared operator runtime for mu assistant sessions and serve extensions.",
  "keywords": [
  "mu",
@@ -24,7 +24,7 @@
  "themes/**"
  ],
  "dependencies": {
- "@femtomc/mu-core": "26.3.4",
+ "@femtomc/mu-core": "26.3.5",
  "@mariozechner/pi-agent-core": "^0.54.2",
  "@mariozechner/pi-ai": "^0.54.2",
  "@mariozechner/pi-coding-agent": "^0.54.2",
@@ -263,4 +263,4 @@ For wall-clock schedules (one-shot, interval, cron-expression), use `crons`.
  - **`setup-discord`**
  - **`setup-telegram`**
  - **`setup-neovim`**
- - Technical writing/docs polish: **`writing`**
+ - Technical writing/docs polish: **`technical-writing`**
@@ -0,0 +1,289 @@
+ ---
+ name: technical-writing
+ description: "Produces clear, argument-driven technical prose. Use when drafting or reviewing systems papers, design docs, READMEs, PR descriptions, error messages, API references, or other technical communication."
+ ---
+
+ # technical writing
+
+ Use this skill when asked to write, edit, or review technical prose. This includes research/systems papers, design docs, RFCs, READMEs, PR descriptions, commit messages, error messages, API references, and postmortems.
+
+ This skill emphasizes argument quality (why the work matters, why choices are defensible, what evidence supports claims), not just sentence polish.
+
+ ## Source foundations
+
+ This guidance synthesizes:
+
+ - Dan Cosley, *Writing more useful systems papers, maybe* (reader value + "why" over "what")
+ - Kayvon Fatahalian, *What Makes a (Graphics) Systems Paper Beautiful* (goals/constraints, organizing insight, design decisions, evaluation causality)
+ - Daniel Ritchie, *Three-phase Paper Writing* (phase-based drafting and feedback loops)
+ - Roy Levin and David Redell, *How (and How Not) to Write a Good Systems Paper* (originality/reality/lessons/choices/context/focus/presentation rubric)
+
+ ## Contents
+
+ - [Core contract](#core-contract)
+ - [Core argument spine](#core-argument-spine)
+ - [Writing workflows](#writing-workflows)
+ - [Section checklists (papers and deep technical docs)](#section-checklists-papers-and-deep-technical-docs)
+ - [Common crash landings](#common-crash-landings)
+ - [Editing and review workflow](#editing-and-review-workflow)
+ - [Evaluation scenarios](#evaluation-scenarios)
+ - [Quality bar](#quality-bar)
+
+ ## Core contract
+
+ 1. **Reader value over author chronology**
+ - Do not write "what I did" narratives.
+ - Write for what the reader should understand, decide, or do after reading.
+
+ 2. **Lead with why**
+ - State why the problem matters before deep implementation detail.
+ - Make stakes explicit: who benefits, what improves, what becomes possible.
+
+ 3. **Define problem shape explicitly**
+ - State goals, non-goals, constraints, and assumptions.
+ - If these are unclear, the design cannot be evaluated fairly.
+
+ 4. **Name the central insight**
+ - State the organizing principle in plain language.
+ - A system artifact is not the contribution by itself; the transferable insight is.
+
+ 5. **Explain choices, not just outcomes**
+ - Highlight key design decisions and why they were chosen.
+ - Discuss meaningful alternatives and tradeoffs.
+ - Separate core decisions from incidental implementation details.
+
+ 6. **Attach evidence to claims**
+ - For each major claim, provide concrete evidence.
+ - Evaluate whether the proposed decisions caused the observed results.
+ - Prefer claim-by-claim validation over a single undifferentiated benchmark dump.
+
+ 7. **State lessons and limits**
+ - Tell readers what generalizes and what does not.
+ - Be explicit about assumptions and boundary conditions.
+
+ 8. **Avoid revisionist history**
+ - Do not present a magically linear path if work was iterative.
+ - Briefly documenting failed paths and surprises increases credibility and usefulness.
+
+ 9. **Write for scannability and precision**
+ - Strong headings, short paragraphs, concrete nouns, consistent terminology.
+ - Define terms before use; minimize forward references.
+
+ 10. **Polish matters**
+ - Clear grammar, correct spelling, and coherent figure captions are part of technical quality, not cosmetics.
+
+ ## Core argument spine
+
+ Use this skeleton for most substantial technical writing:
+
+ 1. **Problem and stakes**: What problem exists, and why should this reader care now?
+ 2. **Gap**: Why current approaches are insufficient under real constraints.
+ 3. **Insight**: The key idea / organizing observation.
+ 4. **Approach**: What was built/proposed and the major decisions.
+ 5. **Evidence**: What data, experiments, deployments, or analysis support each claim.
+ 6. **Implications**: What new capability or practical outcome this enables.
+ 7. **Limits**: Assumptions, non-goals, failure modes, and open questions.
+
+ If a section cannot map to this spine, it is often off-topic, under-motivated, or prematurely detailed.
+
+ ## Writing workflows
+
+ ### A) Systems/research paper workflow (phase-based)
+
+ #### Phase 0: contribution test (before full drafting)
+
+ - Can the central idea be stated in one short paragraph?
+ - Is the problem specific and meaningful?
+ - Is the contribution significant enough for the venue?
+ - Is related work understood deeply enough to establish novelty?
+ - Is the work implemented or otherwise justified at the level needed for credibility?
+
+ If these answers are weak, improve the work/story before expanding prose.
+
+ #### Phase I: section-level outline
+
+ Use a stable scaffold:
+
+ 1. Introduction
+ 2. Related Work / Context
+ 3. Approach Overview
+ 4. Method / Design sections
+ 5. Results / Evaluation
+ 6. Discussion / Limitations / Future Work
+
+ At outline time:
+ - List explicit contributions in the introduction.
+ - Name evaluation subsections early (for example: ablations, latency, usability, robustness).
+ - Capture known limitations as a running list.
+
+ #### Phase II: sentence-level skeleton
+
+ - Write one line per intended sentence.
+ - Focus on information content, not elegance.
+ - Ensure each section’s opening paragraph states what is in the section and why it matters.
+ - For every important design decision, add explicit justification and mention alternatives.
+ - Draft figure/table placeholders early so missing evidence is visible.
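+
+ For example, a skeleton fragment for a hypothetical results subsection might read:
+
+ ```
+ - Claim: request batching reduces median latency.
+ - Evidence: latency table across three payload sizes.
+ - Caveat: gains shrink for small payloads.
+ ```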
+
+ #### Phase III: polish and technical precision
+
+ - Convert skeleton text into clear prose.
+ - Tighten wording, remove repetition, standardize terminology.
+ - Upgrade captions so figures stand alone.
+ - Add citations and math formatting.
+ - Finalize abstract once argument and evidence are stable.
+
+ Note: drafting a short contribution summary early is useful; final abstract text should still be revised late.
+
+ #### Phase IV+: feedback loops
+
+ - Early feedback: argument, framing, contribution clarity.
+ - Late feedback: explanation gaps, ambiguity, wording, factual correctness.
+ - Iterate until reviewers can identify contributions and evidence without guessing.
+
+ ### B) Design docs and RFCs
+
+ Recommended structure:
+
+ 1. Problem context and impact
+ 2. Goals, non-goals, and constraints
+ 3. Proposed design and key decisions
+ 4. Alternatives considered (and why rejected)
+ 5. Validation plan and success metrics
+ 6. Rollout/migration plan
+ 7. Risks, failure modes, and mitigations
+ 8. Open questions
+
+ Treat design docs as decision records, not implementation diaries.
+
+ ### C) PR descriptions and commit messages
+
+ Use this order:
+ 1. **What changed**
+ 2. **Why this change was needed**
+ 3. **How to verify** (exact commands/tests)
+ 4. **Risk / rollout / breaking changes**
+
+ Keep summaries imperative and concrete.
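+
+ For example (hypothetical project details):
+
+ ```
+ Add retry with backoff to webhook delivery
+
+ Why: transient 5xx responses from subscribers currently drop events.
+ Verify: run the webhook delivery test suite; induce a 503 with a mock subscriber.
+ Risk: low; retries are capped at 3 attempts.
+ ```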
+
+ ### D) Error messages and user-facing diagnostics
+
+ Include:
+ 1. What happened
+ 2. Why (if known)
+ 3. What the user can do next
+ 4. Context IDs/paths/log pointers
+
+ Bad: "Operation failed"
+
+ Better:
+ ```
+ Error: Could not load config from /etc/mu/config.json.
+ Cause: JSON parse error at line 48 (trailing comma).
+ Fix: Remove the trailing comma and re-run `mu control reload`.
+ Details: parser_error_code=EJSON_TRAILING_COMMA
+ ```
+
+ ## Section checklists (papers and deep technical docs)
+
+ ### Introduction
+
+ - Does it clearly establish problem importance?
+ - Are constraints and context realistic?
+ - Are explicit contributions listed?
+ - Can a busy reader understand the paper’s value from this section alone?
+
+ ### Related work / context
+
+ - Is comparison explicit (similarities and differences), not name-dropping?
+ - Are prior works treated respectfully and accurately?
+ - Does this section sharpen the novelty claim?
+
+ ### Design / approach
+
+ - Are key decisions distinguished from implementation detail?
+ - Are alternatives and tradeoffs discussed?
+ - Are assumptions explicit?
+ - Is there a clear organizing principle?
+
+ ### Evaluation
+
+ - Is each major claim tested directly?
+ - Do results explain *why* outcomes occurred, not only *that* they occurred?
+ - Are baselines/comparators fair and clearly described?
+ - Are limitations and threats to validity disclosed?
+
+ ### Discussion / conclusion
+
+ - Are lessons explicit and transferable?
+ - Are boundaries and non-generalizable aspects named?
+ - Are future directions grounded in observed limits (not generic filler)?
+
+ ## Common crash landings
+
+ - Leading with implementation detail before problem significance.
+ - "Summer vacation" narrative: chronology without transferable lessons.
+ - Listing features instead of defending decisions.
+ - Claiming novelty without explicit comparison to prior work.
+ - Hiding assumptions or omitting non-goals.
+ - Evaluating only aggregate outcomes while ignoring causal design claims.
+ - Presenting a cleaned-up fictional process (revisionist history).
+ - Related-work sections that only attack others or only list citations.
+ - Forward-reference overload and undefined terms.
+ - Sloppy grammar/formatting that signals weak rigor.
+
+ ## Editing and review workflow
+
+ When reviewing existing prose:
+
+ 1. **Structural pass**
+ - Is the document organized around reader questions?
+ - Are sections ordered by decision value (why -> what -> evidence -> implications)?
+
+ 2. **Argument pass**
+ - Highlight major claims.
+ - Verify each claim has evidence and scope.
+ - Flag assertions without backing.
+
+ 3. **Design-decision pass**
+ - Confirm key decisions are explicit.
+ - Check whether alternatives and tradeoffs are discussed.
+
+ 4. **Evidence pass**
+ - Validate that evaluation supports claimed contributions.
+ - Check reproducibility details (versions, commands, datasets, parameters).
+
+ 5. **Language pass**
+ - Convert vague phrases to measurable statements.
+ - Remove redundancy and filler.
+ - Ensure consistent terminology and active voice where useful.
+
+ 6. **Final polish**
+ - Verify figures/tables/captions stand on their own.
+ - Fix grammar/spelling/format consistency.
+ - Read key sections aloud for clarity.
+
+ ## Evaluation scenarios
+
+ 1. **Systems paper draft review**
+ - Prompt: user asks why reviews say "interesting system, weak paper".
+ - Expected: identify missing problem framing, unclear central insight, absent design-rationale discussion, or evaluation-claim mismatch.
+
+ 2. **Design doc quality upgrade**
+ - Prompt: user provides an implementation-heavy RFC.
+ - Expected: restructure around goals/constraints, key decisions, alternatives, and validation plan.
+
+ 3. **PR description improvement**
+ - Prompt: user shares a vague PR summary.
+ - Expected: rewrite into what/why/how-to-verify/risk with concrete commands.
+
+ 4. **Diagnostic message rewrite**
+ - Prompt: user shares generic errors.
+ - Expected: produce actionable what/why/fix/details messages with identifiers and next steps.
+
+ ## Quality bar
+
+ - Reader can state the problem, insight, and contribution after a quick skim.
+ - Major claims are backed by explicit evidence.
+ - Key decisions and tradeoffs are visible.
+ - Limits and assumptions are acknowledged honestly.
+ - Prose is clear, concrete, and respectful of reader time.
@@ -1,180 +0,0 @@
- ---
- name: writing
- description: "Crafts clear, precise technical documentation. Use when writing or reviewing docs, PR descriptions, error messages, READMEs, API references, or any technical prose."
- ---
-
- # writing
-
- Use this skill when asked to write, edit, or review technical prose. This includes documentation, READMEs, PR descriptions, error messages, comments, API references, and commit messages.
-
- ## Contents
-
- - [Core contract](#core-contract)
- - [Writing workflows](#writing-workflows)
- - [Common patterns by document type](#common-patterns-by-document-type)
- - [Editing and review workflow](#editing-and-review-workflow)
- - [Evaluation scenarios](#evaluation-scenarios)
- - [Quality bar](#quality-bar)
-
- ## Core contract
-
- 1. **Audience first**
- - Identify the reader's baseline knowledge before writing.
- - Write for the busiest reader who needs this information.
- - Honor their time: front-load the essential information.
-
- 2. **Clarity over style**
- - One idea per sentence. Complex concepts deserve their own space.
- - Active voice: "The system returns an error" not "An error is returned by the system."
- - Precise terminology: use the same word for the same concept throughout.
- - Concrete over abstract: "200ms latency" beats "fast performance."
-
- 3. **Structure for scannability**
- - Headings should communicate the document structure without reading prose.
- - Lists for parallel items (bullets for unordered, numbers for sequences).
- - Code blocks and tables over prose descriptions.
- - Inverted pyramid: conclusion, supporting details, background.
-
- 4. **Actionability**
- - Imperative for procedures: "Run the command" not "The command should be run."
- - Explicit consequences: state what happens if the user does X.
- - Anticipate failure modes in troubleshooting sections.
-
- 5. **Accessibility**
- - Plain language: avoid Latin abbreviations, buzzwords, metaphor-heavy descriptions.
- - Sentence length: average 15-20 words. Vary rhythm but never confuse length with sophistication.
- - Context for jargon: define domain-specific terms on first use or link to definitions.
-
- 6. **Verify by reading aloud**
- - Awkward phrasing surfaces when spoken.
- - Test instructions by following them exactly as written.
- - Delete mercilessly: if a sentence doesn't inform or direct, cut it.
-
- ## Writing workflows
-
- ### A) Documentation from scratch
-
- 1. **Identify the audience and goal**
- - Who will read this? What do they know? What must they do after reading?
-
- 2. **Outline the structure**
- - Opening paragraph: what this document covers and why it matters.
- - Body: group related concepts, sequence procedures in order of execution.
- - Closing: next steps, related resources, or troubleshooting.
-
- 3. **Draft with constraints**
- - Maximum 25 words per sentence on average.
- - Active voice for all instructions.
- - Code examples for any behavior described.
-
- 4. **Review against the contract**
- - Scan test: can a reader grasp structure from headings alone?
- - Action test: can a reader execute procedures without asking questions?
- - Deletion pass: remove sentences that don't inform or direct.
-
- ### B) PR/commit description
-
- 1. **What changed** (imperative, present tense)
- 2. **Why it changed** (context, motivation)
- 3. **How to verify** (testing steps, expected outcomes)
- 4. **Breaking changes** (if any, with explicit adoption guidance)
-
- Keep the summary line under 80 characters. Body wraps at 72 characters.
-
- ### C) Error messages
-
- 1. **State what happened** (not what didn't)
- 2. **Explain why** (the root cause, if known)
- 3. **Provide the fix** (concrete next step, not generic advice)
- 4. **Include identifiers** (error codes, relevant IDs, log locations)
-
- Example:
- ```
- Error: Connection refused to database 'prod-db' on port 5432.
- Cause: The database service is not running or firewall blocks port 5432.
- Fix: Start the service with 'sudo systemctl start postgresql' or verify firewall rules.
- Log: See /var/log/postgresql/postgresql-14-main.log for details.
- ```
-
- ## Common patterns by document type
-
- ### README.md
-
- Structure:
- 1. One-line description of what this is
- 2. Installation/usage (minimal working example)
- 3. Key features (bullet list)
- 4. Configuration/options
- 5. Contributing/troubleshooting links
-
- ### API documentation
-
- Per endpoint:
- - Purpose (one sentence)
- - HTTP method and path
- - Parameters (name, type, required, description)
- - Request/response examples
- - Error codes and meanings
-
- ### Inline code comments
-
- - **Why**, not what: explain intent, not obvious behavior.
- - Non-obvious side effects or assumptions.
- - TODO/FIXME with issue references, not vague notes.
- - Public APIs: docstrings with parameters, returns, raises.
-
- ### Configuration docs
-
- - Default values explicitly stated.
- - Units for all numeric values (ms, bytes, percent).
- - Validation constraints (min/max, allowed values).
- - Impact of changing the value (what breaks, what improves).
-
- ## Editing and review workflow
-
- When reviewing existing prose:
-
- 1. **Structural audit**
- - Does the outline serve the reader's goal?
- - Are headings descriptive? Is sequencing logical?
-
- 2. **Sentence-level edits**
- - Convert passive to active voice.
- - Replace vague quantifiers ("some", "many") with specifics.
- - Break long sentences (> 25 words) into two.
-
- 3. **Accuracy check**
- - Verify all code examples execute as written.
- - Confirm version numbers, paths, and URLs are current.
- - Check that error messages match actual output.
-
- 4. **Final polish**
- - Read aloud for awkward rhythm.
- - Consistent formatting (punctuation in lists, code fences with languages).
- - Spelling and grammar (but prioritize clarity over grammatical perfection).
-
- ## Evaluation scenarios
-
- 1. **Drafting documentation for a new feature**
- - Prompt: user asks for docs for a feature they've implemented.
- - Expected: skill identifies audience, structures around user goals not implementation details, includes working examples, and ends with verification steps.
-
- 2. **Reviewing a PR description**
- - Prompt: user shares a draft PR description for feedback.
- - Expected: skill checks for imperative summary line, clear what/why/how structure, and explicit breaking change notice if applicable.
-
- 3. **Improving error messages**
- - Prompt: user shares error handling code or current error text.
- - Expected: skill transforms vague messages into specific what/why/fix format with actionable next steps and relevant identifiers.
-
- 4. **Editing README for clarity**
- - Prompt: user asks for help with a project's README.
- - Expected: skill restructures for inverted pyramid, adds minimal working example, replaces feature paragraphs with scannable lists, and ensures installation steps are complete and ordered.
-
- ## Quality bar
-
- - Every sentence earns its place: informs or directs.
- - No sentence requires a second reading to understand.
- - A reader can act on instructions without asking clarifying questions.
- - Code examples execute without modification.
- - A skim reader grasps the document's purpose and structure.