aiwg 2026.1.3 → 2026.1.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CLAUDE.md CHANGED
@@ -157,3 +157,40 @@ npm exec markdownlint-cli2 "**/*.md"
 
  <!-- TEAM DIRECTIVES: Add project-specific guidance below this line -->
 
+ ## Release Documentation Requirements
+
+ **CRITICAL**: Every release MUST be documented in ALL of these locations:
+
+ | Location | Purpose | Format |
+ |----------|---------|--------|
+ | `CHANGELOG.md` | Technical changelog | Keep a Changelog format with highlights table |
+ | `docs/releases/vX.X.X-announcement.md` | Release announcement | Full feature documentation with examples |
+ | `package.json` | Version bump | CalVer: `YYYY.MM.PATCH` |
+ | GitHub Release | Public release notes | Condensed highlights + install instructions |
+ | Gitea Release | Internal release notes | Same as GitHub |
+
+ ### Release Checklist
+
+ Before pushing a version tag:
+
+ 1. **Update `package.json`** - Bump version following CalVer
+ 2. **Update `CHANGELOG.md`** - Add new version section with:
+    - Highlights table (What changed | Why you care)
+    - Detailed Added/Changed/Fixed sections
+    - Link to previous version
+ 3. **Create `docs/releases/vX.X.X-announcement.md`** - Full release document with:
+    - Feature highlights
+    - Code examples
+    - Migration notes (if applicable)
+    - Links to relevant documentation
+ 4. **Commit and tag** - `git tag -m "vX.X.X" vX.X.X`
+ 5. **Push to both remotes** - `git push origin main --tags && git push github main --tags`
+ 6. **Update GitHub Release** - Add proper release notes via `gh release edit`
+ 7. **Create Gitea Release** - Via MCP tool or web UI
+
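+ A minimal sketch of steps 4-6 for one release, using the current version as the example (pairing `gh release edit --notes-file` with the announcement doc is one convenient option, not a requirement of this checklist):
+
+ ```bash
+ # Tag the release, push to both remotes, then attach the release notes to the GitHub Release
+ git tag -m "v2026.1.5" v2026.1.5
+ git push origin main --tags && git push github main --tags
+ gh release edit v2026.1.5 --notes-file docs/releases/v2026.1.5-announcement.md
+ ```
+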
+ ### Version Format
+
+ - **CalVer**: `YYYY.MM.PATCH` (e.g., `2026.1.3`; no leading zero on the month, per npm semver)
+ - PATCH resets each month
+ - Tag format: `vYYYY.MM.PATCH` (e.g., `v2026.1.3`)
+
@@ -143,11 +143,28 @@ See `docs/examples/` for detailed walkthroughs:
 
  Ralph inverts traditional AI optimization from "unpredictable success" to "predictable failure with automatic recovery."
 
+ ## Important: When to Use Ralph
+
+ **Ralph is a power tool.** Used correctly, it delivers overnight. Used incorrectly, it burns tokens producing junk.
+
+ | Situation | Use Ralph? | Instead |
+ |-----------|------------|---------|
+ | Greenfield with no docs | **NO** | Use AIWG intake/flows first |
+ | Vague requirements | **NO** | Write use cases first |
+ | Clear spec, need implementation | **YES** | - |
+ | Tests failing, need fixes | **YES** | - |
+ | Migration with clear rules | **YES** | - |
+
+ **The key insight**: Ralph excels at HOW to build, but thrashes on WHAT to build. Define your requirements first, then let Ralph implement.
+
+ See [When to Use Ralph](docs/when-to-use-ralph.md) for detailed guidance on avoiding the token-burning trap.
+
  ## Related
 
- - [Quickstart Guide](docs/quickstart.md)
- - [Best Practices](docs/best-practices.md)
- - [Troubleshooting](docs/troubleshooting.md)
+ - [When to Use Ralph](docs/when-to-use-ralph.md) - **Start here** - Understanding Ralph's sweet spot
+ - [Quickstart Guide](docs/quickstart.md) - Getting started
+ - [Best Practices](docs/best-practices.md) - Writing effective tasks
+ - [Troubleshooting](docs/troubleshooting.md) - Common issues
 
  ## Credits
 
@@ -2,6 +2,23 @@
 
  Get started with iterative AI task execution in 5 minutes.
 
+ ## Before You Start: Is Ralph Right for This Task?
+
+ **Ralph is a power tool.** Before invoking it, ask yourself:
+
+ | Question | If NO |
+ |----------|-------|
+ | Is my task well-defined with clear requirements? | Document requirements first |
+ | Can I write a command that verifies success? | Ralph can't help with subjective goals |
+ | Do I have tests/linting to validate correctness? | Add verification first |
+ | Is this implementation work, not exploration? | Use Discovery Track for research |
+
+ **The token-burning trap**: Ralph excels at HOW to implement but thrashes on WHAT to build. If you don't have clear requirements, Ralph will hallucinate features, contradict itself, and burn tokens producing junk.
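+
+ "A command that verifies success" means something that passes or fails on its own exit code. A minimal sketch, reusing commands that appear elsewhere in these docs (your project's test and type-check commands may differ):
+
+ ```bash
+ # Verifiable completion check: exits 0 only when the auth tests and the type check both pass
+ npm test -- auth && npx tsc --noEmit
+ ```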
+
+ **Safe to proceed?** Read on. **Unsure?** See [When to Use Ralph](when-to-use-ralph.md) first.
+
+ ---
+
  ## What is Ralph?
 
  Ralph (from the "Ralph Wiggum methodology") executes AI tasks in a loop until completion criteria are met:
@@ -183,6 +200,7 @@ Ralph stores state and reports in `.aiwg/ralph/`:
 
  ## Next Steps
 
+ - Read [When to Use Ralph](when-to-use-ralph.md) to understand Ralph's sweet spot
  - Read [Best Practices](best-practices.md) for effective prompt engineering
  - See [Examples](examples/) for common patterns
  - Check [Troubleshooting](troubleshooting.md) if you get stuck
@@ -0,0 +1,348 @@
+ # When to Use Ralph (And When Not To)
+
+ Understanding Ralph's sweet spot and avoiding the token-burning trap.
+
+ ## The Controversy
+
+ Ralph divides people. Some swear by it. Others have war stories about it running all night, burning tokens, producing junk. Both are right - Ralph is a power tool, and like any power tool, it can build or destroy depending on how you use it.
+
+ **The truth**: Ralph's effectiveness is directly proportional to how well-defined your project is before you invoke it.
+
+ ## The Two Extremes
+
+ ### The Disaster Case: Greenfield + Vague Directive
+
+ ```bash
+ # DON'T DO THIS
+ /ralph "make me a baking app" --completion "app works"
+ ```
+
+ What happens:
+
+ 1. AI has no context about what "baking app" means
+ 2. No architecture decisions have been made
+ 3. No requirements exist
+ 4. AI hallucinates features, changes direction, contradicts itself
+ 5. Each iteration builds on shaky foundations
+ 6. Thrashing intensifies as hallucinated components conflict
+ 7. Token usage explodes
+ 8. Result: A mess that barely runs, if at all
+
+ **Why this fails**: The AI is trying to answer "WHAT to build" while simultaneously trying to figure out "HOW to build it." These are fundamentally different problems. Mixing them creates chaos.
+
+ ### The Success Case: Documented Project + Implementation Focus
+
+ ```bash
+ # DO THIS
+ /ralph "Implement UC-AUTH-001 user login per the architecture doc" \
+   --completion "npm test -- auth passes AND npx tsc --noEmit passes"
+ ```
+
+ What happens:
+
+ 1. UC-AUTH-001 defines exactly what login should do
+ 2. Architecture doc specifies technology choices
+ 3. AI knows the patterns, conventions, dependencies
+ 4. Each iteration focuses purely on implementation details
+ 5. Failures are specific: wrong import, missing mock, edge case
+ 6. AI learns from specific failures and fixes them
+ 7. Convergence to working code is predictable
+
+ **Why this works**: The "WHAT" is settled. Ralph focuses entirely on "HOW" - the implementation mechanics where iteration genuinely helps.
+
+ ## The AIWG + Ralph Synergy
+
+ AIWG was designed with Ralph in mind. The entire SDLC framework exists to create a corpus so complete that an AI can't thrash on what to build - it can only focus on how.
+
+ ### What AIWG Provides
+
+ | AIWG Artifact | Eliminates This Uncertainty |
+ |---------------|----------------------------|
+ | Project Intake | What problem are we solving? |
+ | Requirements (UC-*, US-*) | What features do we need? |
+ | Software Architecture Doc | What tech stack, patterns, structure? |
+ | ADRs | What decisions were made, and why? |
+ | NFR modules | What are the quality requirements? |
+ | Pseudo-code / interface specs | What's the API shape? |
+
+ ### The Transformation
+
+ ```
+ Without AIWG:
+ ┌─────────────────────────────────────────────────────────┐
+ │ Ralph → "What to build?" → Hallucinate → Thrash → $$$   │
+ └─────────────────────────────────────────────────────────┘
+
+ With AIWG:
+ ┌─────────────────────────────────────────────────────────┐
+ │ AIWG → Defines "What" → Ralph → "How to build" → Done   │
+ └─────────────────────────────────────────────────────────┘
+ ```
+
+ ### Documentation as Specification
+
+ By the time you've completed AIWG's Discovery Track:
+
+ - Every feature is defined in a use case
+ - Every decision is recorded in an ADR
+ - The architecture is documented with component diagrams
+ - Non-functional requirements are explicit
+ - Even pseudo-code or interface shapes may exist
+
+ **The docs are one step away from code.** Ralph's job becomes mechanical: translate this specification into working code, iterate on the implementation details until tests pass.
+
+ ## When Ralph Excels
+
+ ### Implementation of Well-Defined Features
+
+ ```bash
+ /ralph "Implement @.aiwg/requirements/UC-PAY-003.md" \
+   --completion "npm test -- payment passes"
+ ```
+
+ The use case document tells Ralph exactly what to build. Ralph figures out the implementation.
+
+ ### Mechanical Transformations
+
+ ```bash
+ /ralph "Convert src/utils/*.js to TypeScript per @.aiwg/architecture/adr-012-typescript.md" \
+   --completion "npx tsc --noEmit passes"
+ ```
+
+ The ADR defines the transformation rules. Ralph applies them iteratively.
+
+ ### Test-Driven Fixes
+
+ ```bash
+ /ralph "Fix all failing tests in src/auth/" \
+   --completion "npm test -- auth passes"
+ ```
+
+ Tests define expected behavior. Ralph makes code match expectations.
+
+ ### Dependency Resolution
+
+ ```bash
+ /ralph "Update to React 19 and fix all breaking changes" \
+   --completion "npm test passes AND npm run build succeeds"
+ ```
+
+ Ralph excels at the tedious iteration of finding compatible versions and fixing API changes.
+
+ ### Code Quality Gates
+
+ ```bash
+ /ralph "Achieve 80% test coverage in src/services/" \
+   --completion "coverage report shows src/services >80%"
+ ```
+
+ Clear metric, well-defined scope. Ralph adds tests until the threshold is met.
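+
+ One way to make that criterion machine-checkable, assuming a Jest-based suite (the `coverageThreshold` entry for `src/services` in `jest.config.js` is an assumption you set up yourself, not something Ralph creates):
+
+ ```bash
+ # With coverageThreshold configured at 80% lines for src/services in jest.config.js,
+ # this exits non-zero whenever coverage dips below the bar - exactly what --completion needs.
+ npx jest src/services --coverage
+ ```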
+
+ ## When NOT to Use Ralph
+
+ ### Greenfield Without Documentation
+
+ If you have no architecture doc, no requirements, no design - **stop**. Don't invoke Ralph. Use the AIWG intake process first.
+
+ ```bash
+ # First: Define what you're building
+ /intake-wizard
+ /flow-concept-to-inception
+ /flow-inception-to-elaboration
+
+ # Then: Build it
+ /ralph "Implement UC-001" --completion "tests pass"
+ ```
+
+ ### Vague or Subjective Goals
+
+ ```bash
+ # BAD - cannot verify, no clear target
+ /ralph "make the code better" --completion "code is good"
+ /ralph "improve UX" --completion "users are happy"
+ /ralph "optimize performance" --completion "app is fast"
+ ```
+
+ If you can't write a command that verifies success, Ralph can't iterate toward it.
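+
+ The usual fix is to restate the goal as something measurable. A hedged sketch (the benchmark script and the latency budget are hypothetical placeholders, not part of Ralph):
+
+ ```bash
+ # Instead of "optimize performance", pin down a number and a script that checks it
+ /ralph "Reduce p95 latency of /api/search below 200ms" \
+   --completion "node scripts/bench-search.js exits 0 when p95 < 200ms"
+ ```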
+
+ ### Research or Exploration
+
+ ```bash
+ # BAD - this isn't an implementation task
+ /ralph "figure out how authentication should work" --completion "auth design is done"
+ ```
+
+ Use `/flow-discovery-track` or manual exploration for research. Ralph is for implementation.
+
+ ### Undefined Scope
+
+ ```bash
+ # BAD - how many features is "complete"?
+ /ralph "finish the app" --completion "app is complete"
+ ```
+
+ Break this into specific, documented features first.
+
+ ## Ralph for Documentation (Carefully Scoped)
+
+ Ralph can help with documentation itself - but only with specific, verifiable scope:
+
+ ```bash
+ # GOOD - specific, verifiable
+ /ralph "Generate ADRs for all undocumented technical decisions in src/" \
+   --completion ".aiwg/architecture/adr-*.md exists for each major pattern"
+
+ # GOOD - specific output
+ /ralph "Create use cases from the feature list in product-brief.md" \
+   --completion "Each feature in product-brief.md has a corresponding .aiwg/requirements/UC-*.md"
+ ```
+
+ ```bash
+ # BAD - too vague
+ /ralph "document the project" --completion "docs are complete"
+ ```
+
+ ## Warning Signs: Is Ralph Thrashing?
+
+ Watch for these indicators:
+
+ | Sign | What It Means |
+ |------|---------------|
+ | Same error repeating | Structural problem, not implementation detail |
+ | Contradictory changes | No clear requirements to guide decisions |
+ | Growing file count | Hallucinating features not in scope |
+ | Unrelated files changing | Lost context, working on wrong problem |
+ | "Refactoring" without tests | No verification, just churning |
+
+ **If you see these**: Abort Ralph, create documentation, then resume.
+
+ ```bash
+ /ralph-abort
+ # Create/update requirements docs
+ # Define architecture decisions
+ /ralph "Implement [specific, documented feature]" --completion "tests pass"
+ ```
+
+ ## The Ralph Readiness Checklist
+
+ Before invoking Ralph, ask:
+
+ - [ ] Is the feature documented in a use case or user story?
+ - [ ] Is the architecture defined (or simple enough to be implicit)?
+ - [ ] Can I write a command that verifies success?
+ - [ ] Is the scope specific enough to complete in <20 iterations?
+ - [ ] Are tests available to validate correctness?
+
+ **If any answer is "no"**: Document first, Ralph second.
+
+ ## Summary
+
+ | Situation | Action |
+ |-----------|--------|
+ | Greenfield, no docs | Use AIWG intake/flows first |
+ | Vague requirements | Write use cases first |
+ | No architecture | Create SAD/ADRs first |
+ | Clear spec, need implementation | **Use Ralph** |
+ | Tests failing, need fixes | **Use Ralph** |
+ | Migration with clear rules | **Use Ralph** |
+ | Coverage gap with clear target | **Use Ralph** |
+
+ **The formula**: AIWG defines WHAT. Ralph implements HOW. Together they work. Apart, Ralph thrashes.
+
+ ## Industry Perspectives and Research
+
+ The debate around iterative AI execution isn't unique to Ralph. Here's what the broader industry has learned.
+
+ ### The Context Problem
+
+ [Augment Code's research](https://www.augmentcode.com/learn/agentic-swarm-vs-spec-driven-coding) found that both agentic swarms and specification-driven development fail for the same reason: they assume the hard problem is coordination or planning, not context understanding.
+
+ > "Context understanding trumps coordination strategy... Perfect coordination doesn't help when agents are coordinating around incomplete information. Comprehensive specifications don't help when you can't specify what you don't fully understand."
+
+ **AIWG's answer**: Create comprehensive context *first* through documentation. Ralph then operates in a rich-context environment where iteration actually helps.
+
+ ### Loop Drift and Thrashing
+
+ [Research into agent loops](https://www.fixbrokenaiapps.com/blog/ai-agents-infinite-loops) identified "Loop Drift" as a core failure mode - agents misinterpreting termination signals, generating repetitive actions, or suffering from inconsistent internal state.
+
+ **Why this matters for Ralph**: Clear completion criteria with objective verification commands (exit codes, test results) prevent drift. Subjective criteria like "code is good" invite drift.
+
+ ### Context Window Degradation
+
+ [Token cost research](https://agentsarcade.com/blog/reducing-token-costs-long-running-agent-workflows) confirms that context windows have a quality curve:
+
+ > "Early in the window, Claude is sharp. As tokens accumulate, quality degrades. If you try to cram multiple features into one iteration, you're working in the degraded part of the curve."
+
+ **Best practice**: Keep iterations focused on single changes. Ralph's git-based state persistence lets each iteration start with fresh context while inheriting the work from prior iterations.
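+
+ Because each iteration is committed, that trail is inspectable with plain git before you accept the result. A small sketch (the commit counts are arbitrary examples):
+
+ ```bash
+ # Review what the last few iterations actually changed
+ git log --oneline -n 10
+ git diff HEAD~3 -- src/
+ ```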
+
+ ### The Double-Loop Alternative
+
+ [Test Double's "double-loop model"](https://testdouble.com/insights/youre-holding-it-wrong-the-double-loop-model-for-agentic-coding) argues against prescriptive prompts entirely:
+
+ > "If you have to be super prescriptive with the AI agent, I might as well write the damn code."
+
+ Their approach: First loop for exploration (treat implementation as disposable), second loop for polish (traditional code review).
+
+ **AIWG's response**: Both models can work. Double-loop suits exploratory greenfield work where you're discovering requirements. Ralph + AIWG suits implementation of known requirements. The key is recognizing which phase you're in.
+
+ ### Security Concerns
+
+ [NVIDIA's security research](https://developer.nvidia.com/blog/how-code-execution-drives-key-risks-in-agentic-ai-systems/) warns:
+
+ > "AI-generated code is inherently untrusted. Systems that execute LLM-generated code must treat that code with the same caution as user-supplied inputs."
+
+ **Ralph's safeguards**: Auto-commit creates rollback points. Tests verify correctness. Iteration limits prevent runaway execution. But the warning is real - always review final output before production.
+
+ ### Success Stories
+
+ The Ralph methodology has proven effective for:
+
+ - **React v16 to v19 migration**: 14 hours autonomous, no human intervention ([source](https://sidbharath.com/blog/ralph-wiggum-claude-code/))
+ - **Overnight multi-repo delivery**: "Ship 6 repos overnight. $50k contract for $297 in API costs" ([source](https://venturebeat.com/technology/how-ralph-wiggum-went-from-the-simpsons-to-the-biggest-name-in-ai-right-now))
+ - **Test coverage improvement**: Iterative test addition until threshold met
+
+ The common thread: objectively verifiable goals with clear completion criteria.
+
+ ### Expert Consensus
+
+ Industry practitioners have converged on these principles:
+
+ | Principle | Source |
+ |-----------|--------|
+ | Verification is mandatory | Anthropic research: "models tend to declare victory without proper verification" |
+ | Context beats coordination | Augment Code: "context understanding as the prerequisite for everything else" |
+ | Small iterations work better | Oreate AI: "context windows have a quality curve" |
+ | Safety limits are non-negotiable | Multiple sources: cap iterations, monitor costs, use sandboxes |
+ | Boring technologies work better | Oreate AI: stable APIs and mature toolchains outperform trendy alternatives |
+
+ ### Contrary Views
+
+ Not everyone agrees Ralph is the answer:
+
+ **The "double-loop" camp** argues iteration should be exploratory first, not implementation-focused. They embrace disposable code during discovery.
+
+ **The "context-first" camp** argues that understanding existing systems matters more than any coordination strategy. They focus on codebase comprehension tools.
+
+ **The "human-in-the-loop" camp** argues autonomous execution is inherently risky. They prefer checkpoints and approval gates.
+
+ **AIWG's synthesis**: All three camps make valid points. AIWG addresses them by:
+ 1. Supporting exploration during Discovery Track (not Ralph)
+ 2. Building rich context through documentation before implementation
+ 3. Providing iteration limits, auto-commits, and clear abort paths
+
+ Ralph isn't for every phase of development - it's for the implementation phase after discovery is complete.
+
+ ## Related
+
+ - [Quickstart Guide](quickstart.md) - Getting started with Ralph
+ - [Best Practices](best-practices.md) - Writing effective tasks and criteria
+ - [AIWG SDLC Framework](../../frameworks/sdlc-complete/README.md) - Documentation-first development
+ - [Discovery Track](../../frameworks/sdlc-complete/docs/phases/discovery-track.md) - How to document before you build
+
+ ## External Resources
+
+ - [The Ralph Wiggum Breakdown](https://dev.to/ibrahimpima/the-ralf-wiggum-breakdown-3mko) - Original methodology explanation
+ - [VentureBeat: Ralph Wiggum in AI](https://venturebeat.com/technology/how-ralph-wiggum-went-from-the-simpsons-to-the-biggest-name-in-ai-right-now) - Industry adoption
+ - [Test Double: Double Loop Model](https://testdouble.com/insights/youre-holding-it-wrong-the-double-loop-model-for-agentic-coding) - Alternative approach
+ - [Augment Code: Spec-Driven vs Agentic](https://www.augmentcode.com/learn/agentic-swarm-vs-spec-driven-coding) - Context-first perspective
+ - [Reducing Token Costs](https://agentsarcade.com/blog/reducing-token-costs-long-running-agent-workflows) - Cost management strategies
@@ -0,0 +1,188 @@
+ # CI/CD Secrets Configuration
+
+ **Version:** 1.0
+ **Last Updated:** 2026-01-14
+ **Target Audience:** Repository maintainers and administrators
+
+ ## Overview
+
+ This document describes the secrets required for CI/CD workflows in the AIWG repository. Secrets are used for authentication with package registries and external services.
+
+ ## Required Secrets
+
+ ### NPM_TOKEN
+
+ **Purpose:** Authenticate with Gitea's npm package registry for publishing.
+
+ **Required Scopes:**
+ - `write:package` - Required to publish packages
+ - `read:package` - Required to verify published packages
+
+ **Used In:**
+ - `.gitea/workflows/npm-publish.yml` - Publishing to Gitea npm registry
+ - Creating Gitea releases via API
+
+ ### Setting Up NPM_TOKEN
+
+ #### Step 1: Create a Gitea Access Token
+
+ 1. Log in to [git.integrolabs.net](https://git.integrolabs.net)
+ 2. Navigate to **Settings** → **Applications** → **Access Tokens**
+    - Direct URL: https://git.integrolabs.net/user/settings/applications
+ 3. Create a new token with:
+    - **Token Name:** `ci-npm-publish` (or descriptive name)
+    - **Select Scopes:**
+      - ✅ `write:package` (includes read:package)
+      - ✅ `read:repository` (for checkout operations)
+    - **Expiration:** Set according to your security policy (recommend 1 year max)
+ 4. Click **Generate Token**
+ 5. **IMPORTANT:** Copy the token immediately - it won't be shown again
+
+ #### Step 2: Add Secret to Gitea Repository
+
+ 1. Navigate to the repository: https://git.integrolabs.net/roctinam/ai-writing-guide
+ 2. Go to **Settings** → **Actions** → **Secrets**
+ 3. Click **Add Secret**
+ 4. Configure:
+    - **Name:** `NPM_TOKEN`
+    - **Value:** Paste the token from Step 1
+ 5. Click **Add Secret**
+
+ #### Step 3: Verify Configuration
+
+ Trigger a manual workflow run to verify:
+
+ ```bash
+ # Push a test tag (can be deleted after)
+ git tag v9999.99.99-test
+ git push origin v9999.99.99-test
+
+ # Watch the workflow at:
+ # https://git.integrolabs.net/roctinam/ai-writing-guide/actions
+
+ # Clean up test tag
+ git tag -d v9999.99.99-test
+ git push origin :refs/tags/v9999.99.99-test
+ ```
+
+ Or use the workflow dispatch with dry_run enabled.
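+
+ If the run succeeds, a quick way to confirm the publish from your own machine (the registry URL is the one the workflow targets; a private registry may also require the token in your local `.npmrc`):
+
+ ```bash
+ # Should print the version the workflow just published
+ npm view aiwg version --registry=https://git.integrolabs.net/api/packages/roctinam/npm/
+ ```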
+
+ ## Troubleshooting
+
+ ### Error: 401 Unauthorized
+
+ ```
+ npm error code E401
+ npm error 401 Unauthorized - PUT https://git.integrolabs.net/api/packages/roctinam/npm/aiwg
+ ```
+
+ **Causes:**
+ 1. **Token expired** - Create a new token and update the secret
+ 2. **Token missing** - Verify NPM_TOKEN secret exists in repository settings
+ 3. **Wrong scopes** - Token must have `write:package` scope
+ 4. **Token revoked** - Check if token still exists in user settings
+
+ **Resolution:**
+ 1. Go to https://git.integrolabs.net/user/settings/applications
+ 2. Check if the token exists and hasn't expired
+ 3. If expired/missing, create a new token with `write:package` scope
+ 4. Update the repository secret with the new token
+
+ ### Error: 403 Forbidden
+
+ **Causes:**
+ 1. Token belongs to user without package write permissions
+ 2. Repository doesn't allow package publishing
+
+ **Resolution:**
+ 1. Ensure token owner has write access to the repository
+ 2. Check organization/repository package settings
+
+ ### Token Not Being Used
+
+ If the workflow isn't picking up the secret:
+
+ 1. Verify secret name is exactly `NPM_TOKEN` (case-sensitive)
+ 2. Check workflow file references `${{ secrets.NPM_TOKEN }}`
+ 3. Ensure workflow has appropriate permissions in `permissions:` block
+
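+ A couple of quick local checks for points 2 and 3 (the workflow path is the one referenced throughout this document):
+
+ ```bash
+ # Confirm the workflow actually references the secret and declares permissions
+ grep -n "secrets.NPM_TOKEN" .gitea/workflows/npm-publish.yml
+ grep -n "permissions:" .gitea/workflows/npm-publish.yml
+ ```
+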
+ ## Security Best Practices
+
+ ### Token Management
+
+ - **Rotation:** Rotate tokens annually or when team members leave
+ - **Scope:** Use minimum required scopes (write:package, read:repository)
+ - **Naming:** Use descriptive names like `ci-npm-publish-2026`
+ - **Audit:** Periodically review active tokens
+
+ ### Secret Storage
+
+ - Never commit tokens to the repository
+ - Use repository/organization secrets, not environment variables in code
+ - Don't echo or log token values in workflows
+
+ ### Workflow Security
+
+ ```yaml
+ # Good: Token passed via secrets
+ env:
+   NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
+
+ # Bad: Token hardcoded or echoed
+ run: echo ${{ secrets.NPM_TOKEN }} # NEVER do this
+ ```
+
+ ## Workflow Architecture
+
+ ### npm-publish.yml Flow
+
+ ```
+ [Tag Push v*] → [Checkout] → [Configure npm] → [Build] → [Publish to Gitea] → [Verify]
+
+ Uses NPM_TOKEN for:
+ - .npmrc authentication
+ - npm publish command
+ - Gitea release API
+ ```
+
+ ### Secret Usage in Workflow
+
+ ```yaml
+ # .npmrc configuration (line 55-56)
+ //git.integrolabs.net/api/packages/roctinam/npm/:_authToken=${{ secrets.NPM_TOKEN }}
+
+ # Publish command (line 107-109)
+ npm publish --registry=${{ env.GITEA_NPM_REGISTRY }}
+ env:
+   NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
+
+ # Release creation (line 137)
+ -H "Authorization: token ${{ secrets.NPM_TOKEN }}"
+ ```
+
+ ## Additional Secrets (Optional)
+
+ ### NPMJS_TOKEN (for public npm)
+
+ If publishing to public npmjs.org:
+
+ 1. Create token at https://www.npmjs.com/settings/tokens
+ 2. Select "Automation" token type
+ 3. Add as secret named `NPMJS_TOKEN`
+ 4. Update workflow to use separate token for public registry
+
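+ A sketch of what that separate publish step could look like, assuming the workflow exposes `NPMJS_TOKEN` as `NODE_AUTH_TOKEN` (mirrors the `.npmrc` auth pattern used for Gitea above):
+
+ ```bash
+ # Append an auth line for the public registry, then publish there explicitly
+ echo "//registry.npmjs.org/:_authToken=\${NODE_AUTH_TOKEN}" >> .npmrc
+ npm publish --registry=https://registry.npmjs.org/
+ ```
+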
+ ### GITHUB_TOKEN (for GitHub mirror)
+
+ For GitHub Actions (`.github/workflows/`):
+
+ - Automatically provided by GitHub Actions
+ - No manual configuration needed
+ - Used for GitHub Releases and npm publish to GitHub Packages
+
+ ## References
+
+ - [Gitea Package Registry Documentation](https://docs.gitea.com/usage/packages/npm)
+ - [Gitea Actions Secrets](https://docs.gitea.com/usage/actions/secrets)
+ - [npm Authentication](https://docs.npmjs.com/using-private-packages-in-a-ci-cd-workflow)
+ - @.gitea/workflows/npm-publish.yml - Main publish workflow
+ - @.claude/rules/token-security.md - Token security rules