waypoint-codex 0.17.0 → 0.18.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -269,6 +269,7 @@ Waypoint ships a strong default skill pack for real coding work:
  - `workspace-compress`
  - `pre-pr-hygiene`
  - `pr-review`
+ - `agi-help`
 
  These are repo-local, so the workflow travels with the project.
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "waypoint-codex",
- "version": "0.17.0",
+ "version": "0.18.0",
  "description": "Make Codex better by default with stronger planning, code quality, reviews, tracking, and repo guidance.",
  "license": "MIT",
  "type": "module",
@@ -0,0 +1,259 @@
+ ---
+ name: agi-help
+ description: Prepare a complete external handoff package for GPT-5.4-Pro in ChatGPT when a task is unusually high-stakes, ambiguous, leverage-heavy, or quality-sensitive and one excellent answer is worth a slower manual loop. Use for greenfield project starts, major refactors, architecture rethinks, migration strategy, big-feature planning, hard product or strategy decisions, and other work where the external model needs full relevant context because it has no access to the repo, files, history, or local tools.
+ ---
+
+ # AGI-Help
+
+ Use this skill to prepare a high-quality manual handoff for GPT-5.4-Pro.
+
+ GPT-5.4-Pro is an external thinking partner, not a connected coding agent. It cannot see the repo, local files, prior discussion, current state, or failed attempts unless you package that context for Mark to send manually in ChatGPT.
+
+ The job of this skill is to create a complete handoff bundle that gives GPT-5.4-Pro the best possible chance of producing one exceptional answer.
+
+ ## What This Skill Owns
+
+ This skill owns the preparation step:
+
+ - decide whether GPT-5.4-Pro is justified for this task
+ - collect all relevant context in full
+ - copy the relevant files into a temporary handoff folder
+ - write a strong prompt for an external model with zero local context
+ - tell Mark exactly what to send
+ - stop and wait for the external response
+
+ This skill does **not** send anything itself.
+
+ ## When To Use This Skill
+
+ Use AGI-Help when:
+
+ - the task is high-stakes and a weak answer would be costly
+ - one strong answer is more valuable than a fast back-and-forth loop
+ - deep synthesis, architecture judgment, strategy, or reframing quality matters more than local tool execution speed
+ - the task is large enough or important enough that a manual GPT-5.4-Pro pass is worth 20-50 minutes
+
+ Typical examples:
+
+ - starting a project from scratch
+ - major refactors or system redesigns
+ - architecture or migration strategy
+ - planning a large feature or multi-phase initiative
+ - resolving hard tradeoffs across product, UX, engineering, and operations
+ - reshaping positioning, messaging, or strategy where synthesis quality matters a lot
+ - any other difficult task where Mark explicitly wants the strongest available single response
+
+ ## When Not To Use This Skill
+
+ Do not use it for:
+
+ - small or routine edits
+ - local debugging where filesystem access matters more than abstract reasoning
+ - simple implementation tasks that are already clear
+ - requests where a normal answer or normal planning pass is sufficient
+
+ ## Output
+
+ Create a handoff bundle at one of these locations:
+
+ - prefer `tmp/agi-help/<timestamp>/` inside the current workspace when that is practical
+ - otherwise use `~/.codex/tmp/agi-help/<timestamp>/`
+
+ The bundle should contain:
+
+ ```text
+ tmp/agi-help/<timestamp>/
+ ├── prompt.md
+ ├── manifest.md
+ ├── request-summary.md
+ └── files/
+     └── ...copied source files...
+ ```
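A minimal sketch of scaffolding this layout, assuming a POSIX shell and a workspace-local path; the timestamp format is an assumption, not something this skill prescribes:

```shell
#!/bin/sh
# Scaffold a timestamped handoff bundle skeleton (paths and timestamp
# format are illustrative, not mandated by the skill).
ts=$(date +%Y%m%d-%H%M%S)
bundle="tmp/agi-help/$ts"

mkdir -p "$bundle/files"
touch "$bundle/prompt.md" "$bundle/manifest.md" "$bundle/request-summary.md"

echo "bundle ready at $bundle"
```

The fallback location works the same way with `~/.codex/tmp/agi-help` substituted for the workspace path.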
+
+ ### prompt.md
+
+ The exact prompt Mark should paste into GPT-5.4-Pro.
+
+ ### manifest.md
+
+ A file-by-file list of what is included and why each file matters.
+
+ ### request-summary.md
+
+ A short operator note for Mark that explains:
+
+ - why AGI-Help was used here
+ - what to paste
+ - which files to attach
+ - what kind of answer to ask for
+
+ ### files/
+
+ Copies of every relevant file that should be attached.
+
+ ## Core Rule: Include All Relevant Context
+
+ Do not optimize for brevity by dropping relevant material.
+
+ If a file is relevant, include it in full.
+ If multiple files are relevant, include all of them.
+ If prior plans, failed attempts, docs, architecture notes, or state files materially change the answer, include them too.
+
+ The bottleneck here is not token thrift inside Codex. The bottleneck is giving GPT-5.4-Pro enough real context to reason well.
+
+ Curate aggressively for relevance: shrink the bundle only by excluding material that truly does not matter.
+ Do **not** compress relevant context just because it is long.
+
+ ## Workflow
+
+ ### 1. Justify AGI-Help
+
+ Before building the bundle, write 3-6 bullets in `request-summary.md` explaining why GPT-5.4-Pro is warranted here.
+
+ Focus on why:
+
+ - the task is unusually important, difficult, or leverage-heavy
+ - the answer needs deep synthesis, design judgment, or strategy
+ - normal local iteration is likely to be weaker than one high-quality external pass
+
+ ### 2. Reconstruct The Full Situation
+
+ Assume GPT-5.4-Pro knows nothing.
+
+ Collect the context it would need to reason well, such as:
+
+ - what the project, company, system, or situation is
+ - who the users are
+ - what the current state is
+ - what we want to achieve
+ - why this matters now
+ - what constraints exist
+ - what tradeoffs matter
+ - what has already been tried
+ - what is blocked, unclear, risky, or contentious
+ - what a successful answer would help us decide or do next
+
+ This is not a fixed checklist. Include whatever materially changes the quality of the answer.
+
+ ### 3. Copy The Relevant Files
+
+ Create `files/` and copy in the relevant source material.
+
+ Examples of relevant files:
+
+ - core implementation files
+ - architecture docs
+ - plans
+ - tracker files
+ - config files
+ - failing or partial implementations
+ - screenshots or exported artifacts when available through the current tool surface
+ - strategy docs, briefs, drafts, notes, or prior outputs that define the problem
+
+ Preserve relative structure inside `files/` when it helps orientation.
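One way to preserve relative structure is GNU `cp --parents` (an assumption about the available toolchain; `rsync -R` is a common alternative). The file names below are hypothetical and exist only to make the sketch self-contained:

```shell
#!/bin/sh
# Copy selected files into the bundle, preserving their repo-relative paths.
# The sample sources are created here only so the sketch runs standalone.
dest="tmp/agi-help/example/files"
mkdir -p "$dest" src/core docs
echo "core" > src/core/engine.js
echo "arch" > docs/architecture.md

for f in src/core/engine.js docs/architecture.md; do
  # GNU cp recreates the source path under $dest; use `rsync -R` where
  # --parents is unavailable (e.g. BSD/macOS cp).
  cp --parents "$f" "$dest/"
done
```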
+
+ ### 4. Write manifest.md
+
+ For each included file, list:
+
+ - copied path inside the bundle
+ - original path
+ - why the file matters
+ - any brief note about how GPT-5.4-Pro should interpret it
+
+ Keep this concise but useful.
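A sketch of one manifest entry in that shape, written via a heredoc; the entry's paths and notes are hypothetical examples, not required wording:

```shell
#!/bin/sh
# Write a manifest.md skeleton with one illustrative entry.
mkdir -p tmp/agi-help/example
cat > tmp/agi-help/example/manifest.md <<'EOF'
# Manifest

## files/src/core/engine.js
- Original path: src/core/engine.js
- Why it matters: core implementation the question is about
- Note: read alongside docs/architecture.md before judging the design
EOF
```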
+
+ ### 5. Write prompt.md
+
+ Write the prompt as if briefing a world-class expert who has zero implicit context.
+
+ The prompt should usually include:
+
+ 1. **Role / framing**
+    - who GPT-5.4-Pro should act as for this task
+ 2. **Project or situation context**
+    - what this is, who it serves, and how to think about it
+ 3. **Current state**
+    - what exists today and what is happening now
+ 4. **Objective**
+    - what we need help with
+ 5. **Constraints and tradeoffs**
+    - technical, product, operational, organizational, or personal constraints
+ 6. **What has already been tried or considered**
+    - prior attempts, rejected options, partial work, or known problems
+ 7. **Attached materials**
+    - tell it that files are attached and should be read before answering
+ 8. **Specific request**
+    - the concrete question or task
+ 9. **Desired output shape**
+    - exactly how the answer should be structured
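The nine sections above can be scaffolded into a starting skeleton; the headings below are one assumed rendering of those sections, not a fixed format:

```shell
#!/bin/sh
# Write a prompt.md skeleton covering the nine usual sections
# (heading text is illustrative).
mkdir -p tmp/agi-help/example
cat > tmp/agi-help/example/prompt.md <<'EOF'
# Role / Framing
# Project or Situation Context
# Current State
# Objective
# Constraints and Tradeoffs
# What Has Already Been Tried or Considered
# Attached Materials
Read every attached file in full before answering.
# Specific Request
# Desired Output Shape
EOF
```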
+
+ ## Prompt Writing Rules
+
+ ### Be Exhaustive About Relevant Context
+
+ Write enough that GPT-5.4-Pro can reason without guessing the basics.
+
+ ### Ask For A Concrete Deliverable
+
+ Do not ask vague questions like "thoughts?"
+
+ Ask for something concrete, such as:
+
+ - a recommendation with reasoning
+ - a detailed architecture proposal
+ - a refactor or migration plan
+ - a critique of the current direction
+ - a better strategy or positioning approach
+ - a decision memo with tradeoffs and risks
+
+ ### Specify The Output Format
+
+ Tell GPT-5.4-Pro how to respond.
+
+ Good example shapes:
+
+ - recommendation first, then reasoning, then alternatives, then risks, then implementation plan
+ - diagnosis, root causes, proposed direction, concrete changes, failure modes, validation plan
+ - executive summary, strategic recommendation, tradeoffs, suggested next steps, open questions
+
+ ### Tell It To Read The Attachments First
+
+ Explicitly instruct it to review the attached files before answering.
223
+
224
+ ## Final Handoff To Mark
225
+
226
+ When the bundle is ready, report:
227
+
228
+ - the bundle path
229
+ - why AGI-Help was used here
230
+ - the exact file to paste: `prompt.md`
231
+ - which files to attach from `files/`
232
+ - any note about what kind of response will be most useful when Mark pastes it back
233
+
234
+ Do not continue into implementation as if GPT-5.4-Pro already answered.
235
+ Stop and wait for Mark.
236
+
237
+ ## After Mark Returns With The Response
238
+
239
+ Once Mark pastes the GPT-5.4-Pro response back into the conversation:
240
+
241
+ - treat it as a strong external input, not automatic truth
242
+ - compare it against the actual repo and current state
243
+ - identify where it fits reality, where it conflicts, and what needs adaptation
244
+ - turn the useful parts into a concrete plan, decision, or implementation path
245
+
246
+ ## Gotchas
247
+
248
+ - Do not use this skill just because a task is non-trivial. Use it when answer quality is worth the slower manual loop.
249
+ - Do not assume GPT-5.4-Pro knows the repo, current state, history, or constraints.
250
+ - Do not omit relevant files just because they are large.
251
+ - Do not give GPT-5.4-Pro a vague prompt when a concrete deliverable is needed.
252
+ - Do not bury the actual question under context; the prompt needs both deep context and a crisp ask.
253
+ - Do not continue as though the external answer has already arrived.
254
+
255
+ ## Keep This Skill Sharp
256
+
257
+ - Tighten the trigger description if it fires on normal planning or routine coding tasks.
258
+ - Add new gotchas when a GPT-5.4-Pro handoff fails because context, constraints, or the requested output shape were incomplete.
259
+ - If the same bundle structure or prompt sections keep recurring, strengthen this skill around those patterns instead of rediscovering them each time.
@@ -0,0 +1,4 @@
+ interface:
+   display_name: "AGI-Help"
+   short_description: "Prepare a full GPT-5.4-Pro handoff package for high-stakes work"
+   default_prompt: "Use $agi-help when this task is unusually high-stakes, ambiguous, or leverage-heavy and the best next move is to prepare a complete GPT-5.4-Pro handoff package with full relevant context, copied source files, and a strong external prompt for Mark to send manually in ChatGPT."