waypoint-codex 1.0.2 → 1.0.4

package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "waypoint-codex",
- "version": "1.0.2",
+ "version": "1.0.4",
  "description": "Make Codex better by default with stronger planning, code quality, reviews, tracking, and repo guidance.",
  "license": "MIT",
  "type": "module",
@@ -98,12 +98,47 @@ Do not optimize for brevity by dropping relevant material.
  If a file is relevant, include it in full.
  If multiple files are relevant, include all of them.
  If prior plans, failed attempts, docs, architecture notes, or state files materially change the answer, include them too.
+ Do not trim the bundle down to only the files that seem to directly answer the question. Include files that define constraints, history, surrounding system behavior, rejected approaches, current state, or any other context that materially changes how GPT-5.4-Pro should reason.

  The bottleneck here is not token thrift inside Codex. The bottleneck is giving GPT-5.4-Pro enough real context to reason well.

  Curate relevance aggressively. Compress relevance only by excluding things that truly do not matter.
  Do **not** compress relevant context just because it is long.

+ ## Core Rule: Make The Request Standalone
+
+ Write the handoff so GPT-5.4-Pro can produce a strong answer in a brand new session with no prior context and no follow-up.
+
+ Do not rely on:
+
+ - prior chat history
+ - "as discussed above"
+ - local repo knowledge
+ - implicit shared assumptions
+ - GPT-5.4-Pro asking clarifying questions before it can reason well
+
+ The request should contain the full problem, the full context, the real objective, and the expected output shape.
+
+ ## Core Rule: Ask Outcome-Level Questions
+
+ Frame the request around the real goal, source of truth, or decision to be made.
+
+ Prefer:
+
+ - "Given the attached context, propose the best architecture for this system"
+ - "Evaluate the current implementation against the attached north-star document and identify all important mismatches"
+ - "Given the current product, constraints, and code, recommend the best migration strategy"
+
+ Avoid narrow, history-led framing unless the task is intentionally narrow.
+
+ Avoid prompts like:
+
+ - "We recently changed X, can you check if this looks right?"
+ - "Did we fix issue Y?"
+ - "Can you look at this follow-up from the last pass?"
+
+ Prior findings, recent changes, and local debates can be included as background, but they should not become the main frame unless that is truly the task.
+
  ## Workflow

  ### 1. Justify AGI-Help
@@ -132,6 +167,7 @@ Collect the context it would need to reason well, such as:
  - what has already been tried
  - what is blocked, unclear, risky, or contentious
  - what a successful answer would help us decide or do next
+ - what source of truth, target state, desired behavior, or decision standard should govern the answer

  This is not a fixed checklist. Include whatever materially changes the quality of the answer.

@@ -139,9 +175,13 @@ This is not a fixed checklist. Include whatever materially changes the quality o

  Create `files/` and copy in the relevant source material.

+ The standard is completeness, not minimality.
+ If a file materially affects the question, the answer, the constraints, or the reasoning path, include it even when it is only indirectly relevant.
+
  Examples of relevant files:

  - core implementation files
+ - supporting files that materially change the reasoning, even if they do not directly answer the question
  - architecture docs
  - plans
  - active plan or workspace files
@@ -149,6 +189,8 @@ Examples of relevant files:
  - failing or partial implementations
  - screenshots or exported artifacts when available through the current tool surface
  - strategy docs, briefs, drafts, notes, or prior outputs that define the problem
+ - source-of-truth documents such as specs, north-star docs, acceptance criteria, or decision memos
+ - surrounding files that materially affect the verdict even if they were not changed recently

  Preserve relative structure inside `files/` when it helps orientation.

@@ -167,6 +209,8 @@ Keep this concise but useful.

  Write the prompt as if briefing a world-class expert who has zero implicit context.

+ The prompt should ask for a complete answer to the actual problem, not a reaction to the latest local narrative.
+
  The prompt should usually include:

  1. **Role / framing**
@@ -184,7 +228,7 @@ The prompt should usually include:
  7. **Attached materials**
  - tell it that files are attached and should be read before answering
  8. **Specific request**
- - the concrete question or task
+ - the concrete task framed at the outcome/spec/decision level
  9. **Desired output shape**
  - exactly how the answer should be structured

@@ -207,6 +251,14 @@ Ask for something concrete, such as:
  - a better strategy or positioning approach
  - a decision memo with tradeoffs and risks

+ When the task is evaluative, ask for a complete evaluation against the governing standard, not confirmation of recent fixes.
+
+ Examples:
+
+ - "Does the current system satisfy the attached target architecture? If not, identify all important mismatches with evidence."
+ - "Given the attached implementation and requirements, what is the best recommendation and why?"
+ - "Using the attached code, docs, and constraints, produce a complete architecture proposal for the new system."
+
  ### Specify The Output Format

  Tell GPT-5.4-Pro how to respond.
@@ -221,6 +273,17 @@ Good example shapes:

  Explicitly instruct it to review the attached files before answering.

+ ### Do Not Depend On Follow-Up
+
+ Assume Mark wants one strong answer, not a clarification loop.
+
+ So:
+
+ - front-load the necessary context
+ - state the objective precisely
+ - include the governing constraints and source of truth
+ - ask for the full answer in one pass
+
  ## Final Handoff To Mark

  When the bundle is ready, report: