odd-studio 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/plugin.json +19 -0
- package/README.md +229 -0
- package/bin/odd-studio.js +212 -0
- package/hooks/odd-destructive-guard.sh +98 -0
- package/hooks/odd-git-safety.sh +138 -0
- package/hooks/odd-outcome-quality.sh +84 -0
- package/hooks/odd-pre-build.sh +57 -0
- package/hooks/odd-session-save.sh +38 -0
- package/hooks/odd-ui-check.sh +69 -0
- package/package.json +43 -0
- package/scripts/install-skill.js +30 -0
- package/scripts/postinstall.js +28 -0
- package/scripts/scaffold-project.js +61 -0
- package/scripts/setup-hooks.js +105 -0
- package/skill/SKILL.md +464 -0
- package/skill/docs/build/build-protocol.md +532 -0
- package/skill/docs/kb/odd-kb.md +462 -0
- package/skill/docs/planning/build-planner.md +315 -0
- package/skill/docs/planning/outcome-writer.md +328 -0
- package/skill/docs/planning/persona-architect.md +258 -0
- package/skill/docs/planning/systems-mapper.md +270 -0
- package/skill/docs/ui/accessibility.md +415 -0
- package/skill/docs/ui/component-guide.md +356 -0
- package/skill/docs/ui/design-system.md +403 -0
- package/templates/.odd/state.json +16 -0
- package/templates/CLAUDE.md +93 -0
- package/templates/docs/contract-map.md +60 -0
- package/templates/docs/outcomes/.gitkeep +0 -0
- package/templates/docs/outcomes/example-outcome.md +104 -0
- package/templates/docs/personas/.gitkeep +0 -0
- package/templates/docs/personas/example-persona.md +108 -0
- package/templates/docs/plan.md +73 -0
- package/templates/docs/ui/.gitkeep +0 -0
@@ -0,0 +1,462 @@

# ODD Studio — Knowledge Base

This is the complete ODD methodology reference. Read it when you need to understand why the method works, not just how to use it. The practices in ODD Studio are not arbitrary — each one exists because a specific kind of failure was common and costly.

---

## 1. Why the 60-Year Assumption Collapsed

For sixty years, software development operated on a single assumption: to build software, you needed to know how to code.

This was not a choice. It was a constraint. The only people who could translate a requirement into working software were people who had spent years learning the languages, frameworks, and patterns of software engineering. Domain experts — teachers who understood pedagogy, clinicians who understood patient care, lawyers who understood contract law — could describe what they needed, but they could not build it. They needed an intermediary: a developer who would translate their domain knowledge into code.

This translation process was the source of most software project failures. The developer did not understand the domain deeply enough to ask the right questions. The domain expert did not understand software deeply enough to describe their needs precisely. The resulting system was always a partial translation — correct in structure, wrong in substance.

AI-assisted development has collapsed this assumption. A domain expert who can describe precisely what they need — using structured, unambiguous specifications — can now direct an AI coding assistant to build it. The translation step still exists, but the domain expert is doing the translating, not delegating it.

This is not the same as saying anyone can build anything. It means that domain expertise, combined with a structured methodology for describing systems, is now sufficient to build serious software. The methodology is ODD.

---

## 2. Writing Code vs Building Systems

There is a distinction that most guides to AI-assisted development miss. They teach you how to write code with AI. ODD teaches you how to build systems.

Writing code means producing correct syntax. It means a function that takes an input and returns an output. It means a component that renders correctly. Writing code is a craft skill. AI is extraordinarily good at it.

Building systems means producing software where the parts connect correctly, where the data flows as intended, where each capability depends on the ones before it, where the whole is coherent. Building systems is a design skill. AI is only as good at it as the design you give it.

The failure mode of AI-assisted development without a methodology is this: you describe what you want, the AI writes it, and you get something that looks like software. It has screens and buttons and a database. But it does not hold together. The data produced in one part does not reach the part that needs it. The rules you care about are not enforced. The people who should be able to do things cannot. The system works, technically. But it does not work as a system.

ODD's job is to give you the design skill before you approach the AI.

---

## 3. Why Agile Fails Non-Technical Builders

Agile methodology was designed for professional software teams. It assumes:

- Multiple full-time developers with deep technical knowledge
- A product owner who can make technical trade-off decisions rapidly
- Sprint ceremonies that presuppose software engineering vocabulary
- Continuous integration and deployment infrastructure
- A shared codebase that a team maintains and evolves together

Non-technical domain experts building with AI have none of these. They work alone or in very small groups. They are building something that does not yet exist. They cannot make technical trade-off decisions because they do not know what the trade-offs are. And the "team" is an AI that has no memory between sessions and no stake in the coherence of the whole.

When non-technical builders try to apply agile to AI-assisted development, they discover that the ceremonies produce no shared understanding (because there is no shared technical context), the sprints produce disconnected pieces (because there is no architectural continuity), and the retrospectives identify the same problems every cycle (because the root cause — an absence of explicit design — is never addressed).

ODD is not anti-agile. It is a pre-agile methodology. It produces the specification that agile teams would normally grow incrementally. But it produces it upfront, in a form that an AI can build from without ambiguity.

---

## 4. Why User Stories Leave the Hard Part Unspecified

User stories have the form: "As a [role], I want to [action] so that [goal]."

This is useful for describing intent. It is useless for describing a system.

Consider: "As a teacher, I want to approve student applications so that I can manage my cohort."

This says nothing about:
- What information the teacher needs to see to make an approval decision
- What happens to the application when it is approved (which other parts of the system are affected?)
- What the student sees or receives as a result
- Whether approval is reversible
- What happens if the teacher approves when the cohort is already full
- What data is produced by the approval that other outcomes depend on

These omissions are not oversights. They are structural. The user story format is designed to express desire, not to specify behaviour. Developers fill in the gaps. Domain experts never see the decisions being made. The resulting software reflects the developer's assumptions, not the domain expert's knowledge.

ODD replaces user stories with outcomes. Outcomes specify every detail that a user story leaves out.

---

## 5. What ODD Is and Why It Works

Outcome-Driven Development is a methodology for designing software from the user's experience backward to the technical implementation.

It starts with a concrete question: what does each type of person in your system actually experience? Not what they want in the abstract, but what they see, what they do, and what changes as a result of what they do.

From those experiences, it derives everything else: the data that must exist, the logic that must run, the interfaces that must connect. Architecture is not designed separately and then populated with features. Architecture emerges from the aggregation of precisely-described outcomes.

ODD works for non-technical builders for three reasons:

**First, it works in domain language.** You do not need to know about APIs or database schemas or React components to write an outcome. You write what a teacher does when they approve a student. You write what the student sees when they receive confirmation. The technical translation is handled by AI.

**Second, it makes the implicit explicit.** The hard work in software design is not writing code — it is deciding what the code should do. ODD forces those decisions to be made before building begins. A well-written set of outcomes contains every decision that matters. Nothing is left to the AI's interpretation.

**Third, it creates a specification that AI can build from.** Claude Code, given a well-written outcome with all 6 fields complete, can build the implementation. Given a user story with 3 fields, it has to guess. ODD closes the specification gap.

---

## 6. The Outcome Template — All 6 Fields

Every outcome in an ODD plan has exactly 6 fields. All 6 are required. A missing field is not a shortcut — it is an ambiguity that will manifest as a bug or a rebuild.

---

**OUTCOME NAME**

A short, active-voice phrase that describes what happens from the system's point of view. Not "Teacher Dashboard" (that is a screen, not an outcome). Not "Manage Applications" (that is a category, not an outcome). Good names: "Teacher approves student application", "Student receives enrolment confirmation", "Admin generates monthly attendance report".

The name should make clear: who does it, what they do, and (implicitly) what changes as a result.

---

**PERSONA**

Which of your defined personas does this outcome belong to? Write the persona name and, briefly, their defining characteristic: "Maya, the cohort teacher — permanent staff who manages day-to-day student relationships".

One outcome = one primary persona. If multiple personas are involved, you probably have multiple outcomes that need to be separated.

---

**TRIGGER**

What causes this outcome to begin? The trigger is the moment the persona enters the experience described in this outcome.

Triggers take one of three forms:
- An action by the persona: "Maya opens the Applications tab in her dashboard"
- An event in the system: "A student submits their application form" (which triggers an outcome for Maya as well as one for the student)
- A scheduled event: "At 9am on Monday, the weekly summary email is generated"

The trigger is the entry point. Without it, the outcome has no context.

---

**EXPERIENCE**

The walkthrough. This is the longest and most important field. It describes, step by step, what the persona sees and does from trigger to completion.

Write it as a numbered list:
1. Maya sees the Applications tab showing 3 new applications since her last visit.
2. She clicks the first application — Jordan Chen's. A detail panel opens to the right.
3. She reads Jordan's personal statement, previous qualifications, and teacher reference.
4. She clicks Approve. A confirmation dialog asks: "Approve Jordan Chen for [Course Name]?" with an optional notes field.
5. She adds a note: "Strong application, particularly the project work experience."
6. She clicks Confirm Approval.
7. The application moves from the New Applications list to the Approved list.
8. A success notification appears: "Jordan Chen's application approved."
9. Jordan receives an email within 60 seconds: "Congratulations — your application has been approved."

The experience must be complete enough that someone who had never seen the system could use it as a test script. If a step is ambiguous, it will be built wrong.

---

**CONTRACTS CONSUMED**

What data does this outcome need that it did not itself create? Contracts consumed are the inputs.

List them by name and describe what is needed:
- `student-application`: The submitted application data — name, contact details, personal statement, qualifications, teacher reference.
- `course-configuration`: The course details — name, capacity, start date, requirements.
- `teacher-session`: The authenticated teacher's identity and permissions.

Contracts consumed connect this outcome to the outcomes that produce its inputs. If an outcome tries to consume a contract that no other outcome produces, there is a gap in the plan.

---

**CONTRACTS EXPOSED**

What data does this outcome produce that other outcomes will need? Contracts exposed are the outputs.

List them by name and describe what is produced:
- `application-decision`: The approval record — application ID, teacher ID, decision (approved/rejected), notes, timestamp.
- `student-notification-trigger`: Event that triggers the student notification system — student email, course name, decision.

Contracts exposed are the connective tissue of the system. An outcome that exposes nothing is a dead end. Most outcomes should expose something.

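The template above can be held in a small data structure. This is an illustrative sketch only (ODD Studio does not require outcomes to live in code, and the field names here simply mirror the template):

```python
from dataclasses import dataclass, field

@dataclass
class Outcome:
    """One ODD outcome. All six fields are required; empty contract
    lists are legitimate only for entry points and true dead ends."""
    name: str                # active voice: who does what
    persona: str             # one primary persona
    trigger: str             # the moment the experience begins
    experience: list[str]    # the numbered walkthrough, step by step
    contracts_consumed: list[str] = field(default_factory=list)
    contracts_exposed: list[str] = field(default_factory=list)

approve = Outcome(
    name="Teacher approves student application",
    persona="Maya, the cohort teacher",
    trigger="Maya opens the Applications tab in her dashboard",
    experience=[
        "Maya sees 3 new applications since her last visit",
        "She opens Jordan Chen's application and reads it",
        "She clicks Approve, adds a note, and confirms",
        "Jordan receives a confirmation email within 60 seconds",
    ],
    contracts_consumed=["student-application", "course-configuration",
                        "teacher-session"],
    contracts_exposed=["application-decision",
                       "student-notification-trigger"],
)
assert approve.contracts_exposed  # not a dead end
```

A structure like this makes the completeness rule mechanical: a reviewer (or a hook) can reject any outcome whose walkthrough is empty or whose contract lists are missing.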
---

## 7. Writing Good Outcomes — The 4 Traps

Most outcome-writing mistakes fall into one of four categories. Knowing them prevents most rework.

---

**Trap 1: The Screen Trap**

The screen trap is describing a UI layout instead of an experience. "The teacher sees a dashboard with a sidebar, a header with their name, and a main area with three tabs: Applications, Students, Reports."

This describes where things are. It says nothing about what happens when the teacher interacts with anything. The AI will build a layout. It will not know what any of the elements do.

**Fix:** Start from action. "The teacher opens the Applications tab and sees a list of pending applications. She clicks an application to review it." Describe behaviour, not layout. Layout is derived from behaviour.

---

**Trap 2: The Ambiguous Verb Trap**

"The teacher manages applications." Manages how? Approves? Rejects? Puts on hold? Archives? Reassigns?

Every verb in an outcome experience must be concrete enough to be tested. If you cannot write a pass/fail test for a verb, the verb is too ambiguous.

**Fix:** Replace abstract management verbs (manage, handle, oversee, administer) with specific action verbs (approve, reject, put on hold, move to, send, generate, publish, archive).

---

**Trap 3: The Missing Failure Trap**

Happy-path only. "The teacher fills in the form and submits it. They see a success message." What happens if the form is incomplete? What happens if the student's application is already approved by someone else? What happens if the cohort is full?

A system that only handles happy paths is not a finished system. It is a prototype.

**Fix:** For every action in the experience, ask: what can go wrong? Add the failure cases to the walkthrough. "If the cohort is already at capacity when Maya clicks Confirm Approval, a dialog appears: 'This cohort is now full. You can approve Jordan to the waitlist or to a different cohort.'"

---

**Trap 4: The Floating Outcome Trap**

An outcome that does not connect to anything. No contracts consumed (it appears from nowhere). No contracts exposed (it produces nothing). Or contracts that reference data that does not exist in any other outcome.

The floating outcome is a symptom of planning in isolation — designing screens without thinking about where the data comes from or where it goes.

**Fix:** For every outcome, ask: where does the data in this experience come from? Where does the data produced here go? Draw the arrows. If an arrow has no destination, the plan is incomplete.

---

## 8. The 7-Dimension Persona

A persona in ODD is not a user type. It is a precise description of a person's relationship with your system: what they know, what they can do, what they cannot see, and what motivates them.

ODD uses a 7-dimension persona model. Each dimension is a constraint on the outcomes that belong to this persona.

---

**1. Identity**

Name, role, relationship to the organisation. Specific enough to be real. "Maya Thompson, Cohort Teacher at Westfield Sixth Form College, permanent staff, 8 years in post."

The specificity matters because it prevents scope creep. The outcomes for "Maya" are outcomes for a permanent, experienced teacher with full cohort permissions — not outcomes for a supply teacher, not outcomes for a trainee, not outcomes for a teacher at a different organisation.

---

**2. Goal**

The single thing this persona is trying to accomplish when they engage with your system. One goal, clearly stated. "Maya wants to manage her cohort without administrative overhead consuming her teaching time."

If you cannot state the goal in one sentence, you probably have two personas.

---

**3. Context**

When, where, and how this persona typically uses the system. "Maya uses the system during form period (20 minutes), during free periods (40 minutes maximum), and occasionally at home in the evening. She uses a school laptop (Chrome browser) and sometimes her personal iPhone."

Context shapes UI decisions: mobile-first for evening use, keyboard shortcuts for power users, accessible font sizes for older displays.

---

**4. Knowledge**

What does this persona already know that affects how they use the system? "Maya understands the college's admissions policies in detail. She does not know what happens technically after she submits a decision — she expects the system to handle notifications and record-keeping."

Knowledge dimensions prevent you from over-explaining domain concepts the persona already understands, or under-explaining system concepts they do not.

---

**5. Permissions**

What is this persona allowed to do? What are they explicitly not allowed to do?

"Maya can: view all applications for her cohort, approve and reject applications, add notes to student records, view attendance data for her cohort.
Maya cannot: view applications for other cohorts, change course configurations, access financial data, approve applications for courses she does not teach."

Permissions define data boundaries. They are the input to your authentication and authorisation logic.

---

**6. Pain Points**

What is currently broken or frustrating about how this person does their work? What does the current state cost them?

"Maya currently receives application PDFs by email. She has to open each one, copy information into a spreadsheet, reply manually to each applicant, and update a shared tracker. This takes her 3 hours every application round."

Pain points anchor outcomes to real needs. They prevent you from building technically correct but practically useless software.

---

**7. Success State**

What does a good outcome look like for this persona? How will they know the system is working?

"Maya can process 20 applications in under 30 minutes, is confident that every applicant has been notified, and never needs to check whether her decisions were recorded correctly."

Success states become the benchmarks for outcome verification. They are the answer to "how good is good enough?"

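The seven dimensions can also be captured as a record, which makes it easy to check that none has been skipped. A minimal sketch (the field names follow the dimension list above; nothing in ODD requires personas to be stored this way):

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """The 7-dimension persona model: one field per dimension."""
    identity: str       # name, role, relationship to the organisation
    goal: str           # one sentence; two sentences means two personas
    context: str        # when, where, and on what device they work
    knowledge: str      # what they already know, and what they do not
    permissions: dict[str, list[str]]  # {"can": [...], "cannot": [...]}
    pain_points: str    # what the current state costs them
    success_state: str  # how they will know the system is working

maya = Persona(
    identity="Maya Thompson, Cohort Teacher, permanent staff, 8 years in post",
    goal="Manage her cohort without admin overhead consuming teaching time",
    context="Form and free periods; school laptop (Chrome), sometimes iPhone",
    knowledge="Knows admissions policy in detail, not the system internals",
    permissions={
        "can": ["view cohort applications", "approve", "reject", "add notes"],
        "cannot": ["view other cohorts", "change course configuration"],
    },
    pain_points="3 hours of manual PDF-and-spreadsheet work per round",
    success_state="20 applications processed in under 30 minutes",
)
assert "approve" in maya.permissions["can"]
```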
---

## 9. The Acid-Test Persona

Every ODD plan should include at least one acid-test persona. This is a persona at the edge of your user base — someone whose needs are non-standard, whose access is restricted, or whose context is unusual.

The acid-test persona is not the primary user. They are the stress test. If the system works correctly for the acid-test persona, it is almost certainly working correctly for everyone.

**Examples:**

A teacher who has just been hired and has not yet been assigned a cohort. Can they log in? What do they see? Can they accidentally access another teacher's data?

A student who began an application but never submitted it. Does the draft persist? Can they resume it? Is the partial application visible to teachers?

An admin who covers for a teacher who is on leave. Do they have the correct temporary permissions? Are those permissions revoked automatically when the teacher returns?

Writing acid-test personas forces you to think about states your system will definitely encounter. Ignoring them means discovering them in production.

---

## 10. Paired Personas and Data Boundaries

Many ODD systems involve paired personas: two personas who interact with the same data but from different perspectives and with different permissions.

Teacher / Student. Mentor / Mentee. Admin / Applicant. Provider / Client.

Paired personas create a specific design challenge: the same entity (an application, a booking, an assessment) must look different depending on who is viewing it.

The student's application looks like: a form they filled in, a status, a message from the teacher.
The teacher's view of the same application looks like: the form content, the student's history, action buttons to approve or reject.
The admin's view looks like: the application plus the teacher's decision plus an audit trail.

These are not three different objects. They are three views of one object, filtered by persona and permissions.

When you write outcomes for paired personas, write them separately. Outcome 4: Student submits application. Outcome 7: Teacher reviews application. Outcome 12: Admin audits application decisions. Three outcomes, one underlying entity.

The contracts exposed by Outcome 4 (the application data) are consumed by Outcomes 7 and 12. The contracts exposed by Outcome 7 (the decision data) are consumed by Outcome 12 (the audit trail) and also by the notification outcomes for the student.

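The "three views of one object" idea reduces to filtering one record by a per-persona field set. A minimal sketch, with hypothetical field names; in a real build the visible-field sets would be derived from each persona's permissions dimension:

```python
# One underlying application record, three persona-filtered views.
application = {
    "form": {"name": "Jordan Chen", "statement": "..."},
    "status": "approved",
    "teacher_message": "Strong application",
    "student_history": ["previous qualifications", "teacher reference"],
    "decision": {"teacher_id": "maya", "notes": "..."},
    "audit_trail": [("2025-09-01", "submitted"), ("2025-09-03", "approved")],
}

# Which top-level fields each persona may see (illustrative).
VISIBLE_FIELDS = {
    "student": {"form", "status", "teacher_message"},
    "teacher": {"form", "status", "student_history"},
    "admin":   {"form", "status", "decision", "audit_trail"},
}

def view_for(persona: str, record: dict) -> dict:
    """Return the persona-filtered view of one shared record."""
    allowed = VISIBLE_FIELDS[persona]
    return {k: v for k, v in record.items() if k in allowed}

assert "audit_trail" not in view_for("student", application)
assert "decision" in view_for("admin", application)
```

The design point: there is one entity and one source of truth; only the projection changes per persona.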
---

## 11. Contracts — What They Are and Why They Matter

A contract is a named, structured piece of data that one outcome produces and another outcome consumes.

The junction box analogy: imagine your house has electricity. The wiring runs through junction boxes in the walls. Each junction box has a defined input (the cable coming from the fuse box) and defined outputs (the cables running to sockets and switches). The junction box does not generate electricity — it routes it. And it does so through a defined interface that every electrician knows.

Contracts are the junction boxes of your software system. An outcome that processes student applications produces an `application-decision` contract. It does not matter how the approval was triggered, who the teacher was, or what the UI looked like — what matters is the shape of the data that comes out. Any outcome that needs to know whether a student was approved reads the `application-decision` contract. It reads from a defined interface, not from a specific implementation.

Why does this matter for non-technical builders? Because it means you can change the implementation of one outcome without breaking every outcome that depends on it. If you redesign the teacher's approval UI, the contract it produces is unchanged. The notification system, the record system, and the reporting system continue to work because they depend on the contract, not on the UI.

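In code, a contract is just an agreed shape. A sketch of the `application-decision` contract from section 6, with illustrative field names; the point is that the consumer depends only on this shape, never on the UI that produced it:

```python
from typing import TypedDict

class ApplicationDecision(TypedDict):
    """The `application-decision` contract: the junction box between
    the approval outcome and everything downstream. Field names are
    illustrative; what matters is that every consumer reads the same
    shape."""
    application_id: str
    teacher_id: str
    decision: str      # "approved" or "rejected"
    notes: str
    timestamp: str     # ISO 8601

def notify_student(d: ApplicationDecision) -> str:
    # The notification outcome consumes the contract, not the
    # approval UI, so the UI can be redesigned freely.
    return f"Application {d['application_id']}: {d['decision']}"

record: ApplicationDecision = {
    "application_id": "app-42",
    "teacher_id": "maya",
    "decision": "approved",
    "notes": "Strong application",
    "timestamp": "2025-09-03T10:15:00Z",
}
assert notify_student(record) == "Application app-42: approved"
```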
---

## 12. Every Outcome Produces and Consumes

The rhythm of a connected system: every outcome, except the first, consumes something. Every outcome, except the last, produces something.

When you find an outcome that consumes nothing (except perhaps an authenticated session), it is either the first step in a user journey or it is producing data from scratch (a creation outcome: student creates an application). That is fine — but make it explicit in the contracts.

When you find an outcome that produces nothing, ask: where does the data from this experience go? If a teacher approves an application and that approval goes nowhere — does not trigger a notification, does not update the student's record, does not inform the reporting system — then what is the approval for?

Every meaningful action in a system has consequences. Those consequences must be modelled as contracts. An outcome that appears to produce nothing is either a genuine dead end (the action has no downstream effects — this is rare in real systems) or a sign that some downstream outcomes have not been written yet.

Mapping the produce/consume rhythm across your outcomes is how you discover gaps in your plan before you build.

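The gap check is mechanical once the contracts are listed. A minimal sketch (the plan data is hypothetical): every consumed contract must be exposed by some outcome, and anything left over marks an outcome that has not been written yet:

```python
# A plan as (name, contracts_consumed, contracts_exposed) triples.
plan = [
    ("Student submits application", [], ["student-application"]),
    ("Teacher approves application",
     ["student-application", "course-configuration"],
     ["application-decision"]),
    ("Student receives confirmation", ["application-decision"], []),
]

def find_gaps(plan):
    """Return consumed contracts that no outcome in the plan exposes."""
    exposed = {c for _, _, out in plan for c in out}
    return sorted({c for _, consumed, _ in plan
                   for c in consumed if c not in exposed})

# `course-configuration` is consumed but never produced: a missing
# outcome (perhaps "Admin configures course") has not been written.
assert find_gaps(plan) == ["course-configuration"]
```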
---

## 13. The "Two Architects, One Door" Problem

Imagine two architects are each hired to design one half of a building. They work independently and produce excellent plans. When the plans are combined, there is one door where the two halves meet — but the first architect designed it at 900mm wide and the second architect designed it at 800mm wide.

Neither plan is wrong in isolation. The problem is the interface between them.

This is the most common failure mode in AI-assisted parallel development. When you ask two agents to build two outcomes simultaneously, each agent makes decisions about shared infrastructure: the database table that both outcomes read from, the API endpoint that both outcomes call, the authentication check that both outcomes rely on. Without coordination, the agents make different decisions. The outcomes do not connect.

The solution is shared contracts. Before any parallel build begins, a Coordinator agent reads all the outcomes in a phase, identifies all shared infrastructure, and publishes a canonical definition of each shared element. Both building agents read this definition and conform to it.

If Agent A builds the `applications` table with a column called `teacher_id` and Agent B builds a query that looks for `approver_id`, the handshake fails. The shared contract says the column is `teacher_id`. Both agents use `teacher_id`. The handshake succeeds.

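The Coordinator's published definition can be as simple as a shared schema that both agents read instead of inventing their own names. A toy sketch of the `teacher_id` example (the schema and query builder are hypothetical):

```python
# Canonical definitions published by the Coordinator before the
# parallel build starts. Both agents read this; neither invents names.
SHARED_CONTRACTS = {
    "applications": ["id", "student_id", "teacher_id", "status", "notes"],
}

def agent_b_query(columns: list[str]) -> str:
    """Agent B builds its query from the shared definition, so it can
    never look for a column (such as `approver_id`) that Agent A's
    table does not have."""
    known = SHARED_CONTRACTS["applications"]
    missing = [c for c in columns if c not in known]
    if missing:
        raise ValueError(f"handshake failure: unknown columns {missing}")
    return f"SELECT {', '.join(columns)} FROM applications"

assert agent_b_query(["id", "teacher_id"]) == \
    "SELECT id, teacher_id FROM applications"
```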
---

## 14. Reading a Dependency Graph

A dependency graph shows the relationships between your outcomes. It answers the question: what must exist before this can be built?

Reading a dependency graph:
- An arrow from Outcome A to Outcome B means "B depends on something A produces"
- An outcome with no incoming arrows is a starting point (nothing else must be built first)
- An outcome with many outgoing arrows is a load-bearing dependency (many things depend on it — build it carefully and test it thoroughly)
- A cycle (A depends on B, B depends on A) is a design error — it means you have a circular dependency that cannot be resolved by linear building

The dependency graph tells you the build order. You cannot build Outcome 7 (teacher reviews application) before Outcome 4 (student submits application) because 7 consumes what 4 produces. You can build Outcomes 4 and 5 in parallel if they share no dependencies.

When you brief Claude Code or a Ruflo swarm for a phase, the dependency graph determines what gets built first and what can be built simultaneously.

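Build order and cycle detection follow directly from the graph. A minimal topological-sort sketch with a hypothetical four-outcome plan: each wave can be built in parallel, and a cycle is reported as the design error it is:

```python
# deps maps each outcome to the outcomes it depends on.
deps = {
    "submit application": set(),
    "configure course": set(),
    "review application": {"submit application", "configure course"},
    "notify student": {"review application"},
}

def build_order(deps):
    """Group outcomes into waves. Everything within one wave can be
    built in parallel; a cycle raises an error."""
    remaining = {o: set(d) for o, d in deps.items()}
    waves = []
    while remaining:
        ready = sorted(o for o, d in remaining.items() if not d)
        if not ready:
            raise ValueError(f"circular dependency among {sorted(remaining)}")
        waves.append(ready)
        for o in ready:
            del remaining[o]
        for d in remaining.values():
            d.difference_update(ready)
    return waves

assert build_order(deps) == [
    ["configure course", "submit application"],
    ["review application"],
    ["notify student"],
]
```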
|
|
380
|
+
|
|
381
|
+
---
|
|
382
|
+
|
|
383
|
+
## 15. Architecture Is Derived, Not Designed
|
|
384
|
+
|
|
385
|
+
In traditional software development, an architect designs the system structure first and developers build within it. The architecture precedes the implementation.
|
|
386
|
+
|
|
387
|
+
In ODD, it is reversed. Architecture is derived from outcomes.
|
|
388
|
+
|
|
389
|
+
When you have written all your outcomes and mapped all your contracts, the architecture is implicit in the contracts. The data layer is the set of contracts that must be persisted. The API layer is the set of contracts that must be exchanged between client and server. The authentication layer is the intersection of permissions across all personas. The notification layer is the set of contracts that trigger external communications.
|
|
390
|
+
|
|
391
|
+
You do not design these layers. You read them from your outcome documents and your dependency graph. Claude Code then implements them.
|
|
392
|
+
|
|
393
|
+
This reversal matters because it prevents gold-plating and under-building. Traditional architecture tends toward over-engineering (building infrastructure for features that do not exist yet) or under-engineering (not seeing the full scope of what will eventually be needed). ODD-derived architecture builds exactly what the outcomes require, because the outcomes are the specification.

When the plan is complete, ask Claude Code: "Read all the outcomes and contracts in this plan. Derive the architecture: what database tables are needed, what API routes, what authentication model, what external services. Do not build anything yet — just derive and describe."

---

## 16. The 4-Level Build Protocol

See `build/build-protocol.md` for the full protocol. Summary:

- **Level 1 — Session Protocol**: Re-orient Claude Code at every session start. Restore Ruflo state. Write handover notes. Close sessions with commits and saved state.
- **Level 2 — Phase Protocol**: Verify previous dependencies. Identify shared infrastructure. Run the Coordinator agent before parallel building. Run Integration Protocol after all outcomes complete.
- **Level 3 — Outcome Protocol**: Brief Claude Code with all 6 fields. Run shape check before verification. Verify using the walkthrough as a literal test script. Describe failures in domain language. Re-verify the entire outcome after fixes. Commit when verified.
- **Level 4 — Integration Protocol**: Handshake tests between connected outcomes. Data flow trace for primary entities. Cross-persona checks. Fix in domain language. Phase commit.

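
The Level 1 save/restore step can be sketched in a few lines, assuming session state lives in a JSON file like the `.odd/state.json` template the package ships; the field names here are illustrative, not the template's actual schema:

```python
import json
import tempfile
from pathlib import Path

def save_session(state_file, phase, outcome, handover):
    """Level 1 close-out: persist where the build stopped so the next
    session can re-orient Claude Code without guesswork."""
    state_file.parent.mkdir(parents=True, exist_ok=True)
    state_file.write_text(json.dumps(
        {"phase": phase, "current_outcome": outcome, "handover": handover},
        indent=2))

def restore_session(state_file):
    """Level 1 start-up: read the saved state, or start fresh."""
    if state_file.exists():
        return json.loads(state_file.read_text())
    return {"phase": 1, "current_outcome": None, "handover": ""}

# Demo against a throwaway directory.
state_path = Path(tempfile.mkdtemp()) / ".odd" / "state.json"
save_session(state_path, 2, "teacher-reviews-application",
             "Outcome verified; integration checks still pending")
restored = restore_session(state_path)
```

Whatever the exact schema, the discipline is the same: state is written at session close and read back at session start, never reconstructed from memory.
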
---

## 17. Common Mistakes and How to Avoid Them

**Building before the plan is complete**

The pressure to start building is real. Resist it. Every hour of upfront planning saves two hours of rework. An incomplete plan built at pace produces a system that looks finished but does not work. A complete plan built carefully produces a system that works.

The test for plan completeness: can you trace any entity (an application, a booking, a student) from its creation through every outcome that touches it to its final state? If you hit a gap — a moment where the entity should change state but no outcome handles it — the plan is incomplete.

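
The completeness test can be run as a simple trace. A sketch, assuming the entity's lifecycle and the outcomes covering each transition are listed by hand (all names are illustrative):

```python
def find_gaps(states, transitions):
    """Check that every state change of an entity is handled by some
    outcome. `states` is the entity's lifecycle in order; `transitions`
    maps (from_state, to_state) to the outcome that handles it."""
    return [(a, b) for a, b in zip(states, states[1:])
            if (a, b) not in transitions]

lifecycle = ["draft", "submitted", "reviewed", "decided"]
covered = {
    ("draft", "submitted"): "student submits application",
    ("submitted", "reviewed"): "teacher reviews application",
    # Nothing moves a reviewed application to a decision: the plan is incomplete.
}
print(find_gaps(lifecycle, covered))
```

Every pair the function returns is a moment the entity should change state with no outcome to handle it, which is exactly the gap described above.
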
**Describing the wrong level of detail**

Outcomes can be too abstract ("the admin manages the system") or too specific ("the admin clicks the blue button in the top-right corner of the screen"). The right level of detail is behavioural: what the persona does, what they see, what changes. Not what colour the button is, but what happens when they press it.

**Forgetting about the second persona**

Most outcomes exist in pairs. When you write the outcome for a teacher approving an application, you are describing the teacher's experience. Somewhere in that outcome, an event occurs that the student should experience. That student experience is a separate outcome. Do not collapse it into the teacher outcome.

**Treating errors as edge cases**

Error handling is not an edge case. It is a first-class concern. In any real system, users will submit incomplete forms, will attempt actions they are not permitted to perform, will encounter states they were not expecting. If your outcomes only describe the happy path, your system will behave unpredictably when anything goes wrong.

**Building outcomes in the wrong order**

The dependency graph tells you the build order. If you build Outcome 12 before Outcome 4, you are building something that depends on data that does not yet exist. The build may appear to succeed (the AI will mock the missing data) but the integration will fail. Always build in dependency order.

---

## 18. The Hand-Off to Claude Code

When your ODD plan is complete and the Build Protocol is running, your relationship with Claude Code is that of a domain expert briefing an implementation team. You know what the system must do. Claude Code knows how to build it.

The quality of the build depends on the quality of the brief.

**A complete brief for an outcome includes:**
1. All 6 fields from the outcome document — pasted verbatim, not summarised
2. The stack to use (Next.js, TypeScript, Tailwind CSS v4, shadcn/ui, Framer Motion, PostgreSQL/Prisma, NextAuth.js, Stripe where applicable, Resend for email)
3. The domain language to use for files, variables, and components
4. The verification walkthrough to treat as a test script
5. Any rules from CLAUDE.md that are relevant

**What you do not tell Claude Code:**
- How to implement it (that is Claude Code's job)
- What files to create (Claude Code decides, based on the stack and the requirement)
- What the database schema should look like (Claude Code derives this from the contracts)

**What to do when the build does not match the brief:**

Describe the discrepancy in domain language. "The application detail panel should show the student's teacher reference, but I can see the personal statement twice and the teacher reference is missing." Then ask Claude Code to re-read the outcome document and correct the discrepancy.

Do not attempt to describe the fix in technical terms. You do not know whether the problem is in the database query, the API route, or the UI component. Claude Code does. Your job is to describe what is wrong from the user's perspective — Claude Code's job is to find and fix the cause.

**The final test of a complete system:**

Run the acid-test persona. Log in as the person at the edge of your user base — the newly hired teacher, the student with a partially completed application, the admin covering for someone on leave. If the system handles their experience correctly, it is almost certainly handling everyone else's correctly too.
|