odd-studio 1.0.1 → 2.0.0

# Chapter 1: It's All About Clarity

The fundamental problem of AI-assisted development is specification precision, not technical skill. The AI can build from a clear specification. It cannot invent domain knowledge.

This is the single most important idea in the entire methodology. You are not learning to code. You are learning to specify — to describe what people need to do, what should happen when they do it, and how you will know it worked. The AI handles everything else.

## Key Principles

- **Specification precision is the bottleneck, not technical ability.** The AI can generate code, configure databases, wire up APIs. What it cannot do is know that your customers need to see their booking confirmation before they leave the page.

- **Domain knowledge is yours alone.** You know your users, your workflows, your edge cases. No model, however capable, has sat in your meetings, handled your customer complaints, or watched someone struggle with your competitor's product.

- **Clarity is measurable.** A clear specification can be read by someone unfamiliar with the project and understood without follow-up questions. If it requires a conversation to interpret, it is not yet a specification.

- **The AI will fill gaps with guesses.** When a specification is vague, the AI does not stop and ask. It makes a plausible assumption and builds from it. The result looks correct until a real user encounters the assumption.

## Red Flags

- Vague requirements treated as specifications. "Users should be able to manage their profile" is not a specification. It is a wish. A specification describes what managing a profile looks like step by step.

- Assuming the AI will figure out domain details. If you do not state that a cancelled booking should trigger a refund within 48 hours, it will not happen. The AI does not know your refund policy.

- Confusing technical fluency with specification quality. You do not need to know how a database works. You need to know that when a teacher marks an assignment, the student should see the grade within five seconds.

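The contrast between a wish and a specification is easier to see side by side. The outcome format below is a sketch, not ODD Studio's actual schema, and the account-settings flow is invented:

```markdown
**Wish (not buildable):** Users should be able to manage their profile.

**Specification (buildable):**

Outcome: Sarah updates her contact email
- Sarah opens "Account settings" from the main menu.
- She replaces her email address and saves.
- She sees the message "Your email has been updated" on the same page.
- A confirmation message is sent to the new address within one minute.
Verification: log in as Sarah, change the email, confirm the on-page
message appears and the confirmation arrives at the new address.
```

Each line answers a question the AI would otherwise answer with a guess.
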
## What This Means for You

Your job is not to learn the technology. Your job is to be ruthlessly specific about what people need to do in your system and what should happen when they do it. Every chapter that follows builds on this foundation.

If you have already started writing outcomes for your project, review them now. Can someone who knows nothing about your domain read each outcome and understand exactly what should happen? If not, the specification needs work — and that work is yours to do.

Next: Chapter 2 explains how to collaborate on the technical decisions that sit between your specification and the built system.

# Chapter 10: The Build Protocol

ODD Studio handles all mechanics — context loading, contract validation, re-briefing, committing. You do four things: type `/odd`, type `*build`, verify the result, and type `confirm` when it passes.

The Build Protocol is the repeating rhythm of every session. It is deliberately simple because the complexity belongs in the specification, not in the process. If the specification is precise, the build protocol is almost mechanical. If the build goes wrong, the cause is almost always in the specification, not in the process.

## Key Principles

- **The tool handles the mechanics. You handle the judgment.** ODD Studio loads context from ruflo, reads the contract map, identifies the next outcome to build, briefs the AI, waits for the result, and presents you with a verification checklist. Your job is to follow that checklist as the persona and judge whether the result is correct.

- **The session rhythm is: `/odd`, `*build`, verify, `confirm`.** `/odd` loads the skill and restores project state from ruflo. `*build` starts the next outcome. You verify the result against the checklist. `confirm` commits the verified outcome and advances to the next one. That is it.

- **Re-briefing is automatic.** You do not need to remind the AI what your project is, what has been built, or what comes next. Ruflo stores all of this. ODD Studio reads it at the start of every session. If you find yourself explaining context, something is wrong with the state — not with the process.

- **One outcome per build cycle.** Each `*build` targets one outcome. Build it, verify it, confirm it, move on. Trying to build multiple outcomes in one cycle defeats the purpose of outcome-level verification.

## Red Flags

- Re-briefing the AI manually. If you are pasting specifications, explaining context, or reminding the AI about previous decisions, the state management is not working. Check ruflo. Check CLAUDE.md.

- Tracking state in a separate document. Notes, spreadsheets, or documents that track what has been built and what comes next are a sign that you are not trusting the tool to manage state. The tool does this. Let it.

- Building without verifying. Typing `confirm` without following the verification checklist is the fastest way to accumulate hidden defects. Every unverified outcome is a risk to every outcome that depends on it.

- Batching multiple outcomes into one build. Each outcome has its own verification checklist for a reason. Mixing them makes it impossible to know which outcome caused a failure.

## What This Means for You

Your next session: type `/odd`. Read what ODD Studio tells you about where you are. Type `*build`. Follow the checklist. If it passes, type `confirm`. If it fails, describe the failure in your own words — not in technical terms. ODD Studio handles the fix.

Next: Chapter 11 explains why verification is your job and no tool can replace your judgment.

# Chapter 11: Verification Is Your Job

Automated tests confirm the code does what it was told. Verification confirms it was told to do the right thing. Only you can do this. No tool can check whether an email has the right tone, whether a booking flow makes sense to a first-time user, or whether a dashboard shows what a manager actually needs.

## Key Principles

- **Automated tests confirm the code does what it was told. Verification confirms it was told to do the right thing.** Tests check that the system behaves as specified. Verification checks that the specification was correct. A perfectly tested system built from a flawed specification is a perfectly built wrong thing.

- **An outcome is verified when every step passes on a single complete run.** Not "I checked the main bits." Every step in the checklist, followed in order, as the persona, from start to finish, in one run. If you restart partway through, the run does not count.

- **Verification is done as the persona.** You are not checking as yourself — you are checking as the persona in their situation. If the persona is on a phone with limited time, verify on a phone. If the persona is a first-time user, approach the interface as if you have never seen it.

## Three Rules of Verification

- **Don't skip steps.** Every step in the checklist tests a specific part of the outcome. Skipping a step means that part is unverified. Unverified parts fail in production.

- **Don't approximate.** "The confirmation email probably looks fine" is not verification. Open it. Read it. Does it contain the right information? Does the link work? Check the actual result, not your assumption about it.

- **Verify the failure paths.** If the outcome includes what happens when something goes wrong — payment declined, event sold out, invalid input — verify those paths too. Failure paths are where most real-world problems hide because they are the paths least often tested.

## Red Flags

- "The build looks fine." This is not verification. It is a feeling. Follow the checklist.

- "I tested the main flow." The main flow is the happy path. What about failure paths and edge cases? If they are in the checklist, they need to be checked.

- "It worked when I tried it." Once is not verification. Did you follow every step? As the persona? In their situation? On one complete run?

- Verification delegated to someone who does not know the domain. Verification requires domain judgment. Someone unfamiliar with your users and workflows cannot judge whether the result is correct.

## What This Means for You

Next time you reach verification, slow down. Open the checklist. Follow every step as the persona. When something is wrong, describe what you expected and what actually happened — in domain language. ODD Studio takes it from there.

Next: Chapter 12 walks through a complete single-outcome build cycle from start to finish.

# Chapter 12: Building One Outcome

One command, one build, one verification. This chapter is the methodology in action — a complete cycle from `*build` to `confirm`, showing exactly what happens at each stage and what your role is at each moment.

No new principles here. Everything has been introduced in earlier chapters. This chapter shows them working together in sequence.

## The Cycle

**Step 1: `*build`.** You type `*build`. ODD Studio reads the next outcome from your phase plan, loads the contract map, checks which contracts are available from previously built outcomes, and briefs the AI. The AI builds the outcome. You wait.

**Step 2: The checklist appears.** When the build completes, ODD Studio presents the verification checklist derived from the outcome's verification field. Each step corresponds to something you specified. The checklist is your specification, turned into a sequence of checks.

**Step 3: You verify as the persona.** Step into the persona's situation. Follow each checklist item in order. Check the actual result, not your expectation of it. If the persona is on a phone, verify on a phone.

**Step 4: Pass or fail.** If every step passes on a single complete run, the outcome is verified. If any step fails, describe the failure in domain language — what you expected to see and what you actually saw. Not "the API returned a 500." Instead: "I completed the booking but no confirmation email arrived."

**Step 5: Failure handling.** When you describe a failure, ODD Studio handles the fix. It re-reads the specification, makes the correction, and presents the checklist again. You verify again from the beginning — a partial re-verification does not count.

**Step 6: `confirm`.** When every step passes, you type `confirm`. ODD Studio commits the verified outcome to git, updates ruflo with the new project state, and advances the plan to the next outcome. The cycle is complete.

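As a concrete illustration, a checklist for a hypothetical booking outcome might look like the sketch below. The outcome, steps, and format are invented; what ODD Studio actually presents may differ:

```markdown
Outcome: Sarah books two tickets for an event

- [ ] As Sarah, open the event page on a phone.
- [ ] Select two tickets and complete payment with a test card.
- [ ] The confirmation page shows the event name, date, and ticket count.
- [ ] A confirmation email arrives with a working "View booking" link.
- [ ] Attempt the same booking for a sold-out event and confirm a clear
      "sold out" message appears instead of a payment form.
```

Note the final step: failure paths belong on the checklist alongside the happy path.
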
## What You Do vs. What the Tool Does

**You do:** verify the result as the persona. Describe failures in domain language. Confirm when satisfied.

**The tool does:** load context, brief the AI, run the build, present the checklist, handle fixes, commit, update state.

## Red Flags

- Describing failures in technical language. "The database query is wrong" is not your job to diagnose. "The booking page shows last month's events instead of upcoming ones" is.

- Confirming without completing verification. Every unverified step is a hidden risk. Do not confirm until every step passes.

- Modifying the checklist during verification. The checklist comes from your specification. If the checklist is wrong, the specification needs updating — that is a separate step, not something to fix mid-verification.

## What This Means for You

This is the rhythm you will repeat for every outcome in your project. It is simple by design. The complexity is in the specification — which you have already done. The build cycle is just the execution.

Next: Chapter 13 introduces the swarm — the same cycle, applied to multiple outcomes simultaneously.

# Chapter 13: Concurrent Outcomes and the Swarm

`*swarm` builds all independent outcomes in the current phase simultaneously. Four agents — coordinator, backend, UI, QA — work from the same contract map. Your role is unchanged: verify each outcome, describe failures in domain language.

The swarm is not a different methodology. It is the same methodology — build, verify, confirm — applied to several outcomes at once. The contract map guarantees that outcomes within a phase have no dependencies on each other, so they can be built in parallel without conflicts.

## Key Principles

- **The swarm is not an advanced feature. It is the same methodology applied to several outcomes at once.** If you can build one outcome, you can use the swarm. The process is identical: each outcome gets built, each outcome gets a verification checklist, you verify each one. The only difference is that the builds happen in parallel instead of sequentially.

- **Four agents, clear roles.** The coordinator manages build order and resolves conflicts. The backend agent builds data and logic. The UI agent builds the interface. The QA agent runs automated checks. They work from your specification automatically.

- **Contract conflicts are resolved by the coordinator.** When two outcomes touch related contracts, the coordinator ensures consistency. You see the result in verification, not during the build.

## Red Flags

- Trying to prevent contract conflicts before the swarm runs. The coordinator handles this. Your job is to write precise specifications and verify the results. Trying to micromanage the agents' work is solving a problem that the tool already solves.

- Routing failures to specific agents. If a verification fails, describe what went wrong in domain language. "The booking confirmation shows the wrong date." Do not say "the backend agent stored the wrong value." The coordinator determines which agent needs to address the issue.

- Using the swarm for outcomes with dependencies. The swarm builds outcomes within a single phase — outcomes that are independent by definition. If you try to swarm outcomes from different phases, dependent outcomes will fail because the contracts they consume do not exist yet.

- Skipping verification because the swarm "handles quality." The QA agent runs automated tests. Automated tests confirm the code does what it was told. You still need to confirm it was told the right thing. Verification is still your job.

## What This Means for You

When you reach a phase with multiple outcomes, type `*swarm` instead of `*build`. The swarm builds all outcomes in the phase. You receive a verification checklist for each one. Verify each outcome individually, in any order. Describe failures in domain language. Confirm when all outcomes pass.

The swarm does not change what you do. It changes how much gets done in each session.

Next: Chapter 14 addresses the parts of building software that feel intimidating — security, authentication, and data protection — and shows they are just outcomes like any other.

# Chapter 14: The Things That Scare You

Security, authentication, and data protection are not technical mysteries. They are constraints expressed in outcome language. "This person should be able to do this. This person should not be able to do this other thing." Write them as outcomes. Verify them.

The feeling that security is "too technical" is understandable but incorrect. Security requirements are domain requirements. You know who should access what. You know which data is sensitive. That knowledge is the specification. The implementation is the AI's job.

## Key Principles

- **Every security requirement can be expressed as a pair of statements: "this person should be able to do this" and "this person should not be able to do this other thing."** A teacher should be able to see their students' grades. A student should not be able to see other students' grades. That is a security specification. Write it as an outcome. Build it. Verify it.

- **Prohibition outcomes are as important as permission outcomes.** Most people naturally write outcomes about what users can do. Security requires outcomes about what users cannot do. If you only specify what is allowed, everything else is undefined — and undefined behaviour is where vulnerabilities live.

- **Verification of security outcomes means testing the prohibition.** Log in as the wrong persona. Try to access the restricted page directly by URL. Try to see another user's data. If the system stops you, it passes. If not, describe what happened.

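Written as outcomes, a permission and its matching prohibition might look like the following sketch. The school scenario, personas, and format are invented for illustration:

```markdown
Outcome: A teacher views grades for their own class
- Ms Patel logs in and opens "My classes".
- She selects her class and sees each student's latest grade.
Verification: log in as Ms Patel and confirm every grade shown
belongs to a student in her class.

Outcome: A student cannot view another student's grades
- Dev logs in and opens his own grades page.
- He edits the URL to point at another student's grades page.
- He sees an "access denied" message, not the other student's grades.
Verification: log in as Dev, request another student's grades page
directly by URL, and confirm the data is not shown.
```

The second outcome deliberately tests direct URL access, not just the absence of a link.
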
## Red Flags

- An outcome with no prohibition. If a permission outcome ("teachers can view grades") exists without a corresponding prohibition outcome ("students cannot view other students' grades"), the boundary is unspecified.

- No verification step for direct URL access. If your verification only checks that the navigation does not show a link, it misses the case where someone types the URL directly. Always verify that direct access is blocked, not just that the link is hidden.

- A data collection outcome with no deletion outcome. If you collect personal data, you need an outcome for deleting it. This is a legal requirement in most jurisdictions. If data comes in, there must be a way for it to go out.

- Security treated as a separate phase. Security outcomes are not an add-on. They belong in the same phase as the outcomes they constrain. The "view grades" permission outcome and the "block unauthorised grade access" prohibition outcome should be built and verified together.

## What This Means for You

For every outcome that involves access to data or actions restricted to certain personas, write the corresponding prohibition outcome. Then verify both: check that the right persona can do the thing, and check that the wrong persona cannot.

You already know who should see what in your system. Write it down in outcome format. That is all security requires from you.

Next: Chapter 15 shows that interface quality is a specification problem — and gives you five principles to specify it.

# Chapter 15: Good Interfaces Are Specified, Not Designed

Interface quality is a specification problem. The walkthrough already describes behaviour — the missing piece is being explicit about visible state, feedback, and what the persona should understand at each moment. `*ui` loads five principles and component defaults.

You do not need to be a designer. You need to be specific about what the persona should see and do at every step. "Sarah sees her bookings" is vague. "Sarah sees upcoming bookings sorted by date, each showing event name, date, and ticket count, with a cancel button" is a specification.

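That contrast can be written directly into a walkthrough step. A sketch, using the invented booking scenario from the sentence above:

```markdown
**Vague:** Sarah sees her bookings.

**Specific:** Sarah sees her upcoming bookings sorted by date, soonest
first. Each booking shows the event name, date, and ticket count, and
has a "Cancel booking" button. If she has no upcoming bookings, she
sees "You have no upcoming bookings" with a "Browse events" link.
```

The empty state is part of the specification too; an unspecified empty state becomes a dead end in the built interface.
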
## Key Principles

- **Every interface element that communicates status, confirms an action, or guides the next step is a specification problem, not a design problem.** You do not need to choose colours or fonts. You need to specify what information is visible, what feedback the persona receives, and what they should do next.

## The Five Principles

- **Visible state.** The persona should always be able to tell what state the system is in. Is the booking confirmed? Is the payment processing? Is the form saved? If the state is not visible, the persona is guessing.

- **Single focus.** Each screen should ask the persona to do one thing. If a screen asks for payment details while also showing event recommendations, the persona's attention is split. One action per screen.

- **No dead ends.** Every screen should offer a clear next step. If the persona completes an action and sees no indication of what to do next, the interface has a dead end.

- **Clear confirmation.** When the persona completes an action, they should see explicit confirmation. Not a subtle change — a clear message. "Your booking is confirmed" with the relevant details. Ambiguous feedback creates anxiety and repeat actions.

- **Recoverable errors.** When something goes wrong, the persona should see what went wrong and how to fix it. "Something went wrong" is not recoverable. "Your payment was declined — please check your card details or try a different card" is recoverable.

## Red Flags

- Verification steps that can only be followed visually. If a verification step says "check that the page looks correct," it is not a verification step. Specify what should be visible: which elements, which data, which state.

- No keyboard-only step in the checklist. Some personas use keyboards, screen readers, or assistive technology. If your verification checklist does not include a step for keyboard navigation, you are excluding those personas.

- Feedback described as "appropriate" or "clear." These are subjective. Specify the exact message, the exact information displayed, the exact next step offered.

## What This Means for You

Review your outcome walkthroughs. At each step, ask: what does the persona see? What do they understand about the system's state? What do they do next? If any of these are ambiguous, make them explicit. That is interface specification.

Next: Chapter 16 explains how ODD handles change — and why it is cheaper than you expect.

# Chapter 16: Managing Change

Change in ODD is cheap because the scope is explicit. ODD Studio knows which outcomes consume which contracts. A changed contract flags only consuming outcomes for re-verification. Non-consuming outcomes are untouched.

Change is inevitable. Requirements evolve, users give feedback, you learn things during the build that you could not have known before. The question is not whether change will happen but how expensive it will be. In ODD, the answer is: it depends on contract dependencies, and those are mapped.

## Key Principles

- **The scope of a change is determined by contract dependencies, not by intuition.** When something changes, your instinct might be to re-check everything. But the contract map tells you exactly what is affected. If you change a booking contract, only outcomes that consume the booking contract need re-verification. Everything else is structurally unaffected.

- **Change is expensive without a methodology because nothing records what depends on what.** In a traditional project, a change to one part might affect any other part. In ODD, the contract map is the dependency record. It tells you the blast radius of every change.

- **Specification changes and verification failures are different things.** A specification change means the requirement has changed. A verification failure means the build did not match the specification. A specification change triggers an outcome update and rebuild; a verification failure triggers a fix and re-verification.

- **Changed outcomes propagate through the contract map.** If a changed outcome produces a different contract than before, every outcome consuming that contract is flagged for re-verification. ODD Studio handles this tracking automatically.

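A fragment of a contract map makes this concrete. The format and outcome names below are invented for illustration; ODD Studio's actual map may look different:

```markdown
Contract: booking-record (event name, date, ticket count)
  produced by: "Sarah books tickets"
  consumed by: "Sarah receives a confirmation email"
               "Organiser views the attendee list"

Contract: event-listing (name, date, venue)
  produced by: "Organiser publishes an event"
  consumed by: "Sarah browses upcoming events"
```

If the booking-record contract gains a seat number, only the confirmation email and attendee list outcomes are flagged for re-verification; event browsing is structurally unaffected.
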
## Red Flags

- Re-verifying outcomes that don't consume the changed contract. This is wasted effort. Trust the contract map. If an outcome does not consume the changed contract, it is not affected.

- Treating a specification gap as a verification failure. If during verification you realise the specification itself is wrong — not that the build failed to match it, but that the specification does not capture what you actually need — that is a specification change. Update the outcome, rebuild, then re-verify.

- Avoiding change because it feels expensive. Check the contract map first. The actual scope may be much smaller than the felt scope.

- Making changes without updating the specification. If you fix something without updating the outcome, the specification and system are out of sync. The next build will reintroduce the problem.

## What This Means for You

When a change is needed — and it will be — start with the contract map. Identify which contract is affected. Update the specification for the outcome that produces it. Rebuild. Then re-verify only the outcomes that consume it. ODD Studio flags these for you.

Change is not a failure of planning. It is a sign that you are learning. The methodology makes it cheap.

Next: Chapter 17 goes deeper into the swarm — how it works, why it fails, and what to do about it.

# Chapter 17: The Swarm in Depth

The ruflo swarm is the Build Protocol run in parallel by four specialised agents sharing a common specification. When the swarm produces wrong output, the cause is almost always a specification problem — incomplete contract map, vague walkthrough, ambiguous verification steps.

This chapter is for when the swarm produces wrong output. The instinct is to blame the tool. In almost every case, the root cause is the specification, not the agents.

## Key Principles

- **Specification quality determines swarm quality.** The swarm reads your outcomes, your contract map, and your verification steps. It builds from what you wrote. If what you wrote is vague, the swarm produces vague results. If the contract map has gaps, the agents make different assumptions about the missing data. Precise specifications produce precise builds.

- **The four agents have clear boundaries.** The coordinator manages sequencing and conflict resolution. The backend agent builds data and logic. The UI agent builds the interface. The QA agent runs automated verification. These boundaries are fixed.

- **Agent coordination follows the contract map.** The agents do not negotiate with each other. They read the contract map. If it says a booking record contains a customer name, event date, and ticket count, both backend and UI agents build to that specification. Conflicts come from ambiguous or missing contracts.

## Three Failure Sources

- **Incomplete contract map.** If a contract between two outcomes is not explicitly mapped, the agents producing and consuming that data have nothing to align on. They will each make assumptions. Those assumptions will conflict. The fix is not agent configuration — it is completing the contract map.

- **Vague outcome specification.** If the walkthrough says "the user sees their information," what information? The backend agent guesses one set of fields. The UI agent guesses another. The result is a mismatch. The fix is making the walkthrough specific.

- **Ambiguous verification steps.** If the verification says "check that the page displays correctly," the QA agent cannot write meaningful automated checks, and you cannot verify meaningfully either. Specify what "correctly" means: which elements, which data, which state.

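The first two failure sources share the same fix: make the missing data explicit. A before-and-after sketch, with invented fields and format:

```markdown
**Ambiguous (each agent guesses):** The user sees their information.

**Explicit (agents align):**
Contract: customer-profile
  fields: full name, email, phone, preferred contact method
The profile page shows all four fields, with an "Edit" button
next to each.
```

With the fields named, the backend and UI agents build to the same shape instead of guessing separately.
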
## Red Flags

- Blaming agent configuration for incorrect output. If the output is wrong, the specification is the first place to look. Fix the specification first.

- Manually coordinating between agents. The contract map is the single source of truth. Bypassing it creates conflicts.

- Increasing the number of agents to solve quality problems. More agents do not fix specification gaps. They amplify them.

## What This Means for You

When a swarm build fails or produces unexpected results, resist the urge to troubleshoot the agents. Go back to the specification. Check the contract map for gaps. Check the walkthroughs for vagueness. Check the verification steps for ambiguity. Fix those, run the swarm again.

Next: Chapter 18 wraps up — the fundamental problem, what has changed, and where you go from here.

# Chapter 18: Conclusion

Building software that correctly encodes domain knowledge is hard. What changed is who can solve it. The domain expert who writes precise outcome specifications can now build production-grade software — not by learning to code, but by learning to specify.

This is the argument of the entire book in one sentence. Everything you have read — outcomes, personas, contracts, verification, the swarm — serves a single purpose: turning your domain knowledge into precise specifications that an AI can build from correctly.

## What Has Not Changed

The fundamental problem is the same one that has existed since the first line of code was written: the person who understands the domain is not the person who builds the software, and something is always lost in translation. Most software projects that failed did so because the built system did not match what was actually needed. The technology was rarely the problem. The translation was.

## What Has Changed

The translation gap has collapsed. You no longer need a developer to interpret your requirements. You write the specification. The AI builds from it. ODD Studio manages the mechanics. The only remaining question is whether the specification is precise enough — and that is a question you are now equipped to answer.

## The Key Insight

As AI models improve, they become better at building from precise specifications and worse at compensating for vague ones. A better model does not make vague requirements work — it makes precise requirements produce even better results. The difference in output quality between a clear specification and a vague one keeps growing.

This means specification precision will not become less valuable over time. It will become more valuable. The skill you have developed through this methodology — describing what people need to do with enough precision that a system can be built correctly — is a skill that compounds in value as the tools improve.

## Where You Are Now

You know how to write personas that constrain design. You know how to write outcomes that specify behaviour. You know how to map contracts that reveal dependencies. You know how to verify that what was built matches what was needed. You know how to manage change through contract dependencies. You know how to use the swarm for parallel builds.

None of this required you to learn a programming language, understand a framework, or configure a server. All of it required you to be precise about your domain.

## What Comes Next

Build. Verify. Confirm. Repeat. The methodology does not change as your project grows. The outcomes become more numerous, the contract map becomes more detailed, the phases become more layered — but the rhythm stays the same. You specify. The AI builds. You verify. The system grows.

Your domain knowledge is the specification. ODD Studio is the tool. The rest is practice.

# Chapter 2: The Right Division of Labour

Technical decisions have domain consequences. Collaborate on them — understand the options and tradeoffs, decide with that understanding, record the reason in CLAUDE.md. Don't delegate blindly and don't try to make them alone.

You are not a developer. But you are the person who will live with the consequences of every technical choice. The right division of labour is not "you do the domain, the AI does the tech." It is: the AI presents options with tradeoffs explained in domain terms, and you decide.

## Key Principles

- **Technical decisions have domain consequences.** Choosing a database is not a technical detail — it determines how fast your users see their data, how much you pay per month, and what happens when a thousand people log in at once. You need to understand the consequences, not the implementation.

- **Collaboration on technical decisions produces better outcomes than delegation.** When you say "just pick whatever works," you lose the ability to understand why something was chosen. When it causes a problem later, you cannot reason about it. Ask for options. Ask what changes for your users under each option. Then decide.

- **Record decisions in CLAUDE.md.** Every technical decision should be written down with the reason it was made. Not the technical justification — the domain reason. "We chose X because our users need Y and X supports that better than Z." Future sessions read CLAUDE.md. If the reason is not there, the decision may be silently reversed.

15
+ ## Red Flags
16
+
17
+ - Accepting a technical choice without understanding why. If the AI recommends something and you cannot explain the reason to a colleague in plain language, you do not yet understand the decision. Ask again.
18
+
19
+ - Making technical decisions without asking for options. You might have strong instincts about what your system should do. But instinct without options is guessing. Ask the AI to present two or three approaches with their tradeoffs described in terms of what your users will experience.
20
+
21
+ - Decisions made but not recorded. A decision that exists only in the conversation is a decision that will be forgotten. ODD Studio reads CLAUDE.md at the start of every session. If a decision is not there, it does not exist.
22
+
23
+ ## What This Means for You
24
+
25
+ When a technical question comes up during a build, do not wave it through and do not try to answer it from first principles. Ask the AI to explain the options in terms you understand — what changes for your users, what costs more, what limits you later. Decide. Record the decision and the reason.
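What might a recorded decision look like? A hedged sketch of a CLAUDE.md entry — the project, the options, and the wording here are invented for illustration, and the exact format your project uses may differ:

```markdown
## Decisions

### Database: hosted Postgres
- Options considered: hosted Postgres, SQLite on the server, a document store.
- Chosen: hosted Postgres.
- Domain reason: venue managers pull booking reports while customers are
  still booking. Both need to see consistent numbers at the same moment,
  and of the options presented, Postgres supported that best.
```

Note that the entry leads with the domain reason, not the technical one — a future session can reverse "we liked Postgres," but not "venue managers need consistent numbers."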
26
+
27
+ Next: Chapter 3 draws the line between features and outcomes — the difference between a category and a specification.
@@ -0,0 +1,27 @@
1
+ # Chapter 3: Features Aren't Enough
2
+
3
+ A feature describes a capability. An outcome describes a result. "Event booking" is a feature. "A returning customer books a ticket for a sold-out event and receives a confirmation with a calendar invitation" is an outcome. The difference is the difference between something that sounds right and something that can be built.
4
+
5
+ ## Key Principles
6
+
7
+ - **A feature describes a capability. An outcome describes a result.** Features are categories — useful for a slide deck, useless for a build. An outcome names a specific person, a specific trigger, a specific sequence of steps, and a specific result. That is what the AI needs to build from.
8
+
9
+ - **Features hide ambiguity. Outcomes expose it.** "Manage events" could mean a dozen different things. When you write the outcome — the specific walkthrough of what happens step by step — the ambiguity becomes visible. And visible ambiguity can be resolved before the build, not after.
10
+
11
+ - **The walkthrough is the specification.** The moment you describe what happens at each step, you are specifying behaviour. If you cannot walk through it step by step, you do not yet know what you want. That is not a criticism — it is a signal that more thinking is needed.
12
+
13
+ ## Red Flags
14
+
15
+ - Specifications that could mean several different things. If two people could read your specification and picture different systems, it is a feature, not an outcome. Rewrite it with a specific persona, a specific trigger, and a step-by-step walkthrough.
16
+
17
+ - "Manage events" with no walkthrough. Any specification that uses a verb like "manage," "handle," or "process" without describing the steps is a feature wearing an outcome's clothes. What does the person actually do? What do they see? What happens next?
18
+
19
+ - Jumping to build from a feature list. If your specification document reads like a list of capabilities — "user authentication, event management, payment processing" — you have a feature list. Each of those contains multiple outcomes, each of which needs its own specification.
20
+
21
+ ## What This Means for You
22
+
23
+ Go through your current project plan. For every item that reads like a feature — a capability described in two or three words — ask: what does a specific person actually do, step by step, and what is the result? Write that down. That is the outcome. The feature was just the heading.
24
+
25
+ You do not need to get the outcomes perfect yet. Chapter 4 introduces the six-field format that makes outcomes precise enough to build from. For now, the shift is: stop thinking in features, start thinking in walkthroughs.
26
+
27
+ Next: Chapter 4 gives you the exact format for writing outcomes that eliminate ambiguity.
@@ -0,0 +1,37 @@
1
+ # Chapter 4: Outcomes Should Be Specific
2
+
3
+ The six-field outcome format — persona, trigger, walkthrough, verification, contracts exposed, contracts consumed — eliminates the gaps that cause build failures. Every field is required. A missing field is an ambiguity that will become a bug.
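Laid out in full, an outcome might look like the sketch below. The persona, event, and contract names are invented for illustration, and ODD Studio's exact file layout may differ — what matters is that all six fields are present:

```markdown
## Outcome: Returning customer joins a sold-out event's waitlist
- **Persona:** Sarah — returning customer, on her phone, three minutes
  between meetings.
- **Trigger:** Sarah opens an event page that shows "Sold out".
- **Walkthrough:** Sarah taps "Join waitlist", confirms her saved contact
  details, and sees a message telling her position on the list.
- **Verification:** As Sarah, join the waitlist and confirm the position
  shown equals the number of people already waiting plus one.
- **Contracts exposed:** a waitlist entry (customer name, event, position).
- **Contracts consumed:** the event listing (name, date, sold-out status).
```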
4
+
5
+ ## Key Principles
6
+
7
+ - **An outcome is the unit of specification.** Not a user story. Not a feature. An outcome: a six-field structure that names who does what, when, how, and how you confirm it worked. This is what ODD Studio builds from.
8
+
9
+ - **Automated tests confirm the code does what it was told. Verification confirms it was told to do the right thing.** Tests check that a function returns the correct value. Verification checks that the function should have been written in the first place — that the behaviour is what the persona actually needs.
10
+
11
+ - **Every field carries weight.** The persona anchors the outcome to a real use case. The trigger defines when it starts. The walkthrough specifies the behaviour. The verification tells you how to check it. The contracts map dependencies. Skip a field and you create a gap the AI will fill with a guess.
12
+
13
+ ## The Four Quality Traps
14
+
15
+ - **Vagueness.** "The user sees their dashboard" — what is on the dashboard? What numbers? What state? Specify the content.
16
+
17
+ - **Technical language.** "The API returns a 201 with the event payload" — the domain expert cannot verify this. Rewrite in terms of what the persona sees and does.
18
+
19
+ - **Happy path only.** The outcome describes what happens when everything goes right. What happens when the event is sold out? When the payment fails? When the network drops? Failure paths are outcomes too.
20
+
21
+ - **Kitchen sink.** An outcome that tries to cover too much — booking, payment, confirmation, and review in one walkthrough. Split it. Each outcome should describe one coherent sequence.
22
+
23
+ ## Red Flags
24
+
25
+ - An outcome with no verification field. If you cannot describe how to check that it worked, you do not yet know what "worked" means.
26
+
27
+ - A walkthrough written in technical terms. If the walkthrough mentions databases, endpoints, or status codes, it is not written for the domain expert. Rewrite it in terms of what the persona experiences.
28
+
29
+ - An outcome with no failure path. Every outcome that can fail should have a companion outcome describing what happens when it does.
30
+
31
+ ## What This Means for You
32
+
33
+ Take one outcome from your project and run it through the six fields. Is the persona specific? Is the trigger clear? Can you follow the walkthrough as the persona and know at every step what you should see? Can you verify the result without asking a developer? Do you know what data it produces and what data it needs?
34
+
35
+ If any field is missing or vague, fill it in now. That gap is exactly where the build would go wrong.
36
+
37
+ Next: Chapter 5 explains why personas are not demographics — they are design constraints.
@@ -0,0 +1,29 @@
1
+ # Chapter 5: Personas Are Load-Bearing
2
+
3
+ A persona is a design constraint, not a description. A seven-dimension portrait of a specific person in a specific situation anchors every outcome to a real use case. Without it, outcomes float free — plausible but untethered from reality.
4
+
5
+ ## Key Principles
6
+
7
+ - **A persona is a design constraint, not a description.** "Sarah, 34, marketing manager" is a description. It tells you nothing useful. A design constraint tells you: Sarah is on her phone between meetings, has three minutes, needs to confirm an event booking, and will abandon the process if it requires more than two screens. That changes what you build.
8
+
9
+ - **Seven dimensions make a persona load-bearing.** Name and role. Situation and context. Goal and urgency. Technical comfort. Constraints (time, device, connectivity). Prior experience with similar systems. What failure looks like for this person. Each dimension eliminates a class of assumptions.
10
+
11
+ - **Every outcome belongs to a persona.** An outcome written for "users" is an outcome written for no one. When you specify that Sarah — on her phone, in a hurry, moderately technical — needs to complete a booking, every design decision is constrained by her situation. The walkthrough must work for her specifically.
12
+
13
+ - **Different personas produce different outcomes.** The same capability — "book an event" — produces different outcomes for different personas. A first-time visitor needs more guidance. A returning customer needs fewer steps. An administrator needs bulk actions. One feature, multiple personas, multiple outcomes.
14
+
15
+ ## Red Flags
16
+
17
+ - Personas described as demographics. Age, gender, job title, and location are not design constraints. They tell you nothing about how the person will interact with your system. Add situation, urgency, constraints, and failure conditions.
18
+
19
+ - Outcomes written for "users" rather than a specific persona. Every time you see "the user" in an outcome, replace it with a named persona. If you cannot decide which persona this outcome serves, the outcome may be trying to serve everyone and serving no one.
20
+
21
+ - All outcomes sharing one persona. If every outcome is written for the same persona, you are missing perspectives. Who else uses this system? What about the person who uses it rarely? The person who uses it under pressure? The person who makes mistakes?
22
+
23
+ ## What This Means for You
24
+
25
+ Review your personas. For each one, check: do you know their situation, their constraints, their urgency, and what failure looks like for them? If your persona reads like a LinkedIn profile, it is not yet a design constraint.
26
+
27
+ Then check your outcomes. Is every outcome anchored to a specific persona? When you read the walkthrough, can you picture that specific person in that specific situation following those steps?
28
+
29
+ Next: Chapter 6 introduces contracts — the data that flows between outcomes — and why mapping them prevents the most common build failures.
@@ -0,0 +1,31 @@
1
+ # Chapter 6: Every Outcome Has a Contract
2
+
3
+ Every outcome produces something and consumes something. Mapping these contracts before building eliminates the "two architects, one door" problem — two outcomes making conflicting assumptions about shared data.
4
+
5
+ Think of a contract as a handshake between outcomes. One outcome creates a booking record. Another outcome displays booking details. If the first outcome does not produce the information the second outcome needs, the system breaks — not because of a code error, but because the specification had a gap.
6
+
7
+ ## Key Principles
8
+
9
+ - **Every outcome has a contract.** It consumes data from somewhere and produces data for something else. Making this explicit before the build starts means dependencies are visible, not hidden.
10
+
11
+ - **The contract map is the system's skeleton.** When you draw out which outcomes produce which contracts and which outcomes consume them, you see the structure of your system. You see which outcomes must be built first. You see where two outcomes might conflict. You see what is missing.
12
+
13
+ - **Contract conflicts are specification problems, not technical problems.** If two outcomes assume different shapes for the same data — one expects a booking to include a seat number, the other does not — that is a specification gap. Resolve it before building, not during debugging.
14
+
15
+ - **Contracts are described in domain language.** A contract is not a database schema. It is "a booking record containing the customer name, event date, number of tickets, and total price." Describe what information flows between outcomes in the language you use to talk about your business.
16
+
17
+ ## Red Flags
18
+
19
+ - An outcome that consumes nothing. Where does the data come from? If an outcome displays booking details but no other outcome produces booking details, something is missing from your specification.
20
+
21
+ - An outcome that produces nothing. Where does the data go? If an outcome collects customer feedback but nothing else in the system uses that feedback, either the feedback outcome is unnecessary or you are missing a downstream outcome.
22
+
23
+ - Two outcomes producing the same contract with different assumptions. If "create booking" and "import bookings" both produce booking records but assume different fields, the consuming outcomes cannot rely on either. Align the contracts.
24
+
25
+ - Contracts described in technical terms. If your contract mentions "JSON payload," "foreign key," or "API response," rewrite it. Describe the information in domain language: what does this data represent to the people who use your system?
26
+
27
+ ## What This Means for You
28
+
29
+ For each outcome in your current specification, write down: what data does this outcome need to exist before it runs? What data does this outcome create or change? Then draw the connections. Where one outcome's produced contract matches another outcome's consumed contract, you have a dependency. Where there is a mismatch, you have a specification gap to resolve.
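The exercise above is mechanical enough to sketch in a few lines. This is not part of ODD Studio — the outcome and contract names are invented — but it shows what "draw the connections" means in practice:

```python
# Each outcome lists the contracts it consumes and produces.
# Names are illustrative, not ODD Studio's own.
outcomes = {
    "create_booking":    {"consumes": ["event_listing"], "produces": ["booking_record"]},
    "show_confirmation": {"consumes": ["booking_record"], "produces": []},
    "collect_feedback":  {"consumes": ["booking_record"], "produces": ["feedback_entry"]},
}

produced = {c for o in outcomes.values() for c in o["produces"]}
consumed = {c for o in outcomes.values() for c in o["consumes"]}

# Consumed but never produced: where does the data come from?
missing_producers = consumed - produced
# Produced but never consumed: where does the data go?
unused_products = produced - consumed

print("no producer:", sorted(missing_producers))  # ['event_listing']
print("no consumer:", sorted(unused_products))    # ['feedback_entry']
```

Both sets flag the red flags described above: nothing here produces an event listing, and nothing consumes the feedback — each is a specification gap to resolve before building.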
30
+
31
+ Next: Chapter 7 explains why writing an outcome once is thinking, and writing it twice is specifying.
@@ -0,0 +1,35 @@
1
+ # Chapter 7: Design the Outcome Twice
2
+
3
+ A first draft is thinking. A reviewed draft is a specification. Write it once to get the idea down. Review it against the four quality traps to catch what was missed. This two-pass approach is the difference between outcomes that feel right and outcomes that build correctly.
4
+
5
+ The first time you write an outcome, you are discovering what you mean. That is valuable work — but it is not yet a specification. It becomes a specification when you review it with fresh eyes and catch the assumptions you made without noticing.
6
+
7
+ ## Key Principles
8
+
9
+ - **A first draft is thinking. A reviewed draft is a specification.** The first pass captures your intent. The second pass tests it against reality. Both are necessary. Skipping the review is the single most common cause of build failures that look like "the AI built the wrong thing."
10
+
11
+ - **The four quality traps are your review checklist.** On the second pass, check each outcome for vagueness, technical language, happy-path-only thinking, and kitchen-sink scope. These four traps catch the majority of specification problems before they become build problems.
12
+
13
+ - **Review as the persona, not as yourself.** When you review the walkthrough, step into the persona's situation. Are you on a phone with three minutes? Are you a first-time user who has never seen this interface? Does the walkthrough still make sense from their perspective?
14
+
15
+ ## The Four Quality Traps (Review Checklist)
16
+
17
+ - **Vagueness.** Does every step describe something specific and observable? "The user sees relevant information" is vague. "Sarah sees her upcoming bookings sorted by date, with the event name, date, and ticket count for each" is specific.
18
+
19
+ - **Technical language.** Can the persona follow every step without technical knowledge? If a step mentions APIs, databases, or status codes, rewrite it in terms of what the persona sees and does.
20
+
21
+ - **Happy path only.** What happens when something goes wrong? If the outcome only describes the success case, add failure paths. What if the event is full? What if the payment is declined? Each significant failure is its own outcome.
22
+
23
+ - **Kitchen sink.** Does the outcome try to do too much? If the walkthrough covers more than one coherent action, split it into separate outcomes connected by contracts.
24
+
25
+ ## Red Flags
26
+
27
+ - Outcomes that have never been reviewed. If you wrote it once and moved on, it is a first draft. Schedule the second pass.
28
+
29
+ - Reviewing your own outcomes immediately after writing them. You will read what you meant, not what you wrote. Review them the next day, or ask someone else to read them.
30
+
31
+ ## What This Means for You
32
+
33
+ Pick three outcomes from your project. Read each one as if you are the persona, in their situation, with their constraints. Run each through the four quality traps. You will find gaps. Fill them. That is the work.
34
+
35
+ Next: Chapter 8 shows how the contract map determines your build order — phases derived from structure, not guesswork.
@@ -0,0 +1,31 @@
1
+ # Chapter 8: The Master Implementation Plan
2
+
3
+ The phase structure is not a project management decision. It is a consequence of the system's structure — derived from the contract map. Phase A contains outcomes with no dependencies. Each subsequent phase builds on the last.
4
+
5
+ You do not decide the build order. The contract map decides it. An outcome that consumes a contract cannot be built before the outcome that produces it. Follow that order and the build proceeds smoothly. Ignore it and outcomes fail because the data they need does not exist yet.
6
+
7
+ ## Key Principles
8
+
9
+ - **The phase structure is not a project management decision. It is a consequence of the system's structure.** Look at your contract map. Outcomes that consume nothing from other outcomes go in Phase A. Outcomes that consume only Phase A contracts go in Phase B. Continue until every outcome has a phase. That is your build order.
10
+
11
+ - **A plan that changes is working. A plan that never changes was never consulted.** As you build and verify outcomes, you will discover specification gaps, new edge cases, and better approaches. The plan should update to reflect these discoveries. A static plan is a sign that no one is learning from the build.
12
+
13
+ - **Phases enable parallel work within them.** Outcomes in the same phase have no dependencies on each other — they can be built simultaneously. This is why the swarm (Chapter 13) works: it builds all outcomes in a phase at once.
14
+
15
+ - **Phase boundaries are verification checkpoints.** Before moving to the next phase, every outcome in the current phase should be verified. The next phase depends on the contracts the current phase produces. If those contracts are wrong, everything downstream will be wrong too.
16
+
17
+ ## Red Flags
18
+
19
+ - Phase A with more than five outcomes. If your first phase has too many outcomes, some of them likely have hidden dependencies. Review the contract map — are any Phase A outcomes consuming contracts from other Phase A outcomes?
20
+
21
+ - Phases that share no contract connections. If Phase B outcomes do not consume any contracts from Phase A, why are they in Phase B? Either the phasing is wrong or the contract map is incomplete.
22
+
23
+ - A build order based on priority rather than dependencies. "We want payments first" sounds reasonable but ignores structure. If payments consume a booking contract, bookings must be built first regardless of priority.
24
+
25
+ - Skipping verification between phases. Moving to Phase B with unverified Phase A outcomes means building on unconfirmed foundations. Verify before advancing.
26
+
27
+ ## What This Means for You
28
+
29
+ If you have a contract map, derive your phases from it now. Group outcomes with no consumed contracts into Phase A. Layer the rest by dependency. If you do not have a contract map yet, go back to Chapter 6 — you need it before you can plan.
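The derivation is mechanical: repeatedly peel off the outcomes whose consumed contracts are already available. A minimal sketch with invented outcome names — not part of ODD Studio, just the layering rule made explicit:

```python
# Derive phases from a contract map: Phase A consumes nothing,
# each later phase consumes only contracts earlier phases produce.
outcomes = {
    "list_events":       {"consumes": [], "produces": ["event_listing"]},
    "create_booking":    {"consumes": ["event_listing"], "produces": ["booking_record"]},
    "take_payment":      {"consumes": ["booking_record"], "produces": ["payment_receipt"]},
    "send_confirmation": {"consumes": ["booking_record", "payment_receipt"], "produces": []},
}

phases, available, remaining = [], set(), dict(outcomes)
while remaining:
    ready = sorted(name for name, o in remaining.items()
                   if set(o["consumes"]) <= available)
    if not ready:  # a cycle, or a contract nobody produces: a spec gap
        raise ValueError(f"unbuildable outcomes: {sorted(remaining)}")
    phases.append(ready)
    for name in ready:
        available.update(remaining.pop(name)["produces"])

print(phases)
# [['list_events'], ['create_booking'], ['take_payment'], ['send_confirmation']]
```

Note that payments land in a later phase than bookings no matter how high their business priority — exactly the point made in the red flags above.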
30
+
31
+ Next: Chapter 9 covers the practical setup — `npx odd-studio init` and what it gives you.