@itsflower/cli 0.1.4 → 0.1.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
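For orientation before reading the raw hunks: the dependency portion of the change amounts to `@clack/prompts` being added, `citty` being bumped, and `tsup` being dropped. A minimal sketch (not part of the package, names chosen for illustration) of computing such a delta from two versions' `dependencies` maps:

```python
# Sketch: diff the dependency maps of two package manifests.
# The dicts below are transcribed from the package.json hunk in this diff;
# in practice they would come from parsing each published version's manifest.
def dep_delta(old: dict, new: dict) -> dict:
    """Return added, removed, and changed dependency specs."""
    return {
        "added": {k: new[k] for k in new.keys() - old.keys()},
        "removed": {k: old[k] for k in old.keys() - new.keys()},
        "changed": {k: (old[k], new[k]) for k in old.keys() & new.keys() if old[k] != new[k]},
    }

old_deps = {"citty": "^0.2.1"}
new_deps = {"@clack/prompts": "^1.2.0", "citty": "^0.2.2"}
print(dep_delta(old_deps, new_deps))
```

The full diff itself should be reproducible locally with `npm diff --diff=@itsflower/cli@0.1.4 --diff=@itsflower/cli@0.1.5` (npm 7 or later).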
package/package.json CHANGED
@@ -1,43 +1,32 @@
 {
   "name": "@itsflower/cli",
-  "version": "0.1.4",
-  "description": "🌸 flower CLI — scaffold structured development workflows",
-  "keywords": [
-    "cli",
-    "flower",
-    "templates",
-    "workflow"
-  ],
-  "license": "MIT",
-  "repository": {
-    "type": "git",
-    "url": "https://github.com/itsflower/flower"
-  },
+  "version": "0.1.5",
   "bin": {
-    "flower": "./dist/main.js"
+    "flower": "./dist/main.ts"
   },
   "files": [
-    "dist",
-    "templates",
-    "skills",
-    "commands"
+    "dist"
   ],
   "type": "module",
-  "publishConfig": {
-    "access": "public"
+  "main": "./dist/main.ts",
+  "types": "./dist/main.d.ts",
+  "exports": {
+    ".": {
+      "types": "./dist/main.d.ts",
+      "import": "./dist/main.ts"
+    }
   },
   "scripts": {
-    "build": "tsup src/main.ts --format esm --shims",
-    "release": "bun run ../../scripts/release.ts",
-    "prepack": "npm run build && rm -f templates && cp -r ../../core/templates templates && rm -rf skills && cp -r ../../core/skills skills && rm -rf commands && cp -r ../../core/commands commands",
-    "postpack": "rm -rf templates && ln -s ../../core/templates templates && rm -rf skills commands"
+    "build": "bun build ./src/main.ts --outdir ./dist --target node",
+    "dev": "bun run ./src/main.ts",
+    "typecheck": "tsc --noEmit"
   },
   "dependencies": {
-    "citty": "^0.2.1"
+    "@clack/prompts": "^1.2.0",
+    "citty": "^0.2.2"
   },
   "devDependencies": {
-    "@types/bun": "latest",
-    "tsup": "^8.5.1",
-    "typescript": "^5.7.0"
+    "@types/bun": "^1.3.12",
+    "typescript": "^6.0.2"
   }
 }
package/LICENSE DELETED
@@ -1,21 +0,0 @@
- MIT License
-
- Copyright (c) 2025 itsflower
-
- Permission is hereby granted, free of charge, to any person obtaining a copy
- of this software and associated documentation files (the "Software"), to deal
- in the Software without restriction, including without limitation the rights
- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
- copies of the Software, and to permit persons to whom the Software is
- furnished to do so, subject to the following conditions:
-
- The above copyright notice and this permission notice shall be included in all
- copies or substantial portions of the Software.
-
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- SOFTWARE.
package/README.md DELETED
@@ -1,28 +0,0 @@
- # @itsflower/cli
-
- 🌸 Scaffold structured development workflows in your project.
-
- ## Usage
-
- ```bash
- # With npx (Node.js)
- npx @itsflower/cli init
-
- # With bunx (Bun)
- bunx @itsflower/cli init
- ```
-
- ## Commands
-
- ### `flower init`
-
- Initializes flower in your project by copying workflow templates to `.flower/templates/`:
-
- - `requirement.md` — Define the problem, scope, and acceptance criteria
- - `plan.md` — Break down work into ordered tasks with ACs
- - `journal.md` — Log decisions and discoveries during implementation
- - `review.md` — Post-implementation quality checklist and learnings
-
- ## License
-
- MIT
@@ -1,5 +0,0 @@
- # flower workflow
-
- Start the `flower` workflow for the following request:
-
- > $DESCRIPTION
@@ -1,40 +0,0 @@
- ---
- name: flower
- description: 4-phase structured development workflow. Use only when user mentions "flower workflow"
- ---
-
- # flower workflow
-
- ## Workflow
-
- ```mermaid
- flowchart TD
-     Start([User input]) --> F1[1. Clarify]
-     F1 --> F2[2. Plan]
-     F2 -- not feasible --> E1([End - rejected])
-     F2 -- feasible --> F3[3. Build]
-     F3 --> F4[4. Review]
-     F4 --> E2([End - completed])
- ```
-
- ## Phases
-
- | # | Phase | Reference |
- | --- | ------- | ---------------------------------------------- |
- | 1 | Clarify | [references/clarify.md](references/clarify.md) |
- | 2 | Plan | [references/plan.md](references/plan.md) |
- | 3 | Build | [references/build.md](references/build.md) |
- | 4 | Review | [references/review.md](references/review.md) |
-
- ## Document Convention
-
- Create a folder `.flower/quests/<datetime>--<short-description>/` when starting a new quest.
-
- - `<datetime>` uses `YYMMDD-HHmm` format — run `uv run scripts/current_datetime.py` to generate it
- - `<short-description>` is a kebab-case summary generated by the agent
- - Folder contains `requirement.md`, `plan.md`, `journal.md`, and `review.md` based on templates in `.flower/templates/`.
-
- ## Rules
-
- - When a phase begins, print `Phase <number>:<name>` for the user to easily follow
- - Follow the workflow steps strictly. Do not skip or reorder steps.
@@ -1,96 +0,0 @@
- # Phase 3: Build
-
- Execute tasks from `plan.md` in order. Test each one. Mark it done. Repeat.
-
- ## Workflow
-
- ```mermaid
- flowchart TD
-     Start[plan.md] --> B1[Execute next task]
-     B1 --> B2[Test / Verify]
-     B2 --> C1{Pass?}
-     C1 -- no --> B3[Fix issue]
-     C1 -- yes --> B4[Mark task done ✓]
-     B3 --> B2
-     B4 --> B5[Log if noteworthy]
-     B5 --> C2{All tasks done?}
-     C2 -- no --> B1
-     C2 -- yes --> B6[Mark plan completed]
-     B6 --> End([Phase 4: Review])
- ```
-
- ## Input
-
- - `requirement.md`
- - `plan.md` with `status: in-progress`
-
- ## Steps
-
- ### 1. Execute task
-
- Read the task's Description, AC, and Approach. Investigate relevant code. Implement. Stay in scope.
-
- | Complexity | Investigation | Implementation | Testing |
- | ------------ | ------------------- | ----------------------- | ------------------------ |
- | **Trivial** | Quick scan | Direct change | Spot check |
- | **Standard** | Read related files | Follow approach in plan | Related test suite |
- | **Complex** | Deep codebase study | Incremental changes | Full test suite + manual |
-
- ### 2. Test
-
- Run relevant tests. Verify every AC passes. Fix failures before moving on. If unrelated tests were already failing, note in journal and leave them.
-
- ### 3. Mark task done — do not skip
-
- **Immediately after tests pass**, check the checkbox in `plan.md`:
-
- ```
- - [ ] Task title → - [x] Task title
- ```
-
- This tracks resumable state. A task is not done until checked.
-
- ### 4. Log if noteworthy
-
- Open `.flower/journal.md` (create from `templates/journal.md` on first use).
-
- Log only when the plan was **not** followed exactly:
-
- - Deviated from the planned approach
- - Made a non-obvious decision
- - Hit a problem and resolved it
- - Discovered something reusable
- - Added a new task
-
- Entry format:
-
- ```markdown
- ### [Short title]
- - **tags**: [keywords]
- - **scope**: [global | project:<name>]
- - **context**: [what you were doing]
- - **insight**: [why — the decision, deviation, or discovery]
- ```
-
- Skip if the plan was followed exactly with no surprises.
-
- ### 5. Mark plan as completed
-
- After all tasks are checked, verify:
-
- - [ ] All checkboxes checked
- - [ ] All ACs verified
- - [ ] All tests pass
- - [ ] No unintended side effects
- - [ ] Deviations captured in journal
-
- Then set `status: completed` in plan frontmatter.
-
- ## Rules
-
- - Never skip testing
- - Never skip marking done — it tracks where to resume
- - One task at a time; don't expand scope mid-task
- - New work = new task in the plan, not an expansion of the current one
- - Follow plan order; skip ahead only when blocked
- - Don't fix unrelated issues — note them for a future quest
@@ -1,84 +0,0 @@
- # Phase 1: Clarify
-
- Clarify user requirements, clearly describing WHAT and WHY, **not** HOW.
-
- ## Workflow
-
- ```mermaid
- flowchart TD
-     Start([User input]) --> C1[Analyze]
-     C1 --> C2[Research]
-     C2 --> C3{Ambiguities resolved?}
-     C3 -- No --> C4[Ask for clarification]
-     C4 --> C1
-     C3 -- Yes --> C5[Create `requirement.md`]
-     C5 --> End([Proceed to Phase 2: Plan])
- ```
-
- For each step, follow the instructions below **strictly**
-
- ## Steps
-
- ### Analyze
-
- Read user input word by word. Extract:
-
- - **Codebase keywords** → file names, modules, functions, endpoints, components mentioned — feed into codebase exploration
- - **Technology keywords** → libraries, frameworks, languages, tools, APIs mentioned — feed into documentation lookup
- - **Intent** → feature, bug fix, refactor, or enhancement
- - **Constraints** → explicit or implicit limitations
- - **Ambiguities** → anything with multiple interpretations — feed into clarification questions
-
- ### Research
-
- Use extracted keywords to investigate before asking:
-
- - **Codebase** → grep, glob, read files, `repomix` — understand existing code, patterns, architecture
- - **Technology** → use MCPs (`context7`, ...), fetch — understand relevant docs for mentioned technologies
- - **External** → look up APIs, standards, or domain concepts the human referenced
-
- ### Ask for clarification
-
- Only if ambiguities remain after research:
-
- | Rule | Detail |
- | --------------- | ------------------------------------------------------ |
- | Provide options | Prefer choices, not open questions. |
- | Batch questions | Group related questions. Max 5 per round. |
- | Be specific | Reference concrete code, files, or behaviors. |
- | Easy to answer | Human answers in few words or picks option. |
- | Show research | Share what you found to give context to your question. |
-
- After each response, restart: analyze → research based on new info → evaluate remaining ambiguities, gaps.
-
- ### Create `requirement.md`
-
- Template in `.flower/templates/requirement.md`.
-
- | Section | Content |
- | --------------------------- | ------------------------------------------------------------------- |
- | Problem | Who is affected, when, what goes wrong. 2–5 bullets. No solutions. |
- | User Stories | "As a [user], I want [action] so that [benefit]" |
- | Goals | Verifiable outcomes — each yes/no checkable |
- | Non-Goals | Explicitly excluded — anything someone might assume is in scope |
- | Acceptance Criteria | Given/When/Then — pass/fail testable, no subjective judgment |
- | Constraints & Prerequisites | Hard limits (unchangeable) + external requirements for this feature |
- | Glossary | Domain-specific terms only. Skip if self-explanatory. |
-
- ## Validate
-
- - [ ] Every goal is concrete and verifiable
- - [ ] Every goal has at least one acceptance criterion
- - [ ] Scope boundaries are unambiguous
- - [ ] All ambiguities from Q&A are resolved in the document — none deferred
- - [ ] Constraints are realistic and compatible with each other
-
- ## Rules
-
- - **Read before asking** — research first, ask only what you can't answer
- - **Every word matters** — read input thoroughly, don't skip anything
- - **Options over open-ended** — always prefer giving choices when asking
- - **Problem ≠ solution** — describe what's wrong, not how to fix it
- - **Non-Goals prevent scope creep** — put effort here
- - **Self-contained but concise** — a reader must understand the problem and success criteria without reading the codebase
- - **No assumptions** — ambiguities must be resolved through research or Q&A, never assumed
@@ -1,104 +0,0 @@
- # Phase 2: Plan
-
- Research HOW to implement `requirement.md` — explore codebase, study docs, decide approach, produce `plan.md`.
-
- ## Workflow
-
- Execute steps in order. Do not skip. Do not create `plan.md` until the condition is met.
-
- ```mermaid
- flowchart TD
-     Start([requirement.md]) --> P1[Codebase explore]
-     P1 --> P2[Research external]
-     P2 --> C1{Unclear / incomplete / multiple approaches?}
-     C1 -- yes --> P3[Ask for clarification]
-     P3 --> P1
-     C1 -- no --> P4[Create plan.md]
-     P4 --> End([Proceed to Phase 3: Build])
- ```
-
- ## Steps
-
- ### Codebase explore
-
- Search with goals and acceptance criteria from `requirement.md` as targets:
-
- - **Architecture** — modules, layers, data flow relevant to the requirement
- - **Patterns** — how similar features are built; conventions to follow
- - **Reuse** — what exists vs. what must be built
-
- Tools: grep, glob, read files, `repomix`.
-
- ### Research external
-
- For every technology, library, or pattern in scope:
-
- - **Docs** — capabilities, limitations, API surface
- - **Best practices** — recommended approaches, common pitfalls
- - **Alternatives** — other libraries or patterns solving the same problem
-
- Tools: `context7`, fetch, web search.
-
- ### Ask for clarification
-
- Triggered when the approach is unclear, incomplete, or has multiple valid paths. **Do not decide alone.**
-
- After receiving answers, loop back to **Codebase explore** to validate the clarified approach before proceeding.
-
- | Trigger | What to present |
- | ------------------------- | -------------------------------------- |
- | Multiple valid approaches | Each option with pros, cons, tradeoffs |
- | New library or dependency | What it does, why needed, alternatives |
- | Significant tradeoff | What is gained vs. given up |
-
- | Rule | Detail |
- | --------------- | ---------------------------------------------------- |
- | Provide options | Offer choices with tradeoffs, not open-ended queries |
- | Show research | Share findings to give context |
- | Batch questions | Group related decisions. Max ~5 per round |
- | Be specific | Reference concrete code, libraries, or behaviors |
-
- ### Create `plan.md`
-
- Only proceed here when the approach is fully decided, covers 100% of requirements, and has a single clear path.
-
- Template in `.flower/templates/plan.md`. Set `status: in-progress`.
-
- | Section | Content |
- | ------------------- | ------------------------------------------------------------------- |
- | Overview | 2-3 sentences: what is being built, approach, key technical choices |
- | Technical Decisions | Non-obvious decisions. WHAT, WHY, alternatives considered |
- | Tasks | Ordered by dependency. One logical change per task |
- | Dependencies | Internal and external. Skip if none |
- | Risks & Mitigation | Only risks that would change the plan. Skip if none |
-
- Each task must have:
-
- | Field | Required | Content |
- | ----------- | -------- | -------------------------------------------------- |
- | Description | Yes | Clear imperative statement |
- | AC | Yes | Pass/fail verifiable criteria |
- | Approach | Yes | Concrete steps, files to touch, patterns to follow |
- | Blocked by | No | Task dependency |
-
- If no viable approach exists → set `status: rejected`, fill `## Rejection Reason`. End workflow.
-
- ## Validate
-
- - [ ] Every requirement goal maps to at least one task
- - [ ] Every acceptance criterion is covered by a task's AC
- - [ ] Tasks ordered by dependency
- - [ ] Each task has AC and Approach
- - [ ] Technical decisions grounded in codebase research
- - [ ] No compound tasks — split any "X and Y"
-
- ## Rules
-
- - **Follow the workflow** — never skip steps or reorder them
- - **Research before deciding** — explore codebase and docs before forming an approach
- - **Loop on uncertainty** — if unclear after research, ask for clarification then re-explore; repeat until resolved
- - **Surface choices** — when multiple approaches exist, present them with tradeoffs; never pick silently
- - **New deps need approval** — never add a library to the plan without asking
- - **Decisions in the document** — every choice from Q&A must appear in Technical Decisions
- - **One task, one change** — if a description uses "and", split it
- - **Concrete over vague** — Approach must name files, functions, and patterns
@@ -1,94 +0,0 @@
- # Phase 4: Review
-
- Quality gate and delivery summary. Verify everything not covered by task-level testing: security, performance, project standards, and overall correctness.
-
- ## Workflow (**STRICTLY ENFORCED**)
-
- ```mermaid
- flowchart TD
-     Start([plan.md & journal.md]) --> S[Write delivery summary]
-     S --> CL[Run quality checklist]
-     CL --> CF{All checks pass?}
-     CF -- No --> Fix[Fix issues]
-     Fix --> CL
-     CF -- Yes --> Done([Mark review as completed])
- ```
-
- ## Input
-
- - `plan.md` with `status: completed` (all tasks checked)
- - `journal.md` (if entries exist)
- - Template: `.flower/templates/review.md`
-
- ## Steps
-
- 1. **Write delivery summary**
-
-    Create `review.md` from the template in the quest directory. Set `status: draft`.
-
-    **Scale guidance**: match depth to the quest's scope.
-
-    | Quest Size | Summary Length | Metrics | Outcome Detail |
-    | ---------- | -------------- | -------------------- | ----------------- |
-    | **Small** | 2-3 sentences | Skip if not measured | Brief outcome |
-    | **Medium** | 3-5 sentences | Key metrics only | Outcome + context |
-    | **Large** | 5-8 sentences | Full metrics | Detailed outcome |
-
-    Write the summary covering:
-    - **What was delivered** — concrete deliverables (features, fixes, refactors), not process steps
-    - **Overall outcome** — did it meet the requirement's goals? Any goals partially met or adjusted?
-    - **Key metrics** — lines changed, test coverage delta, performance improvement, or other measurable results (when available and meaningful)
-    - **Notable deviations** — significant differences from the original plan, if any (reference journal for details)
-
-    The summary must be **self-contained** — someone reading only the review doc should understand what was delivered and whether it succeeded, without referring to the requirement, plan, or journal.
-
- 2. **Run quality checklist**
-
-    Go through each item systematically. For each: verify → fix if needed → check off.
-    - [ ] **Dead code & unused files removed** — check for commented-out code, unused imports, orphaned files created during implementation, and temporary debug code
-    - [ ] **Project standards followed** — verify code style, file structure, naming conventions, and linting rules match the rest of the codebase
-    - [ ] **No security issues** — scan for hardcoded secrets, SQL injection, XSS, auth bypass, exposed internal errors, and overly permissive permissions
-    - [ ] **Performance acceptable** — check for N+1 queries, unnecessary re-renders, unbounded loops, large payloads, missing pagination, and missing indexes
-    - [ ] **All tests pass** — run the full relevant test suite (not just per-task tests from Phase 2); verify no regressions were introduced
-    - [ ] **Documentation up to date** — README, API docs, inline docs, and config examples reflect the changes (skip if no user-facing docs exist)
-
-    If fixing an issue requires code changes:
-    1. Make the fix
-    2. Re-run the **full checklist** from the top (a fix can introduce new issues)
-    3. Repeat until all items pass cleanly in a single run
-
- 3. **Write memories**
-
-    Record knowledge gained during this quest for future retrieval. Each memory entry uses the structured format:
-
-    ```markdown
-    ### [Short actionable title — 5-12 words]
-
-    - **content**: [Detailed explanation with context and examples]
-    - **tags**: [comma-separated domain keywords]
-    - **scope**: [global | project:<name>]
-    ```
-
-    Skip if no meaningful knowledge was gained.
-
- 4. **Mark review as completed** → set `status: completed` in frontmatter
-
- ## Output
-
- - Completed `review.md` with `status: completed`
- - Summary of delivery shared with the human
-
- ## Rules
-
- - **Summary stands alone** — a reader with no context must understand what was delivered and whether it succeeded
- - **Never skip quality checks** — they catch issues that slip through during implementation; rushing here undermines the entire workflow
- - **Full re-run after fixes** — if quality checks reveal issues requiring code changes, re-run the entire checklist afterward; partial re-checks miss cascading problems
- - **Don't gold-plate** — quality checks fix real issues, they do not refactor working code or add enhancements beyond the quest's scope
- - **Reference, don't repeat** — the summary may reference the plan or journal for details, but must not require them to be understood
-
- ## Status Lifecycle
-
- | Status | Meaning |
- | ----------- | --------------------------------------- |
- | `draft` | Initial creation, summary being written |
- | `completed` | Quality checks passed, quest done |
@@ -1,3 +0,0 @@
- from datetime import datetime
-
- print(datetime.now().strftime("%y%m%d-%H%M"))
@@ -1,46 +0,0 @@
- <!-- Record knowledge and decisions during implementation. Write entries during work, not after.
- Each entry uses a structured format for future search and retrieval.
-
- Guidelines:
- - Write in plain language, not formal prose
- - Focus on WHY, not WHAT — the code shows what changed
- - Only record noteworthy events — if the plan was followed exactly, there is nothing to log
- - One entry per meaningful event — do not log routine changes
- -->
-
- ## Entries
-
- <!-- Each entry must use this structured format:
-
- ### [Short actionable title]
- - **tags**: [comma-separated domain keywords for search]
- - **scope**: [global | project:<name>]
- - **context**: [What was being worked on]
- - **insight**: [The decision, discovery, or deviation — focus on WHY]
-
- Example:
-
- ### Switched from LIKE queries to tsvector for product search
- - **tags**: postgresql, search, performance, indexing
- - **scope**: project:catalog
- - **context**: Implementing the search endpoint (Task 2 in plan). Initial ILIKE approach showed 650ms response on 100K rows.
- - **insight**: LIKE with leading wildcard forces sequential scans. tsvector + GIN index provides O(log n) lookup, bringing response time from 650ms to 12ms. Always use tsvector for full-text search at scale.
-
- ---
-
- ### Input validation prevents DoS via long search queries
- - **tags**: security, validation, performance
- - **scope**: global
- - **context**: Testing edge cases on the search endpoint. 10,000-char query caused 3s parse time.
- - **insight**: Unbounded input length is both a performance risk and a DoS vector. Added 1-200 char validation returning HTTP 400. Always validate input length for text-processing endpoints.
- -->
-
- ## Deviations from Plan
-
- <!-- Summarize significant differences between the plan and what was actually implemented. Fill this in at the end by pulling from entries above.
-
- Format:
- | Planned | Actual | Reason |
- | ------- | ------ | ------ |
- | [Original approach] | [What was done instead] | [Why the change] |
- -->