@itsflower/cli 0.1.4 → 0.1.5

package/templates/plan.md DELETED
@@ -1,93 +0,0 @@
- ---
- status: [in-progress | completed | rejected]
- ---
-
- ## 1. Overview
-
- <!-- Write 2-3 sentences stating what is being built and the general approach. Scope and direction must be clear from this section alone.
-
- Example: "Add full-text search to the product catalog API using PostgreSQL tsvector. A new search endpoint accepts keyword queries and returns products ranked by relevance. A GIN index on a generated tsvector column ensures response times stay under 200ms." -->
-
- ## 2. Technical Decisions
-
- <!-- List decisions that affect multiple tasks. State WHAT was decided and WHY. Only include decisions where the choice is non-obvious or where choosing differently would significantly change the plan.
-
- Format (simple — use when the choice is clear):
- - **[Decision]**: [Rationale]
-
- Format (complex — use when alternatives were seriously considered):
- - **[Decision]**: [Rationale]
- - Alternatives considered: [Rejected option → why rejected]
- - Tradeoff: [What is given up]
-
- Example:
- - **PostgreSQL tsvector instead of Elasticsearch**: Avoids new infrastructure dependency; tsvector is sufficient for catalogs under 500K items and keeps the stack simple.
- - Alternatives considered: Elasticsearch → requires new cluster, adds ops complexity not needed at current scale
- - Tradeoff: No built-in typo tolerance or synonym expansion
- - **GIN index on a generated tsvector column**: Keeps the index in sync automatically when product data changes, no application-level trigger logic needed.
-
- Skip this section if all technical choices are obvious or only affect a single task. -->
-
- ## 3. Tasks
-
- <!-- Break down the work into concrete, ordered tasks. Group related tasks under a heading when it aids readability.
-
- Each task MUST have:
- - A clear, imperative description of WHAT to do
- - **AC**: Acceptance criteria specific to this task — pass/fail verifiable
- - **Approach**: How to implement — concrete steps, files to touch, patterns to follow
- - Optionally, **Blocked by** if it depends on another task completing first
-
- Guidelines:
- - One logical change per task — if the description says "X and Y", split into two tasks
- - Order tasks by dependency — tasks that unblock others come first
- - Use precise verbs: implement, add, remove, migrate, validate — not "handle", "deal with", "look into"
- - AC must be specific enough to verify programmatically. Bad: "works correctly". Good: "GET /api/products/search?q=mouse returns HTTP 200 with matching products"
- -->
-
- ### [Group Name]
-
- - [ ] **Task 1**: [Description]
- - AC:
- - [Concrete testable condition]
- - [Concrete testable condition]
- - Approach: [How to implement — concrete steps, files to touch, patterns to follow]
-
- - [ ] **Task 2**: [Description]
- - AC:
- - [Concrete testable condition]
- - Approach: [How to implement]
- - Blocked by: Task 1
-
- ### [Group Name]
-
- - [ ] **Task 3**: [Description]
- - AC:
- - [Concrete testable condition]
- - Approach: [How to implement]
-
- ## 4. Dependencies
-
- <!-- List anything that must exist or be available BEFORE implementation can begin. Skip items already in place. Skip section entirely if no dependencies exist. -->
-
- ### Internal Dependencies
-
- <!-- Code, modules, or infrastructure within the project that tasks depend on — only if NOT already listed as a task above. If it is, use "Blocked by" on the task instead. -->
-
- ### External Dependencies
-
- <!-- Third-party services, APIs, packages, credentials, or infrastructure outside the project's control. Example: "PostgreSQL 15+ with full-text search support", "API gateway configured to allow the new endpoint". -->
-
- ## 5. Risks & Mitigation
-
- <!-- Identify what could go wrong during implementation and how to handle it. Focus on risks that would change the plan if they materialize. Skip generic risks like "bugs might occur". Skip section entirely if no meaningful risks exist.
-
- Format:
- - **Risk**: [What could go wrong]
- - Impact: [What happens if it does]
- - Mitigation: [How to prevent or respond]
- -->
-
- ## Rejection Reason
-
- <!-- Only fill this section if status is "rejected". Explain why the requirement is not feasible — cite specific constraints, technical limitations, or contradictions that make it unachievable. Delete this section if the plan is feasible. -->
@@ -1,45 +0,0 @@
- ---
- type: [feature | enhancement | bug-fix | refactor]
- size: [small | medium | large]
- ---
-
- ## 1. Problem
-
- <!-- Who is affected, when it occurs, what goes wrong. 2–5 bullets. No solutions. -->
-
- ## 2. User Stories
-
- <!-- "As a [user], I want [action] so that [benefit]" — include key workflows and edge cases. -->
-
- ## 3. Scope
-
- ### Goals
-
- <!-- Verifiable outcomes — each must be yes/no checkable. -->
-
- ### Non-Goals
-
- <!-- Explicitly excluded — anything someone might reasonably assume is in scope. -->
-
- ## 4. Acceptance Criteria
-
- <!-- Given [context], when [action], then [result]. Each must be pass/fail testable. -->
-
- ## 5. Constraints & Prerequisites
-
- ### Constraints
-
- <!-- Hard limits: tech stack, performance, security, regulations, deadlines. -->
-
- ### Prerequisites
-
- <!-- Things outside this scope but required for it to work — must already exist or be completed separately. -->
-
- ## 6. Glossary
-
- <!-- Domain-specific terms only. Skip section if all terms are self-explanatory. -->
-
- <!--
- | Term | Definition |
- | ---- | ---------- |
- -->
@@ -1,49 +0,0 @@
- ---
- status: [draft | completed]
- ---
-
- ## 1. Summary
-
- <!-- Write 2-4 sentences stating what was delivered and the overall outcome. This must be self-contained — do not assume prior context.
-
- Example: "Added full-text search to the product catalog API using PostgreSQL tsvector. Shoppers can now search products by name, description, and tags with relevance-ranked results. All acceptance criteria passed. Response time is under 150ms at p95." -->
-
- ## 2. Quality Checklist
-
- <!-- Complete each item. If skipping any, note the reason.
-
- - [ ] Dead code & unused files removed
- - [ ] Project standards followed (style, structure, patterns, linting)
- - [ ] No security issues (secrets, injection, auth bypass, etc.)
- - [ ] Performance acceptable (no N+1, unnecessary re-renders, large payloads, etc.)
- - [ ] All tests pass
- - [ ] Documentation up to date (skip if no user-facing docs exist)
- -->
-
- ## 3. Memories
-
- <!-- Record knowledge gained during this quest for future retrieval.
-
- Each entry:
- - **title**: Short, actionable (5-12 words)
- - **content**: Detailed explanation with context and examples
- - **tags**: Comma-separated domain keywords
- - **scope**: global | project:<name>
-
- ### [Title]
- - **content**: [Detailed explanation]
- - **tags**: [e.g., search, backend, postgresql]
- - **scope**: [e.g., global, project:catalog]
-
- Example:
-
- ### PostgreSQL tsvector requires GIN index for acceptable performance
- - **content**: Without a GIN index on the tsvector column, full-text search queries degrade to sequential scans. On a 100K row table, query time dropped from 800ms to 12ms after adding the index. Always create a GIN index when using tsvector.
- - **tags**: postgresql, search, performance, indexing
- - **scope**: project:catalog
-
- ### Test search relevance with real-world data, not synthetic data
- - **content**: Initial tests with synthetic data all passed, but real product names with typos and abbreviations exposed ranking issues. Always use a representative sample of production data for search quality tests.
- - **tags**: testing, search, data-quality
- - **scope**: global
- -->
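Each of the deleted templates opens with a `---` delimited front matter block carrying a status, type, or size field. As a minimal sketch of how a tool might read such a block, here is a hypothetical Python helper; it is not part of @itsflower/cli and the function name is an assumption for illustration only.

```python
# Illustrative only: extract key/value pairs from the leading
# "---" front matter block used by the templates above
# (e.g. "status: in-progress" or "type: feature").
def parse_front_matter(text: str) -> dict:
    """Return the fields of a leading ----delimited block, or {} if absent."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}  # no front matter block at the top of the document
    fields = {}
    for line in lines[1:]:
        if line.strip() == "---":  # closing delimiter ends the block
            break
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

doc = "---\nstatus: in-progress\n---\n\n## 1. Overview\n"
print(parse_front_matter(doc))  # {'status': 'in-progress'}
```

A real implementation would likely use a YAML parser instead, since front matter values such as `project:<name>` can contain colons of their own.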