impx 0.0.1.tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- impx-0.0.1/.claude/settings.local.json +9 -0
- impx-0.0.1/.claude/skills/linus/SKILL.md +113 -0
- impx-0.0.1/.claude/skills/ogilvy/SKILL.md +142 -0
- impx-0.0.1/CLAUDE.md +31 -0
- impx-0.0.1/HANDOFF.md +193 -0
- impx-0.0.1/PKG-INFO +27 -0
- impx-0.0.1/README.md +7 -0
- impx-0.0.1/base-plan.md +46 -0
- impx-0.0.1/conversation-logs/001-initial-interview.md +208 -0
- impx-0.0.1/conversation-logs/002-design-doc.md +157 -0
- impx-0.0.1/conversation-logs/003-integration-model.md +128 -0
- impx-0.0.1/current_ai/CLAUDE.md +335 -0
- impx-0.0.1/current_ai/commands/daily-concept-extended.md +63 -0
- impx-0.0.1/current_ai/commands/daily-concept.md +71 -0
- impx-0.0.1/current_ai/settings.json +136 -0
- impx-0.0.1/current_ai/skills/code-review/SKILL.md +134 -0
- impx-0.0.1/current_ai/skills/code-review/references/security-checklist.md +449 -0
- impx-0.0.1/current_ai/skills/code-review/scripts/analyze_complexity.py +268 -0
- impx-0.0.1/current_ai/skills/code-review-since/SKILL.md +102 -0
- impx-0.0.1/current_ai/skills/commit/SKILL.md +107 -0
- impx-0.0.1/current_ai/skills/discuss/SKILL.md +494 -0
- impx-0.0.1/current_ai/skills/execute/SKILL.md +491 -0
- impx-0.0.1/current_ai/skills/execution-report/SKILL.md +91 -0
- impx-0.0.1/current_ai/skills/explore/SKILL.md +466 -0
- impx-0.0.1/current_ai/skills/explore/references/exploration-strategies.md +344 -0
- impx-0.0.1/current_ai/skills/explore/references/feasibility-assessment.md +472 -0
- impx-0.0.1/current_ai/skills/explore/references/web-search-patterns.md +410 -0
- impx-0.0.1/current_ai/skills/explore/scripts/estimate_complexity.py +290 -0
- impx-0.0.1/current_ai/skills/plan-feature/SKILL.md +125 -0
- impx-0.0.1/current_ai/skills/prime/SKILL.md +187 -0
- impx-0.0.1/current_ai/skills/prime-deep/SKILL.md +192 -0
- impx-0.0.1/current_ai/skills/spec/SKILL.md +559 -0
- impx-0.0.1/current_ai/skills/validate/SKILL.md +422 -0
- impx-0.0.1/current_ai/skills/validate/references/quality-gates-reference.md +269 -0
- impx-0.0.1/current_ai/skills/validate/scripts/check_coverage.py +179 -0
- impx-0.0.1/design-doc.md +1738 -0
- impx-0.0.1/feedback-design-docs.md +25 -0
- impx-0.0.1/feedback-implematation.md +4 -0
- impx-0.0.1/initial-build-ticket-idea.md +137 -0
- impx-0.0.1/interview-agent-synthesis.md +466 -0
- impx-0.0.1/next.md +3 -0
- impx-0.0.1/pyproject.toml +32 -0
- impx-0.0.1/requirements.md +554 -0
- impx-0.0.1/research/clarity-interviewer/SKILL.md +131 -0
- impx-0.0.1/research/clarity-interviewer/SKILL.md:Zone.Identifier +0 -0
- impx-0.0.1/research/clarity-interviewer/references/domains/personal-clarity.md +192 -0
- impx-0.0.1/research/clarity-interviewer/references/domains/personal-clarity.md:Zone.Identifier +0 -0
- impx-0.0.1/research/clarity-interviewer/references/domains/software-discovery.md +154 -0
- impx-0.0.1/research/clarity-interviewer/references/domains/software-discovery.md:Zone.Identifier +0 -0
- impx-0.0.1/research/clarity-interviewer/references/story-based-questions.md +175 -0
- impx-0.0.1/research/clarity-interviewer/references/story-based-questions.md:Zone.Identifier +0 -0
- impx-0.0.1/research/code-review/ai-reviewer.mjs +682 -0
- impx-0.0.1/research/early-research/.claude/CLAUDE.md +335 -0
- impx-0.0.1/research/early-research/.claude/reference/aws-lambda-best-practices.md +124 -0
- impx-0.0.1/research/early-research/.claude/reference/database-standards.md +366 -0
- impx-0.0.1/research/early-research/.claude/reference/error-handling-patterns.md +565 -0
- impx-0.0.1/research/early-research/.claude/reference/fastapi-best-practices.md +523 -0
- impx-0.0.1/research/early-research/.claude/reference/git-workflow.md +174 -0
- impx-0.0.1/research/early-research/.claude/reference/habit-tracker-example.md +171 -0
- impx-0.0.1/research/early-research/.claude/reference/performance-optimization.md +492 -0
- impx-0.0.1/research/early-research/.claude/reference/piv-loop-methodology.md +558 -0
- impx-0.0.1/research/early-research/.claude/reference/pre-commit-hooks-guide.md +766 -0
- impx-0.0.1/research/early-research/.claude/reference/pydantic-best-practices.md +542 -0
- impx-0.0.1/research/early-research/.claude/reference/pytest-best-practices.md +304 -0
- impx-0.0.1/research/early-research/.claude/reference/rg-search-patterns.md +240 -0
- impx-0.0.1/research/early-research/.claude/reference/security-best-practices.md +253 -0
- impx-0.0.1/research/early-research/.claude/reference/style-conventions.md +745 -0
- impx-0.0.1/research/early-research/.claude/reference/uv-package-manager.md +186 -0
- impx-0.0.1/research/early-research/.claude/schemas/README.md +166 -0
- impx-0.0.1/research/early-research/.claude/schemas/api-patterns.yaml +114 -0
- impx-0.0.1/research/early-research/.claude/schemas/architecture-patterns.yaml +153 -0
- impx-0.0.1/research/early-research/.claude/schemas/database-decisions.yaml +88 -0
- impx-0.0.1/research/early-research/.claude/schemas/deployment-options.yaml +208 -0
- impx-0.0.1/research/early-research/.claude/schemas/task.yaml +91 -0
- impx-0.0.1/research/early-research/.claude/schemas/testing-strategies.yaml +109 -0
- impx-0.0.1/research/early-research/.claude/setting.example.json +65 -0
- impx-0.0.1/research/early-research/.claude/settings.local.json +145 -0
- impx-0.0.1/research/early-research/.claude/skills/code-review/SKILL.md +134 -0
- impx-0.0.1/research/early-research/.claude/skills/code-review/references/security-checklist.md +449 -0
- impx-0.0.1/research/early-research/.claude/skills/code-review/scripts/analyze_complexity.py +268 -0
- impx-0.0.1/research/early-research/.claude/skills/code-review-since/SKILL.md +102 -0
- impx-0.0.1/research/early-research/.claude/skills/commit/SKILL.md +106 -0
- impx-0.0.1/research/early-research/.claude/skills/discuss/SKILL.md +494 -0
- impx-0.0.1/research/early-research/.claude/skills/execute/SKILL.md +491 -0
- impx-0.0.1/research/early-research/.claude/skills/execute-prp/SKILL.md +48 -0
- impx-0.0.1/research/early-research/.claude/skills/execution-report/SKILL.md +91 -0
- impx-0.0.1/research/early-research/.claude/skills/explore/SKILL.md +466 -0
- impx-0.0.1/research/early-research/.claude/skills/explore/references/exploration-strategies.md +344 -0
- impx-0.0.1/research/early-research/.claude/skills/explore/references/feasibility-assessment.md +472 -0
- impx-0.0.1/research/early-research/.claude/skills/explore/references/web-search-patterns.md +410 -0
- impx-0.0.1/research/early-research/.claude/skills/explore/scripts/estimate_complexity.py +290 -0
- impx-0.0.1/research/early-research/.claude/skills/generate-prp/SKILL.md +74 -0
- impx-0.0.1/research/early-research/.claude/skills/implement-fix/SKILL.md +111 -0
- impx-0.0.1/research/early-research/.claude/skills/implement-plan/SKILL.md +127 -0
- impx-0.0.1/research/early-research/.claude/skills/pause/SKILL.md +84 -0
- impx-0.0.1/research/early-research/.claude/skills/plan/SKILL.md +419 -0
- impx-0.0.1/research/early-research/.claude/skills/plan-feature/SKILL.md +125 -0
- impx-0.0.1/research/early-research/.claude/skills/prime/SKILL.md +187 -0
- impx-0.0.1/research/early-research/.claude/skills/prime-deep/SKILL.md +192 -0
- impx-0.0.1/research/early-research/.claude/skills/rca/SKILL.md +131 -0
- impx-0.0.1/research/early-research/.claude/skills/resume-session/SKILL.md +109 -0
- impx-0.0.1/research/early-research/.claude/skills/session-start/SKILL.md +146 -0
- impx-0.0.1/research/early-research/.claude/skills/spec/SKILL.md +559 -0
- impx-0.0.1/research/early-research/.claude/skills/status/SKILL.md +80 -0
- impx-0.0.1/research/early-research/.claude/skills/task-complete/SKILL.md +124 -0
- impx-0.0.1/research/early-research/.claude/skills/task-create/SKILL.md +130 -0
- impx-0.0.1/research/early-research/.claude/skills/task-list/SKILL.md +92 -0
- impx-0.0.1/research/early-research/.claude/skills/task-update/SKILL.md +109 -0
- impx-0.0.1/research/early-research/.claude/skills/validate/SKILL.md +422 -0
- impx-0.0.1/research/early-research/.claude/skills/validate/references/quality-gates-reference.md +269 -0
- impx-0.0.1/research/early-research/.claude/skills/validate/scripts/check_coverage.py +179 -0
- impx-0.0.1/research/early-research/CLAUDE-TEMPLATE.md +258 -0
- impx-0.0.1/research/early-research/GOALS.md +1656 -0
- impx-0.0.1/research/early-research/examples/pydantic-ai-poc/README.md +373 -0
- impx-0.0.1/research/early-research/examples/pydantic-ai-poc/abstract_interface_demo.py +293 -0
- impx-0.0.1/research/early-research/examples/pydantic-ai-poc/document_agent_poc.py +144 -0
- impx-0.0.1/research/early-research/examples/pydantic-ai-poc/provider_comparison.py +190 -0
- impx-0.0.1/research/early-research/my-current-process.md +18 -0
- impx-0.0.1/research/early-research/thoughts.md +56 -0
- impx-0.0.1/research/hooks/hooks/Readme.md +1 -0
- impx-0.0.1/research/hooks/hooks/notify-permission.sh +4 -0
- impx-0.0.1/research/hooks/hooks/post-commit-confirm.sh +9 -0
- impx-0.0.1/research/hooks/hooks/post-edit-format.sh +12 -0
- impx-0.0.1/research/hooks/hooks/post-failure-autofix.sh +21 -0
- impx-0.0.1/research/hooks/hooks/pre-commit-lint.sh +13 -0
- impx-0.0.1/research/hooks/hooks/session-start-piv.sh +10 -0
- impx-0.0.1/research/hooks/hooks-test/README.md +474 -0
- impx-0.0.1/research/hooks/hooks-test/pyproject.toml +21 -0
- impx-0.0.1/research/hooks/hooks-test/src/__init__.py +0 -0
- impx-0.0.1/research/hooks/hooks-test/src/math_utils.py +11 -0
- impx-0.0.1/research/hooks/hooks-test/tests/__init__.py +0 -0
- impx-0.0.1/research/hooks/hooks-test/tests/test_math.py +18 -0
- impx-0.0.1/research/hooks/hooks-test/uv.lock +239 -0
- impx-0.0.1/research/hooks/precompact.md +11 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/CLAUDE.md +236 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/SKILL.md +88 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/design.md +90 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/dev.md +63 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/evaluate.md +56 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/idea.md +54 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/mnt/user-data/outputs/engineering-vault/.claude/skills/cost-tracker/SKILL.md +86 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/mnt/user-data/outputs/engineering-vault/.claude/skills/mermaid-diagrams/SKILL.md +102 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/mnt/user-data/outputs/engineering-vault/.claude/skills/research-report/SKILL.md +94 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/mnt/user-data/outputs/engineering-vault/.claude/skills/ticket-format/SKILL.md +82 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/plan.md +50 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/research.md +88 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/skills/cost.md +86 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/skills/critic.md +88 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/skills/mermaid.md +102 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/skills/research_report.md +94 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/skills/ticket.md +82 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/spec.md +110 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/status.md +57 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/test.md +71 -0
- impx-0.0.1/research/opus_first_ideas/agentresearch/update-system.md +52 -0
- impx-0.0.1/research/opus_first_ideas/claude_agent_teams.md +148 -0
- impx-0.0.1/research/opus_first_ideas/commands_and_info/.gitignore +37 -0
- impx-0.0.1/research/opus_first_ideas/commands_and_info/AGENT_DELEGATION.md +143 -0
- impx-0.0.1/research/opus_first_ideas/commands_and_info/CLAUDE.md +236 -0
- impx-0.0.1/research/opus_first_ideas/commands_and_info/ai-engineer.md +39 -0
- impx-0.0.1/research/opus_first_ideas/commands_and_info/architect.md +56 -0
- impx-0.0.1/research/opus_first_ideas/commands_and_info/backend-dev.md +38 -0
- impx-0.0.1/research/opus_first_ideas/commands_and_info/code-reviewer.md +76 -0
- impx-0.0.1/research/opus_first_ideas/commands_and_info/data-engineer.md +37 -0
- impx-0.0.1/research/opus_first_ideas/commands_and_info/devops.md +35 -0
- impx-0.0.1/research/opus_first_ideas/commands_and_info/documentation.md +53 -0
- impx-0.0.1/research/opus_first_ideas/commands_and_info/frontend-dev.md +38 -0
- impx-0.0.1/research/opus_first_ideas/commands_and_info/performance-evaluator.md +88 -0
- impx-0.0.1/research/opus_first_ideas/commands_and_info/pm.md +63 -0
- impx-0.0.1/research/opus_first_ideas/commands_and_info/qa-tester.md +58 -0
- impx-0.0.1/research/opus_first_ideas/commands_and_info/researcher.md +48 -0
- impx-0.0.1/research/opus_first_ideas/commands_and_info/settings.json +76 -0
- impx-0.0.1/research/opus_first_ideas/commands_and_info/settings.local.json +11 -0
- impx-0.0.1/src/imp/__init__.py +3 -0
impx-0.0.1/.claude/skills/linus/SKILL.md
ADDED

@@ -0,0 +1,113 @@

---
name: linus
description: Apply brutally honest engineering critique to design proposals. Use when evaluating architecture decisions, integration approaches, or abstractions before building them. Helps catch hidden complexity and maintenance nightmares early.
---

# Linus Critique

You are channeling Linus Torvalds' technical critique style: blunt, no-nonsense, focused on what actually works vs what sounds elegant. Your job is to find the real problems in a design before anyone wastes time building it.

**When invoked:** Apply systematic critical thinking to the proposal (either passed as $ARGUMENTS or from recent conversation context).

## Framework

Apply these checks systematically:

### 1. Identity Check
What is this claiming to be? Is it trying to be two things at once?
- "This is a plugin" vs "This is the runtime" — pick one
- "This is an abstraction" vs "This is an implementation" — which is it really?
- Look for incompatible identities hiding in the same design

### 2. Dependency Analysis
What does this depend on to work?
- Do those dependencies have stable APIs or contracts?
- Are you building against moving targets?
- Will you be rewriting this every time X updates?
- Is the dependency actually maintained or dying?

### 3. Abstraction Tax
If this introduces an ABC/interface/abstraction layer:
- Does it actually hide complexity or just move it around?
- Will it be full of special cases for each implementation?
- Can you name 3 real implementations that fit cleanly? Or is it theoretical?
- Is this solving a problem you actually have or might have someday?

### 4. Hidden Complexity
What looks simple but isn't?
- "We'll sync the state" — okay, now you're building a sync engine. Ready for merge conflicts?
- "It's just a config file" — that grows into an unmaintainable monster
- "We'll generate it" — great, now you have code generation to maintain
- "It's backwards compatible" — with what version? For how long?

### 5. Maintenance Surface
When things change (and they will):
- How many places do you have to update?
- Does this create coupling you'll regret later?
- What happens when the underlying tool changes its config format?
- Are you creating a two-source-of-truth problem?

### 6. The Real Problem
Stop and ask:
- What problem are you actually trying to solve?
- Does THIS design solve THAT problem?
- Or does it solve a different problem you just invented?
- Are you creating new problems to justify the solution?

### 7. Build Order Reality Check
- Can you validate this before fully committing to it?
- Or are you designing the whole thing upfront with no feedback loop?
- What's the simplest version that proves/disproves the core assumption?
- Have you built anything like this before, or is it all theoretical?

## Output Format

Structure your critique like this:

```
Let me tear into this.

[Identity/Core Problem - what's wrong at the fundamental level]

[Specific Issues - numbered list of concrete problems]
1. **[Issue category]:** [What's wrong and why it'll bite you]
2. **[Issue category]:** [What's wrong and why it'll bite you]
...

[What You're Actually Building - the hidden complexity]
You think you're building X. You're actually building Y, and that's a full project.

[The Alternative - what would actually work]
Here's what would work: [concrete alternative]

[Bottom Line - one sentence summary]
```

## Style Guide
- **Be blunt.** Don't soften criticism with "I think" or "perhaps" or "you might consider"
- **Be specific.** Don't say "this is complex" — say "this requires a merge strategy for config drift"
- **Be practical.** Point to real consequences, not theoretical problems
- **Be constructive.** After tearing it apart, offer what would actually work
- **Use examples.** "Remember when X did this? It failed because Y"
- **No jargon hiding.** If you use a technical term, immediately explain what it actually means in practice

## Key Phrases to Use
- "Let's be honest about what you're building..."
- "This sounds elegant until you hit..."
- "You're not building X, you're building Y, and Y is..."
- "Here's what actually happens when..."
- "This is [problem] wearing a [different name] hat"
- "Stop designing. Build the simplest version that..."

## What NOT to Do
- Don't be mean about the person — critique the design
- Don't be theoretical — point to practical consequences
- Don't just say "no" — offer the alternative that would work
- Don't assume malice — assume they haven't thought through the consequences yet

## Success Criteria
After your critique, the user should:
1. Understand exactly what's wrong with the proposal
2. Know what hidden complexity they'd be taking on
3. Have a concrete alternative to consider
4. Feel like they dodged a bullet, not like they got yelled at
impx-0.0.1/.claude/skills/ogilvy/SKILL.md
ADDED

@@ -0,0 +1,142 @@

---
name: ogilvy
description: Apply rigorous naming and brand strategy to packages, tools, commands, and features. Use when choosing names, evaluating name candidates, or building CLI identity. Channeling David Ogilvy — the man who believed a bad name could kill a great product.
---

# Ogilvy — Naming Strategist

You are channeling David Ogilvy's approach to naming: research-driven, linguistically precise, and ruthlessly practical. A name isn't decoration — it's the first piece of your product that people encounter, and most of them will never get past it. Your job is to find the name that works hardest.

**When invoked:** Evaluate names or generate naming candidates for the subject (either passed as $ARGUMENTS or from recent conversation context).

## Philosophy

"The consumer isn't a moron; she's your wife." And the developer isn't an idiot — they're someone scanning a list of 50 packages at 11pm trying to find the one that does what they need. Your name has to work in that moment.

A name must do three things:
1. **Stick** — memorable after one encounter
2. **Signal** — hint at what the thing does
3. **Survive** — work in every context it'll appear (CLI, docs, conversation, `pip install`)

## Framework

Apply these checks systematically to any name candidate:

### 1. The Terminal Test
Type it. Say it. Read it in a sentence.
- `pip install ____` — does it flow or fight the keyboard?
- `____ init` / `____ check` — does it work as a CLI prefix?
- "I ran ____ on the repo" — does it sound like a tool or a disease?
- "We use ____ for our workflow" — does it hold up in conversation?
- Is it under 10 characters? Under 8 is better. Under 6 is ideal.

### 2. The Semantic Check
What does the name actually communicate?
- Does it hint at what the tool does? Even obliquely?
- Does it accidentally suggest something it doesn't do?
- Is it a real word, a coined word, or a portmanteau? Each has trade-offs:
  - **Real word:** Instant meaning, but harder to claim (search, domain, package name)
  - **Coined:** Ownable, but empty until you fill it with meaning
  - **Portmanteau:** Can be clever or can be cringe — there's almost no middle ground
- Does the name create the right *feeling*? Fast/slow, heavy/light, precise/creative?

### 3. The Availability Sweep
Names that are taken are dead names. Check:
- PyPI (`pip install ____`)
- npm (if relevant)
- GitHub org/repo
- Domain (nice-to-have, not required for dev tools)
- Is there a well-known project with this name in an adjacent space?
- Will it collide in search results with something unrelated?

### 4. The Longevity Test
Names age. Some age well, some don't.
- Is this name tied to a feature that might change? (Don't name it "AutoPR" if it might do more than PRs)
- Does it scale to a broader scope without sounding wrong?
- Will you be embarrassed by this name in 2 years?
- Is it too clever? Cleverness fades. Clarity compounds.

### 5. The Competitive Position
Where does this name sit in the landscape?
- Does it sound like its competitors or stand apart?
- Does it accidentally imply it's a wrapper/plugin for something else?
- Would someone mistake it for an existing tool?

### 6. Sound Symbolism
This isn't pseudoscience — Ogilvy tested this.
- Hard consonants (k, t, p) feel fast, precise, technical
- Soft sounds (l, m, w) feel smooth, approachable
- Short vowels (i, e) feel small and quick
- Long vowels (o, a) feel expansive
- Does the sound match the personality of the tool?

## Process

When generating names (not just evaluating):

1. **Start with the job** — what does this tool actually do? Write it in one sentence.
2. **Map the semantic field** — list 15-20 words associated with that job. Verbs, nouns, metaphors.
3. **Generate candidates** — combine, compress, twist. Produce 8-12 raw candidates.
4. **Kill round** — apply the framework above. Cut to 3-5 survivors.
5. **Present finalists** — each with rationale and one honest weakness.

## Output Format

### When evaluating a name:
```
Here's what I think about "____."

**What it gets right:** [strengths]
**Where it breaks:** [problems, be specific]
**Terminal test:** `pip install ____` / `____ init` — [verdict]
**Verdict:** [Keep / Kill / Rework] — [one sentence why]
```

### When generating names:
```
The job: [one sentence description of what the tool does]

**Finalists:**

1. **name** — [why it works]. Weakness: [honest downside].
   `pip install name` / `name init` / "We use name for..."

2. **name** — [why it works]. Weakness: [honest downside].
   `pip install name` / `name init` / "We use name for..."

3. **name** — [why it works]. Weakness: [honest downside].
   `pip install name` / `name init` / "We use name for..."

**My pick:** [which one and why, in one sentence]
```

## Style Guide
- **Be decisive.** Rank your candidates. Have a pick. Don't say "they're all good options."
- **Be specific about why.** Not "it sounds nice" — "the hard 'k' gives it a technical edge and it's 4 characters to type."
- **Be honest about weaknesses.** Every name has a downside. Name it.
- **Test in context.** Always show the name in real usage — CLI commands, sentences, install commands.
- **Kill fast.** If a name fails any framework check badly, don't spend time on it.
- **Respect the keyboard.** Developers type names hundreds of times. Every character counts.

## Key Phrases to Use
- "A name is a promise. What is this one promising?"
- "Type it ten times. Still like it?"
- "The best name is the one that needs no explanation."
- "You're not naming a baby. You're naming a tool people will type at midnight."
- "If you have to explain the name, you've already lost."
- "Good names are discovered, not invented."

## What NOT to Do
- Don't generate long lists of 20+ options — that's lazy brainstorming, not strategy
- Don't suggest names without checking if they're plausibly available
- Don't fall in love with cleverness over clarity
- Don't ignore the CLI context — this isn't naming a startup, it's naming a dev tool
- Don't suggest names that are hard to spell or pronounce
- Don't use -ify, -ly, -io suffixes unless they genuinely work

## Success Criteria
After your naming session, the user should:
1. Have 3-5 strong candidates with clear trade-offs
2. Understand why each name works or doesn't
3. Be able to make a confident decision
4. Feel like the name was chosen, not settled for
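(Editor's aside, not part of the package diff.) The skill's Terminal Test reduces partly to mechanical checks: render the candidate in its CLI contexts and score its length against the stated thresholds (under 10 acceptable, under 8 better, under 6 ideal). A minimal Python sketch of just that mechanical part; the function names are invented for illustration:

```python
def terminal_length_verdict(name: str) -> str:
    """Score a candidate against the skill's length thresholds:
    under 6 characters is ideal, under 8 is better, under 10 is acceptable."""
    n = len(name)
    if n < 6:
        return "ideal"
    if n < 8:
        return "better"
    if n < 10:
        return "acceptable"
    return "too long"


def terminal_contexts(name: str) -> list[str]:
    """Render the name in the usage contexts the Terminal Test lists."""
    return [
        f"pip install {name}",
        f"{name} init",
        f"{name} check",
        f'"I ran {name} on the repo"',
    ]


if __name__ == "__main__":
    for candidate in ("impx", "autopr", "clarity-interviewer"):
        print(f"{candidate}: {terminal_length_verdict(candidate)}")
        for line in terminal_contexts(candidate):
            print(f"  {line}")
```

Length is only one gate; the semantic, availability, and sound checks in the skill still require human judgment (or queries against the actual registries).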
impx-0.0.1/CLAUDE.md
ADDED
@@ -0,0 +1,31 @@

# TeamBuild — AI-Powered Engineering Workflow System

## Start Here

1. Read [HANDOFF.md](HANDOFF.md) — current project state, what's been decided, what's next, and the full research inventory
2. Read [requirements.md](requirements.md) — standalone requirements spec (extracted from interview)
3. Check [conversation-logs/](conversation-logs/) — numbered transcripts of all project conversations

## Key Context Files

| File | Purpose |
|------|---------|
| `HANDOFF.md` | **Current state + next task** — always read this first |
| `requirements.md` | v0.1 requirements spec with build order and module definitions |
| `base-plan.md` | Original plan (superseded by requirements.md, kept for context) |
| `initial-build-ticket-idea.md` | Original inspiration — full vision beyond v0.1 |
| `conversation-logs/` | All project conversations, numbered in order |
| `research/` | Reference material — see HANDOFF.md for what's been reviewed vs. not |
| `current_ai/` | Josh's current Claude Code setup — baseline for skills and config |

## Rules

- **Context efficiency is paramount** — NEVER front-load an entire codebase into context. Use targeted, lazy loading. Read index notes first, drill down only when needed. See requirements.md for the full Context Window Management spec.
- **Modular everything** — every component must work independently, plug-and-play per project/user
- **Provider-agnostic** — no AI provider lock-in, abstraction layer from day one
- **TDD** — tests before implementation, always
- **Strict linting and type checks** — no exceptions
- **Track costs** — token usage, actual costs, context window health on everything
- **Circuit breaker** — all agent loops must have a max retry (3-5) before escalating to human
- **Open source first** — self-hosted where possible
- **Log every conversation** — save to `conversation-logs/` with sequential numbering before ending a session
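(Editor's aside, not part of the package diff.) The circuit-breaker rule in the Rules list above (every agent loop gets a bounded retry budget of 3-5 attempts, then a human is pulled in) maps directly to a small amount of code. A minimal illustrative sketch, assuming a task callable that reports success or failure; the names here are hypothetical, not the project's actual API:

```python
class HumanEscalation(Exception):
    """Raised when an agent loop exhausts its retry budget."""


def run_with_circuit_breaker(attempt_task, max_attempts: int = 3):
    """Run an agent task up to max_attempts times (the rules allow 3-5),
    then escalate to a human instead of looping forever."""
    failures = []
    for attempt in range(1, max_attempts + 1):
        result = attempt_task(attempt)  # expected to return {"ok": bool, ...}
        if result.get("ok"):
            return result
        failures.append(result.get("error"))
    raise HumanEscalation(
        f"{max_attempts} attempts failed; escalating to human. Errors: {failures}"
    )
```

The point is the hard upper bound: an agent that cannot self-validate within the budget surfaces its failure history to a human instead of burning tokens indefinitely.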
impx-0.0.1/HANDOFF.md
ADDED
|
@@ -0,0 +1,193 @@
|
|
|
1
|
+
# TeamBuild — Current Handoff State
|
|
2
|
+
|
|
3
|
+
**Last Updated:** 2026-02-10
|
|
4
|
+
**Last Phase Completed:** Design Doc finalized (003)
|
|
5
|
+
**Current Phase:** Design complete — ready to build
|
|
6
|
+
|
|
7
|
+
---
|
|
8
|
+
|
|
9
|
+
## What Is This Project?
|
|
10
|
+
|
|
11
|
+
An AI-powered engineering workflow system that manages the full development lifecycle — from idea through maintenance — using specialized AI agents. The system should be modular, provider-agnostic, and self-improving through built-in metrics.
|
|
12
|
+
|
|
13
|
+
Owner: Josh @ MetrIQ
|
|
14
|
+
|
|
15
|
+
---
|
|
16
|
+
|
|
17
|
+
## What Has Been Decided
|
|
18
|
+
|
|
19
|
+
All decisions below were made during the initial interview ([conversation-logs/001-initial-interview.md](conversation-logs/001-initial-interview.md)).
|
|
20
|
+
|
|
21
|
+
### v0.1 Build Order
|
|
22
|
+
1. **Interview Agent** — two modes: direct interview + gap analysis on incomplete specs
|
|
23
|
+
2. **PM Integration** — Plane first (personal), Linear-ready (work). Single source of truth for all agent coordination
|
|
24
|
+
3. **Coding Agent Team** — high autonomy, TDD, self-validating. Spec → PR pipeline
|
|
25
|
+
4. **Critique/Review Layer** — applies at every phase. Circuit breaker: 3-5 attempts then escalate to human
|
|
26
|
+
5. **Cost/Performance Metrics** — built BY items 1-4 as a dogfooding test
|
|
27
|
+
6. **Provider Abstraction** — model-agnostic from day one. Validate Pydantic AI first
|
|
28
|
+
|
|
29
|
+
### Hard Requirements
|
|
30
|
+
- Modular components — every piece must be plug-and-play, usable independently per project/user
|
|
31
|
+
- TDD, strict linting, type checks
|
|
32
|
+
- Python for AI orchestration, TypeScript for web frontend, best tool otherwise
|
|
33
|
+
- Open source first, self-hosted where possible
|
|
34
|
+
- Circuit breaker on all agent loops (3-5 attempts max)
|
|
35
|
+
- Track tokens, costs, context window health everywhere
|
|
36
|
+
- Terminal-first workflow, desktop + PM notifications for v1
|
|
37
|
+
|
|
38
|
+
### What Success Looks Like
|
|
39
|
+
Hand off a feature spec → agents build through PR with real validation → human only steps in when genuinely stuck or for final approval. PM tool shows exactly what's happening. Cost and quality metrics prove the system is effective.
|
|
40
|
+
|
|
41
|
+
---
|
|
42
|
+
|
|
43
|
+
## Critical Architectural Constraint: Context Window Management

**This is the single biggest technical risk in the system.** See [requirements.md — Context Window Management](requirements.md) for full details.

**The problem:** Previous "prime" approaches front-loaded entire codebase understanding into context, consuming the smart window before agents could do real work. On large codebases, this made agents functionally useless.

**The solution:** Targeted, lazy context loading. Agents fetch only what's relevant to their specific task. Key mechanisms:
- Atomic index notes per module (Zettelkasten-inspired navigation)
- Task-scoped context driven by PM ticket scope markers
- Context budget tracking with degradation warnings
- Shared knowledge layer (external storage, not context window)
- Auto-maintained indexes updated as agents work
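The budget-tracking mechanism above could look roughly like this, using the ~120K smart window and 150K hard ceiling figures decided elsewhere in this document. The class, thresholds, and return values are a sketch, not a designed interface.

```python
from dataclasses import dataclass


@dataclass
class ContextBudget:
    """Track context consumption against the budgets in the design decisions."""
    smart_window: int = 120_000   # where quality starts to degrade
    hard_ceiling: int = 150_000   # never load past this
    used: int = 0

    def charge(self, tokens: int) -> str:
        """Record a context load and return a health status."""
        if self.used + tokens > self.hard_ceiling:
            raise RuntimeError("context hard ceiling exceeded; refuse the load")
        self.used += tokens
        if self.used > self.smart_window:
            return "degraded"      # emit a degradation warning
        if self.used > int(self.smart_window * 0.8):
            return "warning"       # approaching the smart window
        return "ok"
```

Making `charge` the only way to add context is what lets the warnings fire everywhere for free.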
**This affects every module's design** — see the impact table in requirements.md.

---
## What Has NOT Been Decided

- Beads format — referenced for token/project management, needs research
- Notification implementation (desktop notification library/approach)
- DuckDB vs SQLite for metrics storage (research needed)
- Plane API specifics (research needed before the PM module)
- tree-sitter Python bindings for AST parsing (research needed before `teambuild init`)

---
## Completed: Design Doc with Mermaid Diagrams

**Output:** [design-doc.md](design-doc.md)

The design doc covers all required areas with Mermaid diagrams:

- High-level system architecture (all modules and how they connect)
- Module dependency graph (what's independent, what depends on what)
- Agent lifecycle (how an agent picks up work, does it, validates, reports)
- Critique/review flow (two-pass architecture + circuit breaker pattern)
- PM integration data flow (ticket creation → assignment → status → completion)
- Metrics collection points (6 collection points annotated on the architecture)
- Context window management (atomic indexes, progressive disclosure, budget tracking)
- Interview Agent internal architecture (detailed enough to build from)
- Package structure for the full project
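Since the `.index.md` format itself is still being designed, here is a purely illustrative shape for a root index that routes agents to a few module indexes; the paths and one-line summaries are hypothetical.

```markdown
# teambuild — root index

Routing hints for agents: open at most 2-3 module indexes per task.

- `providers/.index.md` — AgentProvider ABC, Pydantic AI + mock implementations
- `pm/.index.md` — PMAdapter ABC, Plane/Linear adapters, ticket models
- `metrics/.index.md` — collector, JSONL store, PM export
```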
### Design Decisions Resolved

| Question | Decision |
|----------|----------|
| **TeamBuild identity** | Workflow framework — owns planning, validation, review, metrics. Code execution is pluggable. |
| **Integration model** | CLI + git-native `.index.md` files. No agent runtime dependency. Config in `[tool.teambuild]`. |
| **Project understanding** | Three-layer: static analysis + AST/tree-sitter + lazy AI summarization. 80% free. |
| **Managed executor** | Optional `teambuild code` wrapping coding agents. Built last. |
| **Agent runtime config** | Thin optional generators (`teambuild setup`), NOT an adapter pattern. |
| **PM abstraction** | Adapter pattern (PlaneAdapter, LinearAdapter implement a PMAdapter ABC). |
| **Agent coordination** | PM-driven + git worktrees for file isolation; 3-tier delegation (solo/subagent/team). |
| **Metrics storage** | JSONL → DuckDB/SQLite (research needed), all behind a MetricsStore ABC, with PM export. |
| **Module interfaces** | Python packages + Pydantic models for all inputs/outputs. |
| **Atomic index implementation** | Hierarchical `.index.md` — root index routes to 2-3 relevant module indexes. |
| **Context budgeting** | Smart window ~120K, housekeeping 30K, hard ceiling 150K. |
| **Package structure** | Monorepo + import-linter enforced layer boundaries. Tests colocated. |
| **Self-validation** | Standalone `teambuild check` CLI. Language-agnostic categories resolved per-language. |
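The PM abstraction decision above can be sketched as an ABC. `InMemoryAdapter` is a hypothetical test double; the real PlaneAdapter/LinearAdapter method names and ticket model are not yet specified, so everything here is illustrative.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Ticket:
    id: str
    title: str
    status: str = "todo"


class PMAdapter(ABC):
    """One interface; PlaneAdapter and LinearAdapter supply the HTTP details."""

    @abstractmethod
    def create_ticket(self, title: str) -> Ticket: ...

    @abstractmethod
    def set_status(self, ticket_id: str, status: str) -> None: ...


class InMemoryAdapter(PMAdapter):
    """Stand-in backend for tests; real adapters would call the Plane or Linear API."""

    def __init__(self) -> None:
        self._tickets: dict[str, Ticket] = {}

    def create_ticket(self, title: str) -> Ticket:
        ticket = Ticket(id=f"T-{len(self._tickets) + 1}", title=title)
        self._tickets[ticket.id] = ticket
        return ticket

    def set_status(self, ticket_id: str, status: str) -> None:
        self._tickets[ticket_id].status = status
```

Because callers only ever see `PMAdapter`, swapping Plane for Linear (work vs. personal) is a config change, not a code change.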
---

## Next Task: Build Module 1

Design doc is complete. All outstanding items from conversation 002 have been written into design-doc.md (sections 3, 4, 8, 9).

### Key Design Decisions Made in Conversation 003
1. **TeamBuild identity resolved** — Workflow framework, not agent runtime. Owns planning, validation, review, metrics. Code execution is a pluggable slot.
2. **Integration model** — CLI + git-native `.index.md` files. No `.claude/` dependency. No `.teambuild/` directory. Config in `[tool.teambuild]` in pyproject.toml.
3. **Three-layer project understanding** — Static analysis (free) + AST/tree-sitter (free) + AI summarization (lazy, cached). `teambuild init` runs L1+L2 for free, plus one AI call for the root index.
4. **Managed executor is optional, built last** — `teambuild code` wraps coding agents with rules + monitoring. Not core to the system. Built after everything else is proven.
5. **No adapter pattern for agent runtimes** — Thin, optional setup generators instead (`teambuild setup claude`). Not an abstraction layer.
6. **Validation is standalone** — `teambuild check` works regardless of what wrote the code. Wired into git hooks or CI, not agent-specific hooks.
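To make decision 2 concrete, a `[tool.teambuild]` block in pyproject.toml might look like this; every key shown is hypothetical, since no configuration schema has been decided.

```toml
[tool.teambuild]
# hypothetical keys, for illustration only; the real schema is undecided
pm = "plane"                              # or "linear"
context.smart_window = 120000
context.hard_ceiling = 150000
check.gates = ["lint", "types", "tests"]
```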
### Build Sequence (from design-doc.md §14)
1. **Project scaffolding** — pyproject.toml with uv, import-linter, ruff, mypy, pytest
2. **Provider Abstraction** — AgentProvider ABC, Pydantic AI implementation, MockProvider, config
3. **Metrics Collection** — lightweight collector lib, JSONL store
4. **Interview Agent** — InterviewAgent, QuestionGenerator, CompletenessTracker, NoteKeeper, OutputFormatter, domain libraries, CLI command
5. **PM Integration** — PMAdapter ABC, PlaneAdapter, ticket models
6. **Validation Runner** — `teambuild check`, language-agnostic gates, auto-detection
7. **Critique/Review Layer** — two-pass review, agentPrompt, circuit breaker
8. **Cost/Performance Metrics** (full module) — storage, dashboards, PM export. Built BY items 1-7.
9. **Context & Index Manager** — three-layer analysis, `.index.md` generation
10. **Managed Executor** (optional) — `teambuild code`, wraps coding agents
11. **Setup Generators** (optional) — `teambuild setup claude/cursor/generic`
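Item 2's provider seam might be sketched like this. `AgentResult`, the `run` signature, and `MockProvider`'s behavior are assumptions for illustration, not the designed interface; the real implementation would delegate to Pydantic AI.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class AgentResult:
    text: str
    input_tokens: int
    output_tokens: int


class AgentProvider(ABC):
    """Model-agnostic seam: Pydantic AI, or any future runtime, plugs in here."""

    @abstractmethod
    def run(self, prompt: str) -> AgentResult: ...


class MockProvider(AgentProvider):
    """Deterministic provider so every module is testable without API calls."""

    def __init__(self, canned: str = "ok") -> None:
        self.canned = canned
        self.calls: list[str] = []

    def run(self, prompt: str) -> AgentResult:
        self.calls.append(prompt)
        return AgentResult(
            self.canned, input_tokens=len(prompt.split()), output_tokens=1
        )
```

Building MockProvider in the same step as the ABC is what lets modules 3-9 be developed and tested before any real model is wired in.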
---

## Research Inventory

### Reviewed and Incorporated into Requirements
- `base-plan.md` — original build plan, now superseded by requirements.md
- `initial-build-ticket-idea.md` — original build ticket idea, key ideas incorporated
- `research/clarity-interviewer/` — story-based interview methodology (Teresa Torres, Dylan Davis). **Synthesized into the Interview Agent spec** — see `interview-agent-synthesis.md` for the full comparison. Key mechanics incorporated: story-based questioning, completeness tracking, dynamic question generation, domain libraries, explorer mindset.
### Reviewed and Incorporated into Design Doc (002)

All research below was reviewed in conversation 002 and informed the design doc (`design-doc.md`).

#### `research/early-research/`
- **Pydantic AI POC** — validated the provider abstraction pattern. A generic `AgentProvider` interface works across Anthropic, OpenAI, Bedrock, and Ollama. Adopted as the foundation for the Provider Abstraction layer.
- **25 skills (PIV-Swarm workflow)** — prime → explore → discuss → spec → plan → execute → validate → commit. Schema-driven decisions reduce questions by 70%. Progressive context loading (Level 1/2/3). Fresh context per task via `context: fork`. All patterns incorporated into the design.
- **GOALS.md, thoughts.md, my-current-process.md** — Josh's earlier thinking. Key insight: "Context is King, Sessions are Natural, Quality is Non-Negotiable." Token budget management, multi-session architecture, precompact hooks.
- **Decision schemas** — architecture-patterns.yaml, api-patterns.yaml, testing-strategies.yaml, database-decisions.yaml, deployment-options.yaml. Smart defaults pattern adopted for Interview Agent question reduction.
#### `research/hooks/`

- Hook patterns adopted for Critique/Review Layer Pass 1 (lightweight, synchronous checks). Pre-commit lint, post-edit format, post-failure autofix with targeted fix instructions. Circuit breaker pattern via stateful retry tracking. Context compaction rules (compact early, never without a plan + report).
#### `research/opus_first_ideas/`

- **12 specialist agent roles** reviewed (Researcher, Architect, PM, Backend/Frontend/Data/AI/DevOps Engineers, Code Reviewer, QA Tester, Docs Writer, Performance Evaluator). Informed the agent coordination model.
- **10-phase lifecycle** reviewed (Idea → Research → Design → Spec → Plan → Dev → Test → Evaluate → Update System → Maintenance). Informed the overall system architecture.
- **3-tier delegation model** adopted (Solo / Subagent / Team) with cost implications (1x / ~2x / ~5x).
- **5 reusable skills** reviewed (Critique, Cost Tracker, Research Report, Ticket Format, Mermaid Diagrams).
#### `current_ai/`

- Josh's current Claude Code setup reviewed. PIV-Swarm workflow, smart defaults, 3-tier testing, helper scripts (analyze_complexity.py, check_coverage.py, estimate_complexity.py). Skills mapped to TeamBuild modules: /prime → Context Agent, /explore → Research Agent, /discuss → Interview Agent, /spec → PM Integration, /execute → Coding Agent Team, /code-review → Critique/Review Layer.
#### `research/code-review/`

- **ai-reviewer.mjs** — GitHub Actions AI code reviewer using Bedrock. Key concepts adopted for the design:
  - **`agentPrompt` field** — AI-to-AI handoff prompt per issue. Review produces fix prompts that coding agents pick up directly. Turns review into a pipeline, not a handoff.
  - **False positive prevention** — mandatory 5-point self-check, banned speculative language, "zero issues = ideal outcome."
  - **Priority-based truncation** — file priority weights for context budget allocation.
  - **Stale comment resolution** — auto-minimize resolved issues. Adopted for PM integration.
  - **Two-pass review** — automated checks (Pass 1) + AI deep review (Pass 2).

---
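The `agentPrompt` handoff concept above can be illustrated with a minimal issue model. The field names and the queue helper are illustrative assumptions and do not mirror the actual ai-reviewer.mjs schema.

```python
from dataclasses import dataclass


@dataclass
class ReviewIssue:
    """One finding from Pass 2; agent_prompt makes it machine-actionable."""
    file: str
    line: int
    severity: str          # e.g. "high" | "medium" | "low"
    description: str       # human-readable finding
    agent_prompt: str      # fix instructions a coding agent can execute directly


def to_fix_queue(issues: list[ReviewIssue]) -> list[str]:
    """High-severity fixes first: review output becomes a pipeline, not a handoff."""
    order = {"high": 0, "medium": 1, "low": 2}
    return [
        issue.agent_prompt
        for issue in sorted(issues, key=lambda i: order.get(i.severity, 3))
    ]
```

Carrying the fix prompt inside the issue is what lets the Critique/Review Layer feed the Coding Agent Team directly, with the circuit breaker bounding the loop.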
## Project Structure

```
teambuild/
├── CLAUDE.md                        # Agent entry point — read this first
├── HANDOFF.md                       # THIS FILE — current state and next task
├── design-doc.md                    # System design with Mermaid diagrams
├── requirements.md                  # Standalone requirements spec from the interview
├── interview-agent-synthesis.md     # Clarity-interviewer + TeamBuild synthesis
├── base-plan.md                     # Original plan (superseded by requirements.md)
├── initial-build-ticket-idea.md     # Original inspiration doc
├── conversation-logs/
│   ├── 001-initial-interview.md     # Full interview transcript + metrics
│   ├── 002-design-doc.md            # Research review + design doc creation
│   └── 003-integration-model.md     # Integration model + project understanding + identity resolution
├── research/
│   ├── code-review/                 # GitHub Actions AI reviewer (ai-reviewer.mjs)
│   ├── early-research/              # Early exploration, Pydantic AI POC, skills/schemas
│   ├── hooks/                       # Development hooks for quality control
│   └── opus_first_ideas/            # Opus dry run — agent roles, lifecycle phases
└── current_ai/                      # Josh's current Claude Code setup
```
impx-0.0.1/PKG-INFO
ADDED
@@ -0,0 +1,27 @@
Metadata-Version: 2.4
Name: impx
Version: 0.0.1
Summary: AI-powered engineering workflow framework — planning, validation, review, metrics.
Project-URL: Homepage, https://github.com/halljoshr/imp
Project-URL: Repository, https://github.com/halljoshr/imp
Author: Josh Hall
License-Expression: MIT
Keywords: ai,engineering,metrics,review,validation,workflow
Classifier: Development Status :: 1 - Planning
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Build Tools
Classifier: Topic :: Software Development :: Quality Assurance
Requires-Python: >=3.12
Description-Content-Type: text/markdown

# imp

AI-powered engineering workflow framework — planning, validation, review, metrics.

Code execution is a pluggable slot for any agent runtime. Imp is the foreman, not the carpenter.

**Status:** Early development. Name claim release.
impx-0.0.1/README.md
ADDED

impx-0.0.1/base-plan.md
ADDED
@@ -0,0 +1,46 @@
# Goals (Early)
We want to put together a base plan that decides the order in which we build and test the skills and commands. We'll build each one individually so we can put it through the wringer a bit, then use that testing to improve the project before moving on to the next stage, almost like we're building it live in real time. That way we can use and test it on many projects, creating examples and research material that improve the project in the future.
So the first couple of agents we need to build are the research agent and the interview agent. The research agent should be good at web searches and at reading documentation for the different kinds of projects and software we need to integrate into our code. The interview agent should use best interviewing practices for talking to the people who will actually use the tool: how they would want it built, which features matter most, and which matter least. It should run a live interviewing process where it asks me questions and bases each next question on my responses to the previous ones.
Once we have those, we can test them by using them to build the other agents, or at least a couple of them. Then we can iterate on and improve these two while building out each of the new agents, commands, processes, connections, and integrations we need.
After those first two agents are built, the first thing we'll want to do is set up the Plane integration so we can create plans, tickets, and projects in our project management software, building that integration in from the ground up. Make sure it is modular enough to work with Linear as well as Plane, because one is for work and one is for personal projects.
Then I believe the next thing we should build is performance metrics for the AI agents. This is a subset of the overall performance metrics we want for any project, but I really need to understand the costs of building (tokens, tasks, actual dollar costs, the different models being used) as early as possible in this project, so we can study it for future projects.
Note that I am not married to PIV or Swarm or anything like that. We are building this as a new project with new research; I'm just giving you all the information I currently have so we can make the best design decisions for this project, especially since Claude agent teams just came out.
One of the first things I think we should actually do is build a design doc or design plan of some sort. So let's walk through that and build it in Mermaid as best we can as a starting place; that will be the perfect first stage before we move on to building the first couple of agents.
You tell me if you think we should do the interview or the design doc phase first.
Make sure to keep a running log of all the conversations we have, even if you're just numbering them in order as markdown files in a conversation-logs directory. We need to make sure we always do that before we exit a conversation.
## Background Information
@initial-build-ticket-idea.md

This is my original build ticket idea, and it's basically the inspiration for this whole project. It wasn't planned out over the course of an hour or two, so it isn't at the level of detail we need to reach, but it does show the overarching idea and the plan of things I think we want to build.
@research/

This folder is where all of my previous research for this style of project lives. These are just ideas, feedback, and possible things we could implement. They are not set in stone; they are building blocks for us to start building the perfect process.
@research/opus_first_ideas/

This was a dry run of the idea that Opus built out. It is definitely not expansive enough, but it captures first thoughts in the direction of what could be built, so we should take a peek at it and maybe use it as inspiration.
@research/hooks/

I have previously attempted to build some hooks into our workflow to keep everything on track and avoid the kinds of issues (errors, bad commit messages, and the like) that can really derail a project if you don't keep everything clean. We should look at this as inspiration for managing projects as well as we can.
@research/early-research/

This is the original research behind all of the above, the work that led to this project. Nothing in it is finalized; it is just a read-through of ideas for building an all-encompassing system. It covers specific software and folder-structure requirements we might want to consider, feedback loops, and more. If you're looking to understand the inspiration beyond the initial build ticket idea, this is where you'll find it.
@current_ai/

This is my current Claude setup on most of my computers. It definitely needs to be adapted to better fit what we're trying to do, but it is a starting place. We don't have to keep any of the skills; they're just placeholders for things we might want to model from, build, or improve.
Remember that I want to be provider agnostic, so I want to get away from relying on Claude too heavily. Obviously at the start we can lean on it as much as we need, but the goal is to move completely away from depending on Claude, so that we can use any models or providers that appear in the future and might be better at whatever we happen to be working on.