fraim-framework 2.0.26 → 2.0.30

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (104)
  1. package/.github/workflows/deploy-fraim.yml +1 -1
  2. package/dist/registry/scripts/build-scripts-generator.js +205 -0
  3. package/dist/registry/scripts/cleanup-branch.js +258 -0
  4. package/dist/registry/scripts/evaluate-code-quality.js +66 -0
  5. package/dist/registry/scripts/exec-with-timeout.js +142 -0
  6. package/dist/registry/scripts/fraim-config.js +61 -0
  7. package/dist/registry/scripts/generate-engagement-emails.js +630 -0
  8. package/dist/registry/scripts/generic-issues-api.js +100 -0
  9. package/dist/registry/scripts/newsletter-helpers.js +731 -0
  10. package/dist/registry/scripts/openapi-generator.js +664 -0
  11. package/dist/registry/scripts/performance/profile-server.js +390 -0
  12. package/dist/registry/scripts/run-thank-you-workflow.js +92 -0
  13. package/dist/registry/scripts/send-newsletter-simple.js +85 -0
  14. package/dist/registry/scripts/send-thank-you-emails.js +54 -0
  15. package/dist/registry/scripts/validate-openapi-limits.js +311 -0
  16. package/dist/registry/scripts/validate-test-coverage.js +262 -0
  17. package/dist/registry/scripts/verify-test-coverage.js +66 -0
  18. package/dist/src/cli/commands/init.js +14 -12
  19. package/dist/src/cli/commands/sync.js +19 -2
  20. package/dist/src/cli/fraim.js +24 -22
  21. package/dist/src/cli/setup/first-run.js +13 -6
  22. package/dist/src/fraim/config-loader.js +0 -8
  23. package/dist/src/fraim/db-service.js +26 -15
  24. package/dist/src/fraim/issues.js +67 -0
  25. package/dist/src/fraim/setup-wizard.js +1 -69
  26. package/dist/src/fraim/types.js +0 -11
  27. package/dist/src/fraim-mcp-server.js +272 -18
  28. package/dist/src/utils/git-utils.js +1 -1
  29. package/dist/src/utils/version-utils.js +32 -0
  30. package/dist/tests/debug-tools.js +79 -0
  31. package/dist/tests/esm-compat.js +11 -0
  32. package/dist/tests/test-chalk-esm-issue.js +159 -0
  33. package/dist/tests/test-chalk-real-world.js +265 -0
  34. package/dist/tests/test-chalk-regression.js +327 -0
  35. package/dist/tests/test-chalk-resolution-issue.js +304 -0
  36. package/dist/tests/test-cli.js +0 -2
  37. package/dist/tests/test-fraim-install-chalk-issue.js +254 -0
  38. package/dist/tests/test-fraim-issues.js +59 -0
  39. package/dist/tests/test-genericization.js +1 -3
  40. package/dist/tests/test-mcp-connection.js +166 -0
  41. package/dist/tests/test-mcp-issue-integration.js +144 -0
  42. package/dist/tests/test-mcp-lifecycle-methods.js +312 -0
  43. package/dist/tests/test-node-compatibility.js +71 -0
  44. package/dist/tests/test-npm-install.js +66 -0
  45. package/dist/tests/test-npm-resolution-diagnostic.js +140 -0
  46. package/dist/tests/test-session-rehydration.js +145 -0
  47. package/dist/tests/test-standalone.js +2 -8
  48. package/dist/tests/test-sync-version-update.js +93 -0
  49. package/dist/tests/test-telemetry.js +190 -0
  50. package/package.json +10 -8
  51. package/registry/agent-guardrails.md +62 -54
  52. package/registry/rules/agent-success-criteria.md +52 -0
  53. package/registry/rules/agent-testing-guidelines.md +502 -502
  54. package/registry/rules/communication.md +121 -121
  55. package/registry/rules/continuous-learning.md +54 -54
  56. package/registry/rules/ephemeral-execution.md +10 -5
  57. package/registry/rules/hitl-ppe-record-analysis.md +302 -302
  58. package/registry/rules/local-development.md +251 -251
  59. package/registry/rules/software-development-lifecycle.md +104 -104
  60. package/registry/rules/successful-debugging-patterns.md +482 -478
  61. package/registry/rules/telemetry.md +67 -0
  62. package/registry/scripts/build-scripts-generator.ts +216 -215
  63. package/registry/scripts/cleanup-branch.ts +303 -284
  64. package/registry/scripts/code-quality-check.sh +559 -559
  65. package/registry/scripts/detect-tautological-tests.sh +38 -38
  66. package/registry/scripts/evaluate-code-quality.ts +1 -1
  67. package/registry/scripts/generate-engagement-emails.ts +744 -744
  68. package/registry/scripts/generic-issues-api.ts +110 -150
  69. package/registry/scripts/newsletter-helpers.ts +874 -874
  70. package/registry/scripts/openapi-generator.ts +695 -693
  71. package/registry/scripts/performance/profile-server.ts +5 -3
  72. package/registry/scripts/prep-issue.sh +468 -455
  73. package/registry/scripts/validate-openapi-limits.ts +366 -365
  74. package/registry/scripts/validate-test-coverage.ts +280 -280
  75. package/registry/scripts/verify-pr-comments.sh +70 -70
  76. package/registry/scripts/verify-test-coverage.ts +1 -1
  77. package/registry/templates/bootstrap/ARCHITECTURE-TEMPLATE.md +53 -53
  78. package/registry/templates/evidence/Implementation-BugEvidence.md +85 -85
  79. package/registry/templates/evidence/Implementation-FeatureEvidence.md +120 -120
  80. package/registry/templates/marketing/HBR-ARTICLE-TEMPLATE.md +66 -0
  81. package/registry/workflows/bootstrap/create-architecture.md +2 -2
  82. package/registry/workflows/bootstrap/evaluate-code-quality.md +3 -3
  83. package/registry/workflows/bootstrap/verify-test-coverage.md +2 -2
  84. package/registry/workflows/customer-development/insight-analysis.md +156 -156
  85. package/registry/workflows/customer-development/interview-preparation.md +421 -421
  86. package/registry/workflows/customer-development/strategic-brainstorming.md +146 -146
  87. package/registry/workflows/customer-development/thank-customers.md +193 -191
  88. package/registry/workflows/customer-development/weekly-newsletter.md +362 -352
  89. package/registry/workflows/improve-fraim/contribute.md +32 -0
  90. package/registry/workflows/improve-fraim/file-issue.md +32 -0
  91. package/registry/workflows/marketing/hbr-article.md +73 -0
  92. package/registry/workflows/performance/analyze-performance.md +63 -59
  93. package/registry/workflows/product-building/design.md +3 -2
  94. package/registry/workflows/product-building/implement.md +4 -3
  95. package/registry/workflows/product-building/prep-issue.md +28 -17
  96. package/registry/workflows/product-building/resolve.md +3 -2
  97. package/registry/workflows/product-building/retrospect.md +3 -2
  98. package/registry/workflows/product-building/spec.md +5 -4
  99. package/registry/workflows/product-building/test.md +3 -2
  100. package/registry/workflows/quality-assurance/iterative-improvement-cycle.md +562 -562
  101. package/registry/workflows/replicate/website-discovery-analysis.md +3 -3
  102. package/registry/workflows/reviewer/review-implementation-vs-design-spec.md +632 -632
  103. package/registry/workflows/reviewer/review-implementation-vs-feature-spec.md +669 -669
  104. package/tsconfig.json +2 -1
--- package/registry/workflows/customer-development/strategic-brainstorming.md
+++ package/registry/workflows/customer-development/strategic-brainstorming.md
@@ -1,146 +1,146 @@
 # Strategic Brainstorming Workflow
 
 ## INTENT
 To help Product Managers brainstorm and validate their customer development strategy through deep thinking, assumption challenging, and market research to identify the right customer segments and validation questions.
 
 ## PRINCIPLES
 - **Assumption Challenging**: Question PM's assumptions and explore alternatives
 - **Deep Research**: Use market research to validate/invalidate assumptions
 - **Strategic Thinking**: Help PM think through customer segments and priorities
 - **Actionable Output**: Provide clear recommendations for customer interviews
 
 ## WORKFLOW TRIGGER
 **PM explicitly starts this workflow**:
 - "Starting Strategic Brainstorming"
 - "I need to brainstorm our customer development strategy"
 - "Following the strategic brainstorming workflow"
 
 ## AI AGENT PROCESS
 
 ### Step 1: Deep Context Gathering
 **AI Agent First Actions**:
 1. **Read Business Context**
    - Read through `docs/business/` folder for company vision, strategy, and market positioning
    - Review existing business documents and market analysis
    - Understand current product positioning and competitive landscape
 
 2. **Read Customer Insights**
    - Review `docs/customer-insights/` folder for existing customer research
    - Analyze previous customer interviews and feedback
    - Identify patterns in customer pain points and needs
 
 3. **Analyze Codebase Context**
    - Review current product features and capabilities
    - Understand technical architecture and constraints
    - Identify areas where customer feedback could inform development
 
 **PM provides unstructured input**:
 - Raw thoughts about current product state
 - Initial assumptions and hypotheses
 - Unstructured priorities and concerns
 - Any existing customer insights
 - Competitive concerns or opportunities
 
 **AI Agent Response**:
 - Acknowledge the brainstorming session with deep context understanding
 - Reference specific insights from business docs and customer research
 - Ask probing questions informed by existing knowledge
 - Identify key assumptions to explore based on current context
 
 ### Step 2: Assumption Challenging & Exploration
 **AI Agent Brainstorming Process**:
 1. **Challenge Assumptions**
    - "What makes you think [assumption] is true?"
    - "Have you considered alternative perspectives?"
    - "What evidence do you have for this assumption?"
 
 2. **Explore Alternatives**
    - Suggest different customer segments
    - Propose alternative problem definitions
    - Consider different solution approaches
 
 3. **Identify Gaps**
    - What assumptions need validation?
    - What customer segments are missing?
    - What competitive threats exist?
 
 ### Step 3: Market Research & Validation
 **AI Agent Research Process**:
 1. **Market Research**
    - Research industry trends and customer pain points
    - Analyze competitive landscape
    - Identify potential customer personas
 
 2. **Assumption Validation**
    - Find evidence for/against assumptions
    - Identify similar companies and their approaches
    - Research customer segment characteristics
 
 3. **Opportunity Analysis**
    - Identify market gaps
    - Assess segment attractiveness
    - Evaluate competitive positioning
 
 ### Step 4: Strategic Recommendations
 **AI Agent Output**:
 - **Customer Segment Prioritization**: Ranked list of segments to interview
 - **Assumption Validation Map**: Key assumptions mapped to validation questions
 - **Interview Sequence**: Recommended order and rationale
 - **Risk Mitigation**: Potential risks and mitigation strategies
 
 ### Step 5: Document Creation
 **AI Agent Final Action**:
 - Create strategic recommendations document in `docs/customer-insights/`
 - Use intuitive naming based on context (e.g., `ashley-market-niche-strategy.md`)
 - Include all research findings and recommendations
 - Provide actionable next steps for PM
 - Save document for future reference and team sharing
 
 ## OUTPUT TEMPLATE
 
 ### Strategic Recommendations Document
-**Template**: `.ai-agents/templates/customer-development/strategic-recommendations-template.md`
+**Template**: Retrieve via `get_fraim_file({ path: "templates/customer-development/strategic-recommendations-template.md" })`
 **File Location**: `docs/customer-development/[contextual-name].md` (e.g., `ashley-market-niche-strategy.md`)
 
 **Process**:
 1. Use the strategic recommendations template
 2. Fill in all sections with research findings and recommendations
 3. Save with contextual naming based on strategic focus
 4. Include specific customer names and companies when available
 5. Reference existing business documents and customer insights
 
 ## EXAMPLES
 
 ### Good: Strategic Brainstorming Session
 ```
 PM: "Starting Strategic Brainstorming. We're building Ashley, an AI executive assistant. I think enterprise execs want AI to handle calendar conflicts automatically, but I'm not sure if that's right."
 
 AI Agent: "Perfect! I've reviewed your business docs and customer insights. I can see from your WhyAshley document that you're positioning Ashley as 'Your Agent for Time, Priorities, and Human Potential' and targeting enterprise executives. I also noticed from your existing customer interviews with Ronil, Sanjoy, and Olga that calendar conflict resolution came up as a key pain point across all three customers.
 
 However, I'm curious about your 'automatic' assumption. From your existing customer insights, I see that trust and delegation control were major themes - Ronil mentioned 'Don't have trust in tech to read email - need to have hands in it' and Sanjoy was concerned about 'Disturb during highly important meetings'. What makes you think they want it 'automatic' vs 'suggested'?"
 
 PM: "Good point. I guess I'm assuming automatic because that's what would save them the most time."
 
 AI Agent: "That's a reasonable assumption, but based on your existing customer feedback, there's a clear pattern of wanting control and oversight. Ronil mentioned wanting to 'observe for 2 weeks, then take action' and Olga talked about how good EAs 'enforce workback plans based on business priorities' rather than just doing what they're asked. What if we validated this by asking about their current delegation patterns and comfort levels with AI autonomy?"
 
 [Brainstorming continues with deep context...]
 
 AI Agent Output: Strategic recommendations document informed by existing business context and customer insights
 ```
 
 ### Bad: Surface-Level Brainstorming
 ```
 PM: "We're building Ashley, an AI executive assistant. Who should we interview?"
 
 AI Agent: "You should interview enterprise executives."
 
 PM: "Okay, thanks."
 
 Result: No deep thinking, no assumption validation, no strategic recommendations
 ```
 
 ## SUCCESS METRICS
 - **Assumption Validation**: 80% of key assumptions have clear validation questions
 - **Customer Segment Clarity**: Clear rationale for segment prioritization
 - **Interview Readiness**: PM has clear next steps for customer interviews
 - **Risk Awareness**: Potential risks identified and mitigated