@crewpilot/agent 2.0.0 → 3.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (27)
  1. package/README.md +131 -131
  2. package/dist-npm/cli.js +5 -5
  3. package/dist-npm/index.js +100 -100
  4. package/package.json +69 -69
  5. package/prompts/agent.md +282 -282
  6. package/prompts/copilot-instructions.md +36 -36
  7. package/prompts/{catalyst.config.json → crewpilot.config.json} +72 -72
  8. package/prompts/skills/assure-code-quality/SKILL.md +112 -112
  9. package/prompts/skills/assure-pr-intelligence/SKILL.md +148 -148
  10. package/prompts/skills/assure-review-functional/SKILL.md +114 -114
  11. package/prompts/skills/assure-review-standards/SKILL.md +106 -106
  12. package/prompts/skills/assure-threat-model/SKILL.md +182 -182
  13. package/prompts/skills/assure-vulnerability-scan/SKILL.md +146 -146
  14. package/prompts/skills/autopilot-meeting/SKILL.md +434 -434
  15. package/prompts/skills/autopilot-worker/SKILL.md +737 -737
  16. package/prompts/skills/daily-digest/SKILL.md +188 -188
  17. package/prompts/skills/deliver-change-management/SKILL.md +132 -132
  18. package/prompts/skills/deliver-deploy-guard/SKILL.md +144 -144
  19. package/prompts/skills/deliver-doc-governance/SKILL.md +130 -130
  20. package/prompts/skills/engineer-feature-builder/SKILL.md +270 -270
  21. package/prompts/skills/engineer-root-cause-analysis/SKILL.md +150 -150
  22. package/prompts/skills/engineer-test-first/SKILL.md +148 -148
  23. package/prompts/skills/insights-knowledge-base/SKILL.md +202 -202
  24. package/prompts/skills/insights-pattern-detection/SKILL.md +142 -142
  25. package/prompts/skills/strategize-architecture-planner/SKILL.md +141 -141
  26. package/prompts/skills/strategize-solution-design/SKILL.md +118 -118
  27. package/scripts/postinstall.js +108 -108
package/prompts/skills/assure-review-standards/SKILL.md
@@ -1,106 +1,106 @@
- # Code Review — Standards & Conventions
-
- > **Pillar**: Assure | **ID**: `assure-review-standards`
-
- ## Purpose
-
- Focused code review that evaluates **coding standards, naming conventions, test patterns, and consistency** with the existing codebase. Separated from functional review so each can be delegated to a specialized subagent or run independently.
-
- ## Activation Triggers
-
- - "standards review", "conventions check", "consistency review", "does this match our style"
- - Automatically invoked by autopilot-worker Phase 6 via subagent delegation (role: `standards-reviewer`)
- - Can be run standalone for targeted reviews
-
- ## Methodology
-
- ### Step 1 — Discover Codebase Conventions
-
- Before reviewing, establish the project's conventions by scanning:
- 1. **Naming**: variable/function/class naming style (camelCase, snake_case, PascalCase)
- 2. **File structure**: directory layout, module organization, barrel exports
- 3. **Error handling**: how errors are thrown/caught/logged (Result types? try/catch? error codes?)
- 4. **Test patterns**: test framework, file naming (`*.test.ts` vs `*.spec.ts`), describe/it structure, setup/teardown
- 5. **Import style**: absolute vs relative, barrel imports, import ordering
- 6. **Type patterns**: explicit types vs inference, use of `any`, union types vs enums
-
- Read `.editorconfig`, `.eslintrc`, `tsconfig.json`, or similar config files if they exist.
-
- ### Step 2 — Convention Compliance Check
-
- For each changed file, check against the discovered conventions:
-
- | Category | What to Check |
- |----------|---------------|
- | **Naming** | Functions, variables, types, files match project style |
- | **Structure** | New files placed in correct directory, exports follow project pattern |
- | **Error handling** | Matches project's error handling style (not just "has error handling") |
- | **Tests** | Test file structure mirrors source, uses same describe/it/expect patterns |
- | **Types** | Follows project's type annotation style (strict types vs inference) |
- | **Imports** | Import ordering, relative vs absolute paths, no circular imports |
- | **Comments** | JSDoc where project uses JSDoc, no commented-out code |
-
- ### Step 3 — Consistency Analysis
-
- 1. Compare the diff against the 5 nearest files in the same directory
- 2. Flag any deviation from the local style (even if technically valid)
- 3. Check for copy-paste code that should be extracted
- 4. Verify new code follows the same patterns as existing code in the same module
-
- ### Step 4 — Pattern Detection Integration
-
- 1. Query `catalyst_knowledge_search` (type: `pattern`) for known conventions and anti-patterns
- 2. Check if any flagged deviation is a **repeat offense** from past reviews
- 3. If repeat offense found, flag prominently:
- ```
- ⚠️ Recurring Convention Violation: {description}
- Previously flagged in: {previous context}
- Suggestion: Consider adding a lint rule or pre-commit hook.
- ```
-
- ### Synthesis
-
- 1. Categorize findings: `convention-violation | inconsistency | repeat-offense | suggestion`
- 2. Filter by confidence threshold
- 3. Group by category
- 4. If invoked as subagent, write output as artifact via `catalyst_artifact_write` (phase: `review-standards`)
-
- ## Tools Required
-
- - `catalyst_knowledge_search` — Query known patterns and past convention violations
- - `catalyst_artifact_write` — Persist review findings as artifact (when run as subagent)
- - `catalyst_artifact_read` — Read prior analysis artifacts for context
-
- ## Output Format
-
- ```
- ## [Catalyst → Standards Review]
-
- ### Summary
- {N} findings across {files}: {violations} violations, {inconsistencies} inconsistencies, {repeat} repeat offenses
-
- ### Convention Violations
- | Category | File:Line | Convention | Violation | Fix |
- |----------|-----------|------------|-----------|-----|
- | ... | ... | ... | ... | ... |
-
- ### Inconsistencies
- | File:Line | Expected Pattern | Actual | Nearest Example |
- |-----------|------------------|--------|-----------------|
- | ... | ... | ... | ... |
-
- ### Repeat Offenses
- | Issue | Previous Occurrence | Suggestion |
- |-------|---------------------|------------|
- | ... | ... | ... |
-
- ### Verdict
- {PASS | PASS_WITH_WARNINGS | FAIL}
- Confidence: {N}/10
- ```
-
- ## Chains To
-
- - `assure-review-functional` — Companion skill for correctness/security/performance review
- - `assure-code-quality` — Full 4-pass review (superset of this skill)
- - `insights-pattern-detection` — Deep codebase-wide pattern analysis
+ # Code Review — Standards & Conventions
+
+ > **Pillar**: Assure | **ID**: `assure-review-standards`
+
+ ## Purpose
+
+ Focused code review that evaluates **coding standards, naming conventions, test patterns, and consistency** with the existing codebase. Separated from functional review so each can be delegated to a specialized subagent or run independently.
+
+ ## Activation Triggers
+
+ - "standards review", "conventions check", "consistency review", "does this match our style"
+ - Automatically invoked by autopilot-worker Phase 6 via subagent delegation (role: `standards-reviewer`)
+ - Can be run standalone for targeted reviews
+
+ ## Methodology
+
+ ### Step 1 — Discover Codebase Conventions
+
+ Before reviewing, establish the project's conventions by scanning:
+ 1. **Naming**: variable/function/class naming style (camelCase, snake_case, PascalCase)
+ 2. **File structure**: directory layout, module organization, barrel exports
+ 3. **Error handling**: how errors are thrown/caught/logged (Result types? try/catch? error codes?)
+ 4. **Test patterns**: test framework, file naming (`*.test.ts` vs `*.spec.ts`), describe/it structure, setup/teardown
+ 5. **Import style**: absolute vs relative, barrel imports, import ordering
+ 6. **Type patterns**: explicit types vs inference, use of `any`, union types vs enums
+
+ Read `.editorconfig`, `.eslintrc`, `tsconfig.json`, or similar config files if they exist.
+
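A minimal sketch, in the package's own TypeScript, of how Step 1's naming-style detection could work; the helper names and regex heuristics below are illustrative assumptions, not code shipped in 3.0.0:

```ts
// Illustrative only: infer the project's dominant identifier style so later
// steps can flag deviations from it.
type NamingStyle = "camelCase" | "snake_case" | "PascalCase" | "unknown";

function classify(name: string): NamingStyle {
  if (/^[a-z]+(?:[A-Z][a-z0-9]*)+$/.test(name)) return "camelCase";
  if (/^[a-z0-9]+(?:_[a-z0-9]+)+$/.test(name)) return "snake_case";
  if (/^[A-Z][a-zA-Z0-9]*$/.test(name)) return "PascalCase";
  return "unknown";
}

// Tally identifiers scraped from neighboring source files and pick the most
// common style as the convention to review against.
function dominantStyle(identifiers: string[]): NamingStyle {
  const counts = new Map<NamingStyle, number>();
  for (const id of identifiers) {
    const s = classify(id);
    if (s !== "unknown") counts.set(s, (counts.get(s) ?? 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1])[0]?.[0] ?? "unknown";
}

// dominantStyle(["getUser", "parseConfig", "user_id"]) → "camelCase"
```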
+ ### Step 2 — Convention Compliance Check
+
+ For each changed file, check against the discovered conventions:
+
+ | Category | What to Check |
+ |----------|---------------|
+ | **Naming** | Functions, variables, types, files match project style |
+ | **Structure** | New files placed in correct directory, exports follow project pattern |
+ | **Error handling** | Matches project's error handling style (not just "has error handling") |
+ | **Tests** | Test file structure mirrors source, uses same describe/it/expect patterns |
+ | **Types** | Follows project's type annotation style (strict types vs inference) |
+ | **Imports** | Import ordering, relative vs absolute paths, no circular imports |
+ | **Comments** | JSDoc where project uses JSDoc, no commented-out code |
+
+ ### Step 3 — Consistency Analysis
+
+ 1. Compare the diff against the 5 nearest files in the same directory
+ 2. Flag any deviation from the local style (even if technically valid)
+ 3. Check for copy-paste code that should be extracted
+ 4. Verify new code follows the same patterns as existing code in the same module
+
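A small sketch of how the "nearest files" in step 1 of the consistency analysis might be selected; the same-extension-sibling heuristic is an assumption, since the skill only says "the 5 nearest files in the same directory":

```ts
// Pick up to five sibling files sharing the changed file's extension as the
// local style baseline. Heuristic and function name are illustrative.
import { readdirSync } from "node:fs";
import { basename, dirname, extname, join } from "node:path";

function nearestFiles(changedFile: string, limit = 5): string[] {
  const dir = dirname(changedFile);
  const ext = extname(changedFile);
  return readdirSync(dir)
    .filter((f) => extname(f) === ext && f !== basename(changedFile))
    .sort()
    .slice(0, limit)
    .map((f) => join(dir, f));
}
```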
+ ### Step 4 — Pattern Detection Integration
+
+ 1. Query `crewpilot_knowledge_search` (type: `pattern`) for known conventions and anti-patterns
+ 2. Check if any flagged deviation is a **repeat offense** from past reviews
+ 3. If repeat offense found, flag prominently:
+ ```
+ ⚠️ Recurring Convention Violation: {description}
+ Previously flagged in: {previous context}
+ Suggestion: Consider adding a lint rule or pre-commit hook.
+ ```
+
+ ### Synthesis
+
+ 1. Categorize findings: `convention-violation | inconsistency | repeat-offense | suggestion`
+ 2. Filter by confidence threshold
+ 3. Group by category
+ 4. If invoked as subagent, write output as artifact via `crewpilot_artifact_write` (phase: `review-standards`)
+
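A sketch of the Synthesis step as a pure function; the `Finding` shape and the 0.6 confidence threshold are illustrative assumptions:

```ts
// Drop low-confidence findings and group the rest by the categories listed in
// step 1 of the Synthesis.
type Category = "convention-violation" | "inconsistency" | "repeat-offense" | "suggestion";

interface Finding {
  category: Category;
  file: string;
  line: number;
  message: string;
  confidence: number; // 0..1
}

function synthesize(findings: Finding[], threshold = 0.6): Map<Category, Finding[]> {
  const grouped = new Map<Category, Finding[]>();
  for (const f of findings) {
    if (f.confidence < threshold) continue;   // filter by confidence threshold
    const bucket = grouped.get(f.category) ?? [];
    bucket.push(f);
    grouped.set(f.category, bucket);          // group by category
  }
  return grouped;
}
// When invoked as a subagent, the grouped findings would then be written out via
// crewpilot_artifact_write (phase: review-standards), as step 4 describes.
```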
+ ## Tools Required
+
+ - `crewpilot_knowledge_search` — Query known patterns and past convention violations
+ - `crewpilot_artifact_write` — Persist review findings as artifact (when run as subagent)
+ - `crewpilot_artifact_read` — Read prior analysis artifacts for context
+
+ ## Output Format
+
+ ```
+ ## [CrewPilot → Standards Review]
+
+ ### Summary
+ {N} findings across {files}: {violations} violations, {inconsistencies} inconsistencies, {repeat} repeat offenses
+
+ ### Convention Violations
+ | Category | File:Line | Convention | Violation | Fix |
+ |----------|-----------|------------|-----------|-----|
+ | ... | ... | ... | ... | ... |
+
+ ### Inconsistencies
+ | File:Line | Expected Pattern | Actual | Nearest Example |
+ |-----------|------------------|--------|-----------------|
+ | ... | ... | ... | ... |
+
+ ### Repeat Offenses
+ | Issue | Previous Occurrence | Suggestion |
+ |-------|---------------------|------------|
+ | ... | ... | ... |
+
+ ### Verdict
+ {PASS | PASS_WITH_WARNINGS | FAIL}
+ Confidence: {N}/10
+ ```
+
+ ## Chains To
+
+ - `assure-review-functional` — Companion skill for correctness/security/performance review
+ - `assure-code-quality` — Full 4-pass review (superset of this skill)
+ - `insights-pattern-detection` — Deep codebase-wide pattern analysis
package/prompts/skills/assure-threat-model/SKILL.md
@@ -1,182 +1,182 @@
- # Threat Model — STRIDE
-
- > **Pillar**: Assure | **ID**: `assure-threat-model`
-
- ## Purpose
-
- Systematic threat modeling using the STRIDE framework. Identifies threats across Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. Produces a threat register with risk scores and mitigations that informs design decisions and security reviews.
-
- ## Activation Triggers
-
- - "threat model", "stride", "threat analysis", "security architecture"
- - "what could go wrong", "attack vectors", "threat register"
- - Label-gated: automatically invoked by autopilot-worker Phase 2.5d when `needs-threat-model` or `security-sensitive` label detected
- - Routed from `security-auditor` subagent role for architecture-level security analysis
-
- ## Methodology
-
- ### Process Flow
-
- ```dot
- digraph threat_model {
-   rankdir=TB;
-   node [shape=box];
-
-   scope [label="Phase 1\nScope & Data Flow"];
-   decompose [label="Phase 2\nComponent Decomposition"];
-   stride [label="Phase 3\nSTRIDE Analysis"];
-   risk [label="Phase 4\nRisk Assessment"];
-   mitigate [label="Phase 5\nMitigation Planning"];
-   register [label="Phase 6\nThreat Register", shape=doublecircle];
-
-   scope -> decompose;
-   decompose -> stride;
-   stride -> risk;
-   risk -> mitigate;
-   mitigate -> register;
- }
- ```
-
- ### Phase 1 — Scope & Data Flow
-
- 1. Define the system boundary — what's being threat-modeled (entire system, single feature, or API surface)
- 2. Identify actors: end users, admins, external services, background jobs
- 3. Map data flows:
-    - User input → processing → storage → output
-    - Service-to-service communication
-    - External API calls
- 4. Identify trust boundaries:
-    - Authenticated vs unauthenticated zones
-    - Internal vs external network
-    - Client-side vs server-side
-    - Different privilege levels
- 5. **(Optional) Fetch security context from M365**: If `mcp_workiq_ask_work_iq` is available, query for relevant compliance and security context:
-    - Call `mcp_workiq_accept_eula` with `eulaUrl: "https://github.com/microsoft/work-iq-mcp"` (idempotent)
-    - **Compliance requirements**: `mcp_workiq_ask_work_iq` → "What compliance requirements, security policies, or regulatory constraints apply to {system/feature}? Check emails, docs, and Teams messages."
-    - **Past security discussions**: `mcp_workiq_ask_work_iq` → "What security concerns or vulnerabilities have been discussed about {system/feature} in recent emails and meetings?"
-    - **Architecture decisions**: `mcp_workiq_ask_work_iq` → "What architecture or security design decisions were made about {system/feature} in meetings or design docs?"
-    - Feed this context into the STRIDE analysis to ensure threats are evaluated against the organization's actual compliance posture and known security concerns.
-    - If unavailable, proceed without — the threat model works from code analysis alone.
-
- ### Phase 2 — Component Decomposition
-
- 1. List each component in the data flow:
-    - Frontend (SPA, mobile app, CLI)
-    - API gateway / load balancer
-    - Application server(s)
-    - Database(s)
-    - Cache layer
-    - Message queue / event bus
-    - External services / third-party APIs
-    - File storage / CDN
- 2. For each component, note:
-    - Technology stack
-    - Authentication mechanism
-    - Data stored/processed
-    - Network exposure (public, internal, VPN)
-
- ### Phase 3 — STRIDE Analysis
-
- For each component and each data flow crossing a trust boundary, evaluate all six STRIDE categories:
-
- | Category | Threat | Key Questions |
- |----------|--------|---------------|
- | **S**poofing | Identity impersonation | Can an attacker pretend to be another user/service? Is authentication enforced at every entry point? Are tokens/sessions properly validated? |
- | **T**ampering | Data modification | Can data be modified in transit or at rest? Are inputs validated? Is there integrity checking (HMAC, checksums)? Can request parameters be manipulated? |
- | **R**epudiation | Deniability of actions | Are actions logged with sufficient detail? Can a user deny performing an action? Are audit logs tamper-proof? |
- | **I**nformation Disclosure | Data exposure | Can sensitive data leak through error messages, logs, API responses, or side channels? Is PII/secrets encrypted at rest and in transit? |
- | **D**enial of Service | Availability threats | Are there rate limits? Can a single request exhaust resources (memory, CPU, disk)? Are there circuit breakers? Can an attacker trigger expensive operations? |
- | **E**levation of Privilege | Unauthorized access | Can a regular user access admin functions? Are authorization checks at every layer (not just frontend)? Can parameters be manipulated to bypass access controls? |
-
- ### Phase 4 — Risk Assessment
-
- For each identified threat, assess:
-
- 1. **Likelihood** (1-5): How easy is this to exploit?
-    - 1 = Requires deep insider knowledge + sophisticated tools
-    - 3 = Moderately skilled attacker with publicly available tools
-    - 5 = Trivial exploitation, automated scanners can find it
- 2. **Impact** (1-5): What's the damage if exploited?
-    - 1 = Minor inconvenience, no data loss
-    - 3 = Service disruption, limited data exposure
-    - 5 = Full data breach, system compromise, regulatory impact
- 3. **Risk Score** = Likelihood × Impact (1-25)
-    - 1-6: Low → Accept or monitor
-    - 7-14: Medium → Mitigate within normal development
-    - 15-25: High/Critical → Block release until mitigated
-
- ### Phase 5 — Mitigation Planning
-
- For each threat with risk score ≥ 7:
-
- 1. Propose a specific mitigation (not generic "add security")
- 2. Classify the mitigation:
-    - **Prevent**: Eliminate the threat entirely (e.g., parameterized queries for SQLi)
-    - **Detect**: Monitor and alert (e.g., anomaly detection for DoS)
-    - **Respond**: Limit damage (e.g., circuit breakers, rate limits)
-    - **Transfer**: Shift risk (e.g., use managed service with SLA)
- 3. Estimate implementation effort: Low / Medium / High
- 4. Identify which phase of the worker pipeline should implement the mitigation:
-    - Phase 4 (Implementation): Code-level fixes
-    - Phase 5 (Change Mgmt): Configuration changes
-    - Phase 7 (Deploy Guard): Operational checks
-
- ### Phase 6 — Threat Register
-
- Compile all findings into a structured threat register and:
- 1. Store via `catalyst_knowledge_store` (type: `threat-model`) for future reference
- 2. Write as artifact via `catalyst_artifact_write` (phase: `threat-model`)
- 3. Feed high-risk items into the Phase 3 plan as mandatory implementation steps
-
- ## Tools Required
-
- - `catalyst_knowledge_store` — Store threat model for future reference
- - `catalyst_knowledge_search` — Query past threat models and security findings
- - `catalyst_artifact_write` — Persist threat register as workflow artifact
- - `catalyst_artifact_read` — Read analysis/architecture artifacts for context
- - `catalyst_metrics_complexity` — Identify complex code that may have more attack surface
- - `mcp_workiq_accept_eula` — (optional) Accept Work IQ EULA before first query
- - `mcp_workiq_ask_work_iq` — (optional) Query M365 for compliance requirements, security discussions, and architecture decisions
-
- ## Output Format
-
- ```
- ## [Catalyst → Threat Model (STRIDE)]
-
- ### Scope
- **System**: {what's being modeled}
- **Actors**: {user types}
- **Trust Boundaries**: {boundary list}
-
- ### Data Flow Diagram
- ```
- {text-based data flow: Actor → Component → Data Store → Output}
- ```
-
- ### Threat Register
-
- | ID | STRIDE | Component | Threat | Likelihood | Impact | Risk | Mitigation | Effort |
- |----|--------|-----------|--------|------------|--------|------|------------|--------|
- | T1 | S | Auth API | ... | 4 | 5 | 20 | ... | Medium |
- | T2 | T | ... | ... | 3 | 3 | 9 | ... | Low |
- | ...| ... | ... | ... | ... | ... | ... | ... | ... |
-
- ### Risk Summary
- - **Critical** (15-25): {count} threats → Must mitigate before release
- - **Medium** (7-14): {count} threats → Mitigate within sprint
- - **Low** (1-6): {count} threats → Accept/monitor
-
- ### Recommended Mitigations (Priority Order)
- 1. {T-ID}: {mitigation} — {effort} — Phase {N}
- 2. {T-ID}: {mitigation} — {effort} — Phase {N}
- 3. ...
-
- ### Confidence: {N}/10
- ```
-
- ## Chains To
-
- - `assure-vulnerability-scan` — Complements STRIDE with OWASP/CWE code-level scanning
- - `assure-review-functional` — Security pass covers code-level implementation of mitigations
- - `strategize-architecture-planner` — Architecture decisions should reference the threat model
- - `insights-knowledge-base` — Past threat models inform future analysis
+ # Threat Model — STRIDE
+
+ > **Pillar**: Assure | **ID**: `assure-threat-model`
+
+ ## Purpose
+
+ Systematic threat modeling using the STRIDE framework. Identifies threats across Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. Produces a threat register with risk scores and mitigations that informs design decisions and security reviews.
+
+ ## Activation Triggers
+
+ - "threat model", "stride", "threat analysis", "security architecture"
+ - "what could go wrong", "attack vectors", "threat register"
+ - Label-gated: automatically invoked by autopilot-worker Phase 2.5d when `needs-threat-model` or `security-sensitive` label detected
+ - Routed from `security-auditor` subagent role for architecture-level security analysis
+
+ ## Methodology
+
+ ### Process Flow
+
+ ```dot
+ digraph threat_model {
+   rankdir=TB;
+   node [shape=box];
+
+   scope [label="Phase 1\nScope & Data Flow"];
+   decompose [label="Phase 2\nComponent Decomposition"];
+   stride [label="Phase 3\nSTRIDE Analysis"];
+   risk [label="Phase 4\nRisk Assessment"];
+   mitigate [label="Phase 5\nMitigation Planning"];
+   register [label="Phase 6\nThreat Register", shape=doublecircle];
+
+   scope -> decompose;
+   decompose -> stride;
+   stride -> risk;
+   risk -> mitigate;
+   mitigate -> register;
+ }
+ ```
+
+ ### Phase 1 — Scope & Data Flow
+
+ 1. Define the system boundary — what's being threat-modeled (entire system, single feature, or API surface)
+ 2. Identify actors: end users, admins, external services, background jobs
+ 3. Map data flows:
+    - User input → processing → storage → output
+    - Service-to-service communication
+    - External API calls
+ 4. Identify trust boundaries:
+    - Authenticated vs unauthenticated zones
+    - Internal vs external network
+    - Client-side vs server-side
+    - Different privilege levels
+ 5. **(Optional) Fetch security context from M365**: If `mcp_workiq_ask_work_iq` is available, query for relevant compliance and security context:
+    - Call `mcp_workiq_accept_eula` with `eulaUrl: "https://github.com/microsoft/work-iq-mcp"` (idempotent)
+    - **Compliance requirements**: `mcp_workiq_ask_work_iq` → "What compliance requirements, security policies, or regulatory constraints apply to {system/feature}? Check emails, docs, and Teams messages."
+    - **Past security discussions**: `mcp_workiq_ask_work_iq` → "What security concerns or vulnerabilities have been discussed about {system/feature} in recent emails and meetings?"
+    - **Architecture decisions**: `mcp_workiq_ask_work_iq` → "What architecture or security design decisions were made about {system/feature} in meetings or design docs?"
+    - Feed this context into the STRIDE analysis to ensure threats are evaluated against the organization's actual compliance posture and known security concerns.
+    - If unavailable, proceed without — the threat model works from code analysis alone.
+
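A sketch of how the Phase 1 output could be represented so that boundary-crossing flows are easy to pick out later; the zone names and the example flows are illustrative assumptions:

```ts
// Flows annotated with the trust zone on each side; Phase 3 focuses on the
// ones that cross a boundary.
type TrustZone = "unauthenticated" | "authenticated" | "internal" | "admin";

interface DataFlow {
  from: string;
  to: string;
  fromZone: TrustZone;
  toZone: TrustZone;
  data: string;
}

const flows: DataFlow[] = [
  { from: "Browser", to: "Auth API", fromZone: "unauthenticated", toZone: "internal", data: "credentials" },
  { from: "App Server", to: "Database", fromZone: "internal", toZone: "internal", data: "user records" },
];

// Only boundary-crossing flows get the full per-category STRIDE treatment.
const strideCandidates = flows.filter((f) => f.fromZone !== f.toZone);
```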
+ ### Phase 2 — Component Decomposition
+
+ 1. List each component in the data flow:
+    - Frontend (SPA, mobile app, CLI)
+    - API gateway / load balancer
+    - Application server(s)
+    - Database(s)
+    - Cache layer
+    - Message queue / event bus
+    - External services / third-party APIs
+    - File storage / CDN
+ 2. For each component, note:
+    - Technology stack
+    - Authentication mechanism
+    - Data stored/processed
+    - Network exposure (public, internal, VPN)
+
+ ### Phase 3 — STRIDE Analysis
+
+ For each component and each data flow crossing a trust boundary, evaluate all six STRIDE categories:
+
+ | Category | Threat | Key Questions |
+ |----------|--------|---------------|
+ | **S**poofing | Identity impersonation | Can an attacker pretend to be another user/service? Is authentication enforced at every entry point? Are tokens/sessions properly validated? |
+ | **T**ampering | Data modification | Can data be modified in transit or at rest? Are inputs validated? Is there integrity checking (HMAC, checksums)? Can request parameters be manipulated? |
+ | **R**epudiation | Deniability of actions | Are actions logged with sufficient detail? Can a user deny performing an action? Are audit logs tamper-proof? |
+ | **I**nformation Disclosure | Data exposure | Can sensitive data leak through error messages, logs, API responses, or side channels? Is PII/secrets encrypted at rest and in transit? |
+ | **D**enial of Service | Availability threats | Are there rate limits? Can a single request exhaust resources (memory, CPU, disk)? Are there circuit breakers? Can an attacker trigger expensive operations? |
+ | **E**levation of Privilege | Unauthorized access | Can a regular user access admin functions? Are authorization checks at every layer (not just frontend)? Can parameters be manipulated to bypass access controls? |
+
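A sketch that ties Phase 2's component inventory to this checklist: enumerate every component against all six categories so none is skipped. The `Component` shape is an assumption based on the Phase 2 notes:

```ts
// Cross every component with every STRIDE category; each resulting entry is
// then answered using the key questions in the table above.
const STRIDE = [
  "Spoofing", "Tampering", "Repudiation",
  "Information Disclosure", "Denial of Service", "Elevation of Privilege",
] as const;

interface Component {
  name: string;
  stack: string;
  exposure: "public" | "internal" | "vpn";
}

function strideChecklist(components: Component[]) {
  return components.flatMap((c) =>
    STRIDE.map((category) => ({ component: c.name, category })),
  );
}
```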
+ ### Phase 4 — Risk Assessment
+
+ For each identified threat, assess:
+
+ 1. **Likelihood** (1-5): How easy is this to exploit?
+    - 1 = Requires deep insider knowledge + sophisticated tools
+    - 3 = Moderately skilled attacker with publicly available tools
+    - 5 = Trivial exploitation, automated scanners can find it
+ 2. **Impact** (1-5): What's the damage if exploited?
+    - 1 = Minor inconvenience, no data loss
+    - 3 = Service disruption, limited data exposure
+    - 5 = Full data breach, system compromise, regulatory impact
+ 3. **Risk Score** = Likelihood × Impact (1-25)
+    - 1-6: Low → Accept or monitor
+    - 7-14: Medium → Mitigate within normal development
+    - 15-25: High/Critical → Block release until mitigated
+
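A worked sketch of the scoring rule and bands above (risk = likelihood × impact); the function name is illustrative:

```ts
// Score a threat and bucket it with the thresholds from Phase 4.
type Band = "Low" | "Medium" | "High/Critical";

function riskBand(likelihood: number, impact: number): { score: number; band: Band } {
  const score = likelihood * impact;                        // 1..25
  if (score >= 15) return { score, band: "High/Critical" }; // block release until mitigated
  if (score >= 7) return { score, band: "Medium" };         // mitigate within normal development
  return { score, band: "Low" };                            // accept or monitor
}

// riskBand(4, 5) → { score: 20, band: "High/Critical" }, matching the T1 example
// row in the Output Format; riskBand(2, 3) → { score: 6, band: "Low" }.
```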
+ ### Phase 5 — Mitigation Planning
+
+ For each threat with risk score ≥ 7:
+
+ 1. Propose a specific mitigation (not generic "add security")
+ 2. Classify the mitigation:
+    - **Prevent**: Eliminate the threat entirely (e.g., parameterized queries for SQLi)
+    - **Detect**: Monitor and alert (e.g., anomaly detection for DoS)
+    - **Respond**: Limit damage (e.g., circuit breakers, rate limits)
+    - **Transfer**: Shift risk (e.g., use managed service with SLA)
+ 3. Estimate implementation effort: Low / Medium / High
+ 4. Identify which phase of the worker pipeline should implement the mitigation:
+    - Phase 4 (Implementation): Code-level fixes
+    - Phase 5 (Change Mgmt): Configuration changes
+    - Phase 7 (Deploy Guard): Operational checks
+
+ ### Phase 6 — Threat Register
+
+ Compile all findings into a structured threat register and:
+ 1. Store via `crewpilot_knowledge_store` (type: `threat-model`) for future reference
+ 2. Write as artifact via `crewpilot_artifact_write` (phase: `threat-model`)
+ 3. Feed high-risk items into the Phase 3 plan as mandatory implementation steps
+
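A sketch of one threat-register row, mirroring the table columns in the Output Format below plus the Phase 5 mitigation class; the field names are assumptions, only the columns are fixed by the skill:

```ts
interface ThreatRegisterEntry {
  id: string;                       // e.g. "T1"
  stride: "S" | "T" | "R" | "I" | "D" | "E";
  component: string;
  threat: string;
  likelihood: 1 | 2 | 3 | 4 | 5;
  impact: 1 | 2 | 3 | 4 | 5;
  risk: number;                     // likelihood × impact
  mitigation: string;
  mitigationClass: "Prevent" | "Detect" | "Respond" | "Transfer";
  effort: "Low" | "Medium" | "High";
}
// Per Phase 6, the assembled entries are stored via crewpilot_knowledge_store
// (type: threat-model) and written via crewpilot_artifact_write (phase: threat-model).
```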
+ ## Tools Required
+
+ - `crewpilot_knowledge_store` — Store threat model for future reference
+ - `crewpilot_knowledge_search` — Query past threat models and security findings
+ - `crewpilot_artifact_write` — Persist threat register as workflow artifact
+ - `crewpilot_artifact_read` — Read analysis/architecture artifacts for context
+ - `crewpilot_metrics_complexity` — Identify complex code that may have more attack surface
+ - `mcp_workiq_accept_eula` — (optional) Accept Work IQ EULA before first query
+ - `mcp_workiq_ask_work_iq` — (optional) Query M365 for compliance requirements, security discussions, and architecture decisions
+
+ ## Output Format
+
+ ```
+ ## [CrewPilot → Threat Model (STRIDE)]
+
+ ### Scope
+ **System**: {what's being modeled}
+ **Actors**: {user types}
+ **Trust Boundaries**: {boundary list}
+
+ ### Data Flow Diagram
+ ```
+ {text-based data flow: Actor → Component → Data Store → Output}
+ ```
+
+ ### Threat Register
+
+ | ID | STRIDE | Component | Threat | Likelihood | Impact | Risk | Mitigation | Effort |
+ |----|--------|-----------|--------|------------|--------|------|------------|--------|
+ | T1 | S | Auth API | ... | 4 | 5 | 20 | ... | Medium |
+ | T2 | T | ... | ... | 3 | 3 | 9 | ... | Low |
+ | ...| ... | ... | ... | ... | ... | ... | ... | ... |
+
+ ### Risk Summary
+ - **Critical** (15-25): {count} threats → Must mitigate before release
+ - **Medium** (7-14): {count} threats → Mitigate within sprint
+ - **Low** (1-6): {count} threats → Accept/monitor
+
+ ### Recommended Mitigations (Priority Order)
+ 1. {T-ID}: {mitigation} — {effort} — Phase {N}
+ 2. {T-ID}: {mitigation} — {effort} — Phase {N}
+ 3. ...
+
+ ### Confidence: {N}/10
+ ```
+
+ ## Chains To
+
+ - `assure-vulnerability-scan` — Complements STRIDE with OWASP/CWE code-level scanning
+ - `assure-review-functional` — Security pass covers code-level implementation of mitigations
+ - `strategize-architecture-planner` — Architecture decisions should reference the threat model
+ - `insights-knowledge-base` — Past threat models inform future analysis