@securityreviewai/securityreview-kit 0.1.49 → 0.1.50

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,187 @@
1
+ ---
2
+ name: guardrails-selection
3
+ description: Analyze the developer request, infer the security categories and likely threats involved, shortlist the most relevant project guardrails, then hydrate the exact guardrails with get_guardrail_by_id before implementation. Use for every security-relevant task before code is written, and preserve the shortlist for the final VibeReview sync.
4
+ ---
5
+
6
+ # Guardrails Selection
7
+
8
+ Configured SRAI project name: `<SRAI_PROJECT_NAME>`
9
+
10
+ Use this skill whenever code will be created or modified and the task has any security surface.
11
+
12
+ This skill exists to stop the IDE from treating the full `get_guardrails` result as an unstructured blob. The workflow is:
13
+
14
+ 1. Understand the request deeply.
15
+ 2. Infer which security categories are in play.
16
+ 3. Predict the threats that might occur for this exact task.
17
+ 4. Shortlist only the guardrails that mitigate those threats.
18
+ 5. Fetch the exact shortlisted guardrails with `get_guardrail_by_id`.
19
+ 6. Carry that same shortlist forward into implementation and the final VibeReview markdown sync.
20
+
21
+ Do not skip the analysis step. Do not rely on title-matching alone. Do not dump every guardrail into the final answer.
22
+
23
+ ## Inputs You Must Analyze First
24
+
25
+ Before calling `get_guardrails`, extract the actual development intent from the prompt and surrounding code:
26
+
27
+ - What is being built, changed, fixed, or refactored?
28
+ - Which components are affected: API, UI, background jobs, auth flow, webhook, file upload, admin tooling, AI agent flow, infra code, data pipeline?
29
+ - Which trust boundaries are crossed?
30
+ - Which sensitive assets are touched: tokens, credentials, sessions, PII, tenancy boundaries, audit logs, secrets, internal APIs, signed URLs, payment state, workflow approvals?
31
+ - Which technologies and patterns are involved in the existing code?
32
+ - What abuse cases are plausible if this change is implemented poorly?
33
+
34
+ You are not only selecting guardrails for the obvious functionality. You are selecting guardrails for the threats that might materialize around that functionality.
35
+
36
+ ## Category Inference Workflow
37
+
38
+ Derive a category set for the task before shortlisting guardrails. Common categories include:
39
+
40
+ - `authentication`
41
+ - `authorization`
42
+ - `session_management`
43
+ - `input_validation`
44
+ - `output_encoding`
45
+ - `secrets`
46
+ - `cryptography`
47
+ - `logging`
48
+ - `monitoring`
49
+ - `file_uploads`
50
+ - `deserialization`
51
+ - `data_access`
52
+ - `rate_limiting`
53
+ - `network`
54
+ - `client_side`
55
+ - `business_logic`
56
+ - `tenant_isolation`
57
+ - `admin_workflows`
58
+
59
+ Use both the user request and the codebase patterns to infer the category set. A task can involve multiple categories even if the prompt mentions only one feature.
60
+
61
+ Examples:
62
+
63
+ - “Add magic-link login” likely involves `authentication`, `session_management`, `cryptography`, `logging`, `rate_limiting`, and `client_side`.
64
+ - “Add org admin API to update member roles” likely involves `authorization`, `tenant_isolation`, `logging`, `business_logic`, and `data_access`.
65
+ - “Add CSV import” likely involves `input_validation`, `file_uploads`, `data_access`, `deserialization`, `logging`, and denial-of-service protections.
66
+ - “Add client-side token refresh” likely involves `authentication`, `session_management`, `client_side`, `logging`, and `cryptography`.
67
+
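
As a minimal illustration of the inference step, category detection can be approximated by scanning the request for signals. The keyword map below is invented for this sketch and is not part of the toolkit; real inference should also weigh codebase patterns and trust boundaries, not just prompt keywords:

```typescript
// Toy keyword-to-category heuristic, invented for this sketch; real
// inference should also weigh codebase patterns and trust boundaries.
const CATEGORY_SIGNALS: Record<string, string[]> = {
  authentication: ["login", "magic-link", "password", "sso"],
  session_management: ["session", "logout", "cookie", "token refresh"],
  file_uploads: ["upload", "csv", "import", "attachment"],
  tenant_isolation: ["org admin", "tenant", "member roles"],
};

function inferCategories(request: string): string[] {
  const text = request.toLowerCase();
  return Object.entries(CATEGORY_SIGNALS)
    .filter(([, signals]) => signals.some((s) => text.includes(s)))
    .map(([category]) => category);
}
```

For example, `inferCategories("Add magic-link login")` includes `authentication`; a real pass would then expand the set using the threat map and the surrounding code.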
68
+ ## Threat Mapping Requirement
69
+
70
+ After identifying categories, infer the threat families that might occur. Use the reference file at `{{GUARDRAILS_SELECTION_SKILL_DIR}}/references/category-threat-map.md` every time you need to reason about category-to-threat mapping.
71
+
72
+ Your goal is not to enumerate every possible weakness. Your goal is to pick the threats that should influence guardrail selection for this task.
73
+
74
+ At minimum, consider whether the task can create:
75
+
76
+ - authentication bypass
77
+ - authorization bypass
78
+ - privilege escalation
79
+ - information disclosure
80
+ - repudiation gaps
81
+ - denial of service
82
+ - unsafe client-side trust
83
+ - insecure logging or audit gaps
84
+ - injection-triggered security failures
85
+ - serialization-triggered security failures
86
+ - business-logic-triggered bypasses
87
+
88
+ The shortlist should be threat-led, not catalog-led.
89
+
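
The authoritative mapping lives in the referenced `category-threat-map.md`; as an illustration only, the shape of that category-to-threat reasoning looks like:

```typescript
// Example category-to-threat entries; the authoritative mapping lives in
// references/category-threat-map.md, so treat these rows as placeholders.
const THREATS_BY_CATEGORY: Record<string, string[]> = {
  authentication: ["authentication bypass", "privilege escalation"],
  authorization: ["authorization bypass", "privilege escalation"],
  logging: ["repudiation gaps", "information disclosure"],
  deserialization: ["serialization-triggered security failures", "denial of service"],
};

// Union of threat families across the inferred categories, deduplicated.
function likelyThreats(categories: string[]): string[] {
  return [...new Set(categories.flatMap((c) => THREATS_BY_CATEGORY[c] ?? []))];
}
```

The deduplication matters: overlapping categories such as `authentication` and `authorization` share threat families like privilege escalation, and the shortlist should consider each threat once.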
90
+ ## Guardrail Selection Procedure
91
+
92
+ ### Step 1: Resolve the project and load the catalog
93
+
94
+ 1. Call `find_project_by_name` with `name="<SRAI_PROJECT_NAME>"` to obtain `project_id`.
95
+ 2. Call `get_guardrails` with `project_id`.
96
+
97
+ Treat `get_guardrails` as the broad catalog. Do not treat it as the final set of instructions.
98
+
99
+ Assume each returned guardrail includes a stable identifier for follow-up retrieval, plus the fields needed for selection:
100
+
101
+ - `title`
102
+ - `rule_type`
103
+ - `category`
104
+ - `instruction`
105
+
106
+ If an identifier is absent, fall back to the best available stable reference exposed by the tool, but prefer the real guardrail id whenever available.
107
+
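
Step 1 can be sketched as follows. The synchronous stubs stand in for the asynchronous MCP tool calls, and the return shapes are assumptions made for this sketch only:

```typescript
// Synchronous stubs standing in for the asynchronous MCP tool calls;
// the return shapes are assumptions made for this sketch only.
interface Guardrail {
  id: string;
  title: string;
  rule_type: "must" | "must_not";
  category: string;
  instruction: string;
}

function findProjectByName(name: string): { project_id: string } {
  return { project_id: `proj-${name}` }; // stub: the real id comes from SRAI
}

function getGuardrails(projectId: string): Guardrail[] {
  // Stub catalog; the real call returns the project's full guardrail set.
  return [{
    id: "g-auth-1",
    title: "Hash credentials with a modern KDF",
    rule_type: "must",
    category: "authentication",
    instruction: "Use a vetted, slow hash for stored credentials.",
  }];
}

// Step 1: resolve the project, then load the broad catalog.
const { project_id } = findProjectByName("<SRAI_PROJECT_NAME>");
const catalog: Guardrail[] = getGuardrails(project_id);
```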
108
+ ### Step 2: Build a shortlist
109
+
110
+ Shortlist guardrails using all of the following:
111
+
112
+ - direct category match with the task
113
+ - mitigation value against the likely threats you inferred
114
+ - relevance to the technologies and code paths being touched
115
+ - support for adjacent controls that prevent bypass chains
116
+ - duplication removal
117
+
118
+ Do not select a guardrail only because it sounds generally useful. Select it because it materially constrains the risky part of the current task.
119
+
120
+ Examples:
121
+
122
+ - If the task touches login, token issuance, password reset, session refresh, or identity proofing, prioritize authentication, session, crypto, logging, and brute-force defense guardrails.
123
+ - If the task changes role checks, tenant scoping, admin APIs, resource ownership, or query filters, prioritize authorization, tenant isolation, data access, business-logic, and audit guardrails.
124
+ - If the task introduces parsing, uploads, template expansion, or object hydration, prioritize input validation, file handling, deserialization, and denial-of-service guardrails.
125
+ - If the task moves security decisions into the browser or mobile client, prioritize client-side trust, token storage, server-side revalidation, and privilege-boundary guardrails.
126
+
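
A toy version of the shortlist filter is sketched below. The `mitigates` field is invented here to make the threat test explicit; with the real catalog you judge mitigation value from each guardrail's `category` and `instruction` text, and you still apply the adjacent-control and deduplication criteria by hand:

```typescript
// Toy shortlist filter: keep guardrails whose category matches the task
// and that constrain at least one inferred threat. The `mitigates` field
// is invented for this sketch.
interface CatalogEntry {
  id: string;
  category: string;
  mitigates: string[];
}

function shortlist(
  catalog: CatalogEntry[],
  categories: string[],
  threats: string[],
): CatalogEntry[] {
  return catalog.filter(
    (g) =>
      categories.includes(g.category) &&
      g.mitigates.some((t) => threats.includes(t)),
  );
}

const picked = shortlist(
  [
    { id: "g1", category: "authentication", mitigates: ["authentication bypass"] },
    { id: "g2", category: "file_uploads", mitigates: ["denial of service"] },
  ],
  ["authentication"],
  ["authentication bypass"],
);
```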
127
+ ### Step 3: Hydrate exact shortlisted guardrails
128
+
129
+ For every shortlisted existing guardrail, call `get_guardrail_by_id` to retrieve the exact guardrail that will govern implementation.
130
+
131
+ - Use `get_guardrail_by_id` for the shortlisted ids only.
132
+ - If the tool supports batching, batch the shortlisted ids.
133
+ - If the tool only supports one id at a time, call it once per shortlisted id.
134
+
135
+ Implementation must be driven by the hydrated shortlist from `get_guardrail_by_id`, not by vague memory from the broad catalog listing.
136
+
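
The per-id hydration path can be sketched as below. The stub stands in for the real `get_guardrail_by_id` tool call, and its return shape is an assumption:

```typescript
// Per-id hydration sketch for when batching is unavailable. The stub
// below stands in for the real get_guardrail_by_id tool call.
interface HydratedGuardrail {
  id: string;
  instruction: string;
}

function getGuardrailById(id: string): HydratedGuardrail {
  return { id, instruction: `full instruction text for ${id}` }; // stub
}

function hydrate(shortlistedIds: string[]): HydratedGuardrail[] {
  // Deduplicate first so no guardrail is fetched twice.
  return [...new Set(shortlistedIds)].map(getGuardrailById);
}
```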
137
+ ### Step 4: Track the active shortlist in context
138
+
139
+ Maintain an explicit in-context list of the shortlisted existing guardrails that will govern the task. For each shortlisted existing guardrail, keep:
140
+
141
+ - `id`
142
+ - `title`
143
+ - `rule_type`
144
+ - `category`
145
+ - `instruction`
146
+ - `why_selected`
147
+
148
+ Also track any new guardrails created during the task as `ide_generated`.
149
+
150
+ This shortlist is the source of truth for the rest of the session.
151
+
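
One possible in-context shape for that list is sketched below. The field set mirrors the bullets above; `source` distinguishes existing guardrails from ones created during the task, and all example values are invented:

```typescript
// One possible shape for the active shortlist; all example values are
// invented. `source` separates existing guardrails from new ones.
interface TrackedGuardrail {
  id: string;
  title: string;
  rule_type: "must" | "must_not";
  category: string;
  instruction: string;
  why_selected: string;
  source: "existing" | "ide_generated";
}

const activeShortlist: TrackedGuardrail[] = [
  {
    id: "g-auth-1",
    title: "Hash credentials with a modern KDF",
    rule_type: "must",
    category: "authentication",
    instruction: "Use a vetted, slow hash for stored credentials.",
    why_selected: "Task touches credential storage in the login flow.",
    source: "existing",
  },
];
```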
152
+ ## Implementation Rules
153
+
154
+ Once the shortlist is hydrated:
155
+
156
+ - Every applicable `must` guardrail is mandatory.
157
+ - Every applicable `must_not` guardrail is a hard prohibition.
158
+ - If two shortlisted guardrails appear to conflict, explain the conflict and resolve it before coding.
159
+ - If the task reveals a real gap not covered by the shortlisted existing guardrails, create an `ide_generated` guardrail and apply it immediately.
160
+
161
+ When deciding whether a guardrail applies, prefer security-preserving inclusion over risky omission. If it plausibly mitigates a realistic path to abuse for the current task, keep it in scope.
162
+
163
+ ## VibeReview Sync Contract
164
+
165
+ The final sync step must reuse the shortlist from this skill. It must not call `get_guardrails` or `get_guardrail_by_id` again.
166
+
167
+ Before `sync_ai_ide_markdown` is called, ensure the main agent context clearly contains:
168
+
169
+ - the exact existing guardrails shortlisted earlier
170
+ - which of them were applied
171
+ - whether each one was satisfied
172
+ - any notes about partial compliance, conflicts, or rationale
173
+ - every `ide_generated` guardrail created during the task
174
+
175
+ If a guardrail was shortlisted but not fully satisfied, still include it in the handoff with `satisfied: false` and a note. Do not silently drop it.
176
+
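
An illustrative handoff record (titles and notes are invented) shows how a partially satisfied guardrail stays in the list rather than being dropped:

```typescript
// Illustrative handoff entries for the final sync. A guardrail that was
// not fully met keeps `satisfied: false` plus a note instead of being
// silently dropped.
interface HandoffEntry {
  id: string;
  title: string;
  applied: boolean;
  satisfied: boolean;
  note?: string;
}

const handoff: HandoffEntry[] = [
  { id: "g1", title: "Hash credentials with a modern KDF", applied: true, satisfied: true },
  {
    id: "g2",
    title: "Rate-limit login attempts",
    applied: true,
    satisfied: false,
    note: "Throttling added, but lockout policy still pending review.",
  },
];
```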
177
+ ## Selection Quality Bar
178
+
179
+ A good selection does all of the following:
180
+
181
+ - covers the feature’s real threat surface, not just its visible functionality
182
+ - captures adjacent controls that stop bypass chains
183
+ - avoids irrelevant noise
184
+ - produces a small, defensible set of guardrails that can actually guide implementation
185
+ - leaves the final VibeReview markdown with an exact list of what the IDE selected and enforced
186
+
187
+ If your shortlist feels generic, it is probably incomplete or over-broad. Re-check the prompt, the code patterns, and the threat map.
@@ -0,0 +1,256 @@
1
+ ---
2
+ name: PWNISMS Threat Modelling
3
+ description: Security-first threat modelling workflow for code and architecture tasks. Walks all 7 PWNISMS categories, enforces vibe guardrails (secure by code), and synchronizes findings via a direct VibeReview markdown sync. Use before, during, and after implementation.
4
+ ---
5
+
6
+ # PWNISMS — Security-First Threat Modelling
7
+
8
+ For EVERY security-relevant task (feature, bug fix, refactor, infra change, architecture design), run a threat model with PWNISMS.
9
+
10
+ - Walk through all 7 categories explicitly.
11
+ - If a category is not applicable, state it briefly and move on.
12
+ - Anchor analysis to linked files, diffs, PRs, API specs, and diagrams whenever available.
13
+ - Focus on realistic threats for the current context, not exhaustive attack catalogs.
14
+
15
+ ---
16
+
17
+ ## Phase 0 — Guardrail Context
18
+
19
+ Before deep analysis, ensure the project-specific guardrail shortlist exists:
20
+
21
+ 1. Read and follow `{{GUARDRAILS_SELECTION_SKILL_DIR}}/SKILL.md`.
22
+ 2. Resolve the project with `find_project_by_name` using `name="<SRAI_PROJECT_NAME>"`.
23
+ 3. Call `get_guardrails`, shortlist intentionally for this task, then hydrate the exact shortlist with `get_guardrail_by_id`.
24
+ 4. Keep the shortlisted existing guardrails in context for implementation and the final VibeReview markdown sync.
25
+
26
+ Do not perform project-profile exploration as part of PWNISMS. The old profile tools are not part of this workflow. Ground the threat model in the user request, repository code, diffs, architecture docs the user provides, and the shortlisted guardrails.
27
+
28
+ If SRAI is not available, proceed with the user-provided context and repository evidence, then clearly note that project guardrails could not be fetched.
29
+
30
+ ---
31
+
32
+ ## Phase 1 — Inputs to Gather
33
+
34
+ Collect these quickly before deep analysis:
35
+
36
+ - **Scope**: What is changing (feature, component, service, migration, PR)?
37
+ - **Assets**: What must be protected (PII, credentials, tokens, configs, accounts, workflows)?
38
+ - **Entry points**: How data enters/leaves (HTTP, queues, schedulers, CLI, webhooks, integrations)?
39
+ - **Trust boundaries**: Where data crosses users/services/networks/privilege levels?
40
+ - **Existing guardrails**: What shortlisted project-specific dos and don'ts apply (from Phase 0)?
41
+
42
+ If the user provided specific code, diffs, or architecture artifacts, prioritize those as primary evidence.
43
+
44
+ ---
45
+
46
+ ## Phase 2 — Lightweight Workflow (PWNISMS)
47
+
48
+ 1. **Clarify scope and assumptions**
49
+ - Define the exact unit of analysis.
50
+ - State assumptions explicitly (auth model, deployment boundary, tenant model, etc.).
51
+
52
+ 2. **Map assets and flows**
53
+ - List high-value assets and critical data paths.
54
+ - List entry points and exits across trust boundaries.
55
+ - Note which assets are covered by existing guardrails and which are not.
56
+
57
+ 3. **Walk all 7 PWNISMS categories**
58
+ - Identify plausible threats for each category.
59
+ - Keep findings concrete and contextual.
60
+ - For each threat, check if an existing guardrail already addresses it.
61
+
62
+ 4. **Prioritize**
63
+ - Select the top 3-7 risks by impact and likelihood.
64
+ - Factor in existing mitigations from the codebase, user-provided context, and guardrails.
65
+
66
+ 5. **Mitigate**
67
+ - Propose concrete, implementable controls for each prioritized risk.
68
+ - Map mitigations to specific guardrails where applicable.
69
+ - If a mitigation represents a recurring pattern, propose it as a new guardrail candidate.
70
+
71
+ 6. **Summarize residual risk**
72
+ - Call out remaining risk, trade-offs, and follow-up actions.
73
+ - Call out unknowns instead of silently guessing.
74
+ - Note guardrail gaps — security patterns not yet captured by any guardrail.
75
+
76
+ ---
77
+
78
+ ## The 7 Categories (What to Check)
79
+
80
+ ### P — Product
81
+
82
+ Application and business-logic threats:
83
+
84
+ - Input validation, injection, insecure deserialization.
85
+ - Authorization gaps, privilege escalation, IDOR/BOLA.
86
+ - Business logic abuse, replay/race conditions, unsafe redirects.
87
+ - Error handling that leaks internals.
88
+ - **Guardrail check:** Are there `must` / `must_not` rules for input validation, authorization patterns, error handling?
89
+
90
+ ### W — Workload
91
+
92
+ Compute and infrastructure threats:
93
+
94
+ - Insecure container/runtime posture, over-privileged workload identity.
95
+ - Weak host/orchestrator controls and segmentation.
96
+ - Insecure data storage/backups and DB configuration.
97
+ - Queue/broker abuse and poison-message handling gaps.
98
+ - **Guardrail check:** Are there rules for container security, data-at-rest encryption, workload identity?
99
+
100
+ ### N — Network
101
+
102
+ Network and transport threats:
103
+
104
+ - Missing/weak TLS, insecure service-to-service communication.
105
+ - Exposed ports/endpoints and permissive ingress/egress.
106
+ - Weak segmentation or lateral movement paths.
107
+ - API-layer abuse controls missing (rate limits, request limits, CORS hardening).
108
+ - **Guardrail check:** Are there rules for TLS enforcement, CORS policy, rate limiting?
109
+
110
+ ### I — IAM (Identity & Access Management)
111
+
112
+ Identity and authorization threats:
113
+
114
+ - Broken authentication controls and token validation.
115
+ - Missing least-privilege RBAC/ABAC.
116
+ - Service-to-service auth gaps.
117
+ - Escalation paths across users, roles, or services.
118
+ - **Guardrail check:** Are there rules for auth mechanisms, session management, privilege boundaries?
119
+
120
+ ### S — Secrets
121
+
122
+ Credential and key management threats:
123
+
124
+ - Secrets in code, images, logs, CI output, or defaults.
125
+ - Weak rotation, revocation, or token lifetime policies.
126
+ - Over-shared secrets across components.
127
+ - Missing secret manager/KMS controls.
128
+ - **Guardrail check:** Are there `must_not` rules against hardcoded secrets, `must` rules for secret manager usage?
129
+
130
+ ### M — Monitoring (Logging & Observability)
131
+
132
+ Detection and auditability threats:
133
+
134
+ - Missing logs for auth, authorization, admin/data access events.
135
+ - Sensitive data leakage in logs.
136
+ - Missing alerts for abuse indicators.
137
+ - Incomplete audit trails or weak log integrity.
138
+ - **Guardrail check:** Are there rules for what must be logged and what must not appear in logs?
139
+
140
+ ### S — Supply Chain
141
+
142
+ Dependency and delivery threats:
143
+
144
+ - Unpinned/unverified dependencies and vulnerable packages.
145
+ - Third-party integration trust and scope overreach.
146
+ - CI/CD pipeline leakage or unreviewed build scripts.
147
+ - Unsigned/unprovenanced artifacts, missing SBOM.
148
+ - Treat AI-generated code as untrusted until validated.
149
+ - **Guardrail check:** Are there rules for dependency pinning, SBOM generation, artifact signing?
150
+
151
+ ---
152
+
153
+ ## Phase 3 — Guardrail Enforcement (Secure by Code)
154
+
155
+ After completing the PWNISMS analysis and before writing code:
156
+
157
+ 1. **Review the shortlisted hydrated guardrails** produced by `{{GUARDRAILS_SELECTION_SKILL_DIR}}/SKILL.md`.
158
+ 2. **Classify applicability** — For each shortlisted guardrail, determine if it applies to the current task.
159
+ 3. **Apply during code generation:**
160
+ - `must` rules → mandatory implementation requirements. Every applicable `must` guardrail must be satisfied.
161
+ - `must_not` rules → hard prohibitions. Code must never violate an applicable `must_not` guardrail.
162
+ 4. **Flag conflicts** — If a guardrail conflicts with the user's explicit instruction, flag it and ask for confirmation.
163
+ 5. **Create new guardrails on the fly** — When PWNISMS analysis or code review reveals a recurring security pattern not captured by existing guardrails, create and apply it as a new guardrail (marked `source: "ide_generated"` in the VibeReview markdown). Include `title`, `rule_type` (must/must_not), `category`, `instruction`, and rationale in the notes.
164
+
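
A new guardrail created on the fly might look like the object below. Everything here is an invented example; the marker that matters for the VibeReview markdown is `source: "ide_generated"`:

```typescript
// Invented example of an on-the-fly guardrail; the required marker for
// the VibeReview markdown is source: "ide_generated".
const newGuardrail = {
  title: "Validate webhook signatures before processing",
  rule_type: "must" as const,
  category: "input_validation",
  instruction: "Verify the provider signature header against the shared secret before parsing the payload.",
  source: "ide_generated" as const,
  rationale: "Recurring pattern found while threat modelling the webhook handler.",
};
```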
165
+ ---
166
+
167
+ ## Phase 4 — Security-First Code Generation Rules
168
+
169
+ When implementing code, enforce these baseline controls alongside project guardrails:
170
+
171
+ 1. Validate and constrain all untrusted input.
172
+ 2. Parameterize all queries and command-like invocations.
173
+ 3. Enforce least privilege for users, services, and workloads.
174
+ 4. Never hardcode secrets; use managed secret stores.
175
+ 5. Encrypt sensitive data in transit and at rest.
176
+ 6. Log security-relevant actions without leaking secrets/PII.
177
+ 7. Pin and verify dependencies and build artifacts.
178
+ 8. Return safe user errors; keep sensitive diagnostics internal.
179
+ 9. Add abuse protections (rate limits, lockouts, throttling) on exposed interfaces.
180
+
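As one concrete instance of control 1, a hedged sketch of constraining untrusted input. The length cap and character pattern are placeholders, not project policy:

```typescript
// Minimal allow-list validation for an untrusted identifier. The length
// cap and pattern are placeholders; real limits come from project policy.
function validateOrgSlug(input: unknown): string {
  if (typeof input !== "string") throw new Error("slug must be a string");
  if (input.length === 0 || input.length > 64) throw new Error("slug length out of range");
  if (!/^[a-z0-9-]+$/.test(input)) throw new Error("slug contains disallowed characters");
  return input;
}
```

Rejecting by allow-list rather than block-list keeps path traversal and injection payloads out by construction.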
181
+ ---
182
+
183
+ ## Tailor for Architecture / Design Tasks
184
+
185
+ When discussing designs before code exists:
186
+
187
+ - Sketch a mental data flow: actors, data sent/received, storage, processing points.
188
+ - Mark trust boundaries explicitly (client-backend, backend-DB, service-service, cloud-third party).
189
+ - Identify where strong authentication/authorization is mandatory.
190
+ - Identify where encryption in transit and at rest is mandatory.
191
+ - Recommend concrete security patterns:
192
+ - Parameterized queries / ORM for DB access.
193
+ - Centralized authn/authz and role checks.
194
+ - Secrets manager / KMS for credentials and keys.
195
+ - mTLS or signed requests for service-to-service calls.
196
+ - Review existing guardrails for design-level constraints.
197
+
198
+ ---
199
+
200
+ ## Phase 5 — VibeReview Sync (Post Threat Modelling)
201
+
202
+ **MANDATORY:** After every threat modelling step that produces or modifies threat content, the main agent must update the `vibereview/*.md` artifact and call `sync_ai_ide_markdown` directly.
203
+
204
+ ### What triggers the VibeReview sync
205
+
206
+ - New threat model generated (any form: scenarios, data flows, attack trees, PWNISMS analysis)
207
+ - Existing threat model updated or extended (new threats, refined mitigations, additional components)
208
+ - Guardrails applied during a code-generation task (existing or IDE-generated)
209
+
210
+ ### What the VibeReview markdown must contain
211
+
212
+ The main agent writes a structured `.md` artifact under `vibereview/` and uploads it through `sync_ai_ide_markdown`. That markdown should contain:
213
+
214
+ - **Threat model findings**: threats mitigated, PWNISMS categories, severities, mitigations applied
215
+ - **Best practices achieved**: structured practice entries with `practice_name`, `description`, and `category`
216
+ - **Secure code snippets**: security-relevant code with explanations
217
+ - **Guardrails applied**: all guardrails enforced during this session — both existing ones shortlisted earlier via `get_guardrails` + `get_guardrail_by_id` (`source: "existing"`) and new ones the IDE agent created on the fly (`source: "ide_generated"`), each with satisfaction status
218
+ - **Workflow metadata**: `chat_session_id`, `event_name` or `title`, required `summary`, and optional `workflow_name` / `workflow_description`
219
+
220
+ ### How to sync
221
+
222
+ 1. Read and follow `{{VIBEREVIEW_SYNC_SKILL_DIR}}/SKILL.md`.
223
+ 2. Write or update a file under `vibereview/`, ideally `vibereview/<chat_session_id>-<slugified-title-or-event-name>.md`.
224
+ 3. Put `chat_session_id`, `summary`, and either `title` or `event_name` in frontmatter.
225
+ 4. Include the required sections:
226
+ - `Best Practices Achieved`
227
+ - `Threats Mitigated`
228
+ - `Secure Code Snippets`
229
+ - `Guardrails Applied`
230
+ - `OWASP Top 10 2025 Mappings`
231
+ 5. Validate that:
232
+ - every threat entry includes `threat_name`, `pwnisms_category`, `severity`, and `mitigation_applied`
233
+ - every best-practice entry includes `practice_name`, `description`, and `category`
234
+ - every guardrail includes `title`, `rule_type`, `source`, and `satisfied`
235
+ - OWASP mappings use exact IDs and names
236
+ - snippets are grounded in actual code, not invented text
237
+ - no sibling `.md` files in `vibereview/` were read just to infer format or content
238
+ 6. Call `sync_ai_ide_markdown` directly with the finished markdown artifact.
239
+ 7. If sync fails, leave the artifact in `vibereview/` and report the failure clearly.
240
+
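
Putting steps 2-4 together, an illustrative skeleton of the artifact follows. The filename, ids, and all values are invented, and elided sections are left as `...`; the authoritative format is defined by the sync skill:

```markdown
---
chat_session_id: "sess-1234"
title: "Magic-link login hardening"
summary: "Added magic-link login with PWNISMS threat model and guardrail enforcement."
---

## Best Practices Achieved
- practice_name: Token single-use enforcement
  description: Magic-link tokens are invalidated after first redemption.
  category: authentication

## Threats Mitigated
- threat_name: Magic-link replay
  pwnisms_category: Product
  severity: high
  mitigation_applied: Single-use, short-lived, hashed tokens.

## Secure Code Snippets
...

## Guardrails Applied
- title: Hash tokens at rest
  rule_type: must
  source: existing
  satisfied: true

## OWASP Top 10 2025 Mappings
...
```

A file like `vibereview/sess-1234-magic-link-login-hardening.md` would then be passed to `sync_ai_ide_markdown`.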
241
+ ---
242
+
243
+ ## Post-Generation Checklist
244
+
245
+ Before finalizing output, confirm:
246
+
247
+ - [ ] Scope, assumptions, and trust boundaries were explicit.
248
+ - [ ] All 7 PWNISMS categories were checked (or marked N/A explicitly).
249
+ - [ ] Top risks were prioritized by impact and likelihood.
250
+ - [ ] Mitigations are concrete and actionable.
251
+ - [ ] Residual risk and follow-up actions are stated.
252
+ - [ ] Vibe guardrails were fetched and enforced (all applicable `must`/`must_not` rules satisfied).
253
+ - [ ] Guardrail compliance summary is included in the response (existing + IDE-generated).
254
+ - [ ] The VibeReview markdown was written under `vibereview/` and `sync_ai_ide_markdown` was called successfully.
255
+
256
+ If ANY box cannot be checked, you MUST flag the gap to the user with a specific remediation recommendation before finalizing the code.