agents-templated 2.2.18 → 2.2.20

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,58 +1,58 @@
- # /learn-loop
-
- ## A. Intent
- Capture deterministic retrospective outcomes and convert lessons into next-cycle actions.
-
- ## B. When to Use
- - Use after delivery milestones, incidents, or release cycles.
- - Do not use for pre-implementation planning.
-
- ## C. Context Assumptions
- - Cycle outcome data is available.
- - Owners for follow-up actions can be assigned.
- - Retrospective scope is defined.
-
- ## D. Required Inputs
- | Input | Type | Example |
- |---------------------|------------|----------------------------------|
- | `cycle_name` | string | "Sprint 18" |
- | `observations` | string[] | ["test flakiness", "scope churn"] |
- | `evidence_artifact` | artifact | metrics dashboard, incident notes, PR links |
-
- ## E. Pre-Execution Guards <- fail fast, check ALL before running
- - [ ] observations are evidence-backed
- - [ ] action owners can be assigned
- - [ ] follow-up window is defined
-
- ## F. Execution Flow
- 1. Collect outcomes, wins, and misses.
- 2. Identify root process issues and patterns.
- 3. Prioritize actionable improvements.
- 4. Decision point ->
- - condition A -> no actionable item -> request clearer observations
- - condition B -> actionable set ready -> continue.
- 5. Map actions to owners and timelines.
- 6. Emit learn-loop action report.
-
- ## G. Output Schema
-
- ```json
- {
- "loop_id": "string",
- "actions": ["array","of","strings"],
- "urgency": "low | medium | high",
- "blocker": "string | null"
- }
- ```
-
- ## H. Output Target
- - Default delivery: stdout
- - Override flag: --output=<target>
-
- ## I. Stop Conditions <- abort with error message, never emit partial output
- - retrospective inputs are anecdotal without evidence
- - no owner can be assigned to critical actions
-
- ## J. Safety Constraints
- - Hard block: hard block on publishing blame-focused output without actionable remediation
- - Warn only: warn when metrics are incomplete but direction is still usable
+ # /learn-loop
+
+ ## A. Intent
+ Capture deterministic retrospective outcomes and convert lessons into next-cycle actions.
+
+ ## B. When to Use
+ - Use after delivery milestones, incidents, or release cycles.
+ - Do not use for pre-implementation planning.
+
+ ## C. Context Assumptions
+ - Cycle outcome data is available.
+ - Owners for follow-up actions can be assigned.
+ - Retrospective scope is defined.
+
+ ## D. Required Inputs
+ | Input | Type | Example |
+ |---------------------|------------|----------------------------------|
+ | `cycle_name` | string | "Sprint 18" |
+ | `observations` | string[] | ["test flakiness", "scope churn"] |
+ | `evidence_artifact` | artifact | metrics dashboard, incident notes, PR links |
+
+ ## E. Pre-Execution Guards <- fail fast, check ALL before running
+ - [ ] observations are evidence-backed
+ - [ ] action owners can be assigned
+ - [ ] follow-up window is defined
+
+ ## F. Execution Flow
+ 1. Collect outcomes, wins, and misses.
+ 2. Identify root process issues and patterns.
+ 3. Prioritize actionable improvements.
+ 4. Decision point ->
+ - condition A -> no actionable item -> request clearer observations
+ - condition B -> actionable set ready -> continue.
+ 5. Map actions to owners and timelines.
+ 6. Emit learn-loop action report.
+
+ ## G. Output Schema
+
+ ```json
+ {
+ "loop_id": "string",
+ "actions": ["array","of","strings"],
+ "urgency": "low | medium | high",
+ "blocker": "string | null"
+ }
+ ```
+
+ ## H. Output Target
+ - Default delivery: stdout
+ - Override flag: --output=<target>
+
+ ## I. Stop Conditions <- abort with error message, never emit partial output
+ - retrospective inputs are anecdotal without evidence
+ - no owner can be assigned to critical actions
+
+ ## J. Safety Constraints
+ - Hard block: hard block on publishing blame-focused output without actionable remediation
+ - Warn only: warn when metrics are incomplete but direction is still usable
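Each template's section E and section I describe the same fail-fast contract: evaluate every guard before running, and abort with an error message rather than emitting partial output. A minimal sketch of that pattern, in Python for illustration only (the `GuardError` type and guard names are not part of the package):

```python
class GuardError(Exception):
    """Raised when pre-execution guards fail; no partial output is emitted."""

def run_guards(guards):
    # Evaluate ALL guards up front, then fail fast before any work starts,
    # reporting every failed guard in one error message.
    failed = [name for name, ok in guards.items() if not ok]
    if failed:
        raise GuardError(f"pre-execution guards failed: {', '.join(failed)}")
    return True

# Example: the /learn-loop guards from section E (values are illustrative)
guards = {
    "observations are evidence-backed": True,
    "action owners can be assigned": True,
    "follow-up window is defined": True,
}
```

Collecting all failures before raising, instead of stopping at the first one, matches the "check ALL before running" annotation in section E.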
@@ -1,58 +1,58 @@
- # /perf
-
- ## A. Intent
- Define and execute deterministic performance optimization workflow against known baselines.
-
- ## B. When to Use
- - Use when improving latency, throughput, or resource efficiency.
- - Do not use for one-off smoke checks; use /perf-scan for quick regression comparison.
-
- ## C. Context Assumptions
- - Baseline metrics exist or can be captured.
- - Performance target is defined.
- - Measurement method is repeatable.
-
- ## D. Required Inputs
- | Input | Type | Example |
- |---------------------|------------|----------------------------------|
- | `performance_goal` | string | "p95 latency under 200ms" |
- | `baseline_metrics` | string[] | ["p95=260ms", "cpu=70%"] |
- | `benchmark_artifact` | artifact | profiling report or benchmark output |
-
- ## E. Pre-Execution Guards <- fail fast, check ALL before running
- - [ ] goal is measurable
- - [ ] baseline metric set is present
- - [ ] benchmark method is consistent
-
- ## F. Execution Flow
- 1. Capture or validate baseline metrics.
- 2. Apply targeted optimization changes.
- 3. Measure post-change metrics.
- 4. Decision point ->
- - condition A -> target unmet -> iterate optimization
- - condition B -> target met -> continue.
- 5. Summarize gains, tradeoffs, and risks.
- 6. Emit performance optimization report.
-
- ## G. Output Schema
-
- ```json
- {
- "perf_run_id": "string",
- "metrics": ["array","of","strings"],
- "impact": "low | medium | high",
- "regression": "string | null"
- }
- ```
-
- ## H. Output Target
- - Default delivery: stdout
- - Override flag: --output=<target>
-
- ## I. Stop Conditions <- abort with error message, never emit partial output
- - baseline cannot be measured reliably
- - measurement method is non-deterministic
-
- ## J. Safety Constraints
- - Hard block: do not trade correctness/security for performance gains
- - Warn only: warn when gains are within noise threshold
+ # /perf
+
+ ## A. Intent
+ Define and execute deterministic performance optimization workflow against known baselines.
+
+ ## B. When to Use
+ - Use when improving latency, throughput, or resource efficiency.
+ - Do not use for one-off smoke checks; use /perf-scan for quick regression comparison.
+
+ ## C. Context Assumptions
+ - Baseline metrics exist or can be captured.
+ - Performance target is defined.
+ - Measurement method is repeatable.
+
+ ## D. Required Inputs
+ | Input | Type | Example |
+ |---------------------|------------|----------------------------------|
+ | `performance_goal` | string | "p95 latency under 200ms" |
+ | `baseline_metrics` | string[] | ["p95=260ms", "cpu=70%"] |
+ | `benchmark_artifact` | artifact | profiling report or benchmark output |
+
+ ## E. Pre-Execution Guards <- fail fast, check ALL before running
+ - [ ] goal is measurable
+ - [ ] baseline metric set is present
+ - [ ] benchmark method is consistent
+
+ ## F. Execution Flow
+ 1. Capture or validate baseline metrics.
+ 2. Apply targeted optimization changes.
+ 3. Measure post-change metrics.
+ 4. Decision point ->
+ - condition A -> target unmet -> iterate optimization
+ - condition B -> target met -> continue.
+ 5. Summarize gains, tradeoffs, and risks.
+ 6. Emit performance optimization report.
+
+ ## G. Output Schema
+
+ ```json
+ {
+ "perf_run_id": "string",
+ "metrics": ["array","of","strings"],
+ "impact": "low | medium | high",
+ "regression": "string | null"
+ }
+ ```
+
+ ## H. Output Target
+ - Default delivery: stdout
+ - Override flag: --output=<target>
+
+ ## I. Stop Conditions <- abort with error message, never emit partial output
+ - baseline cannot be measured reliably
+ - measurement method is non-deterministic
+
+ ## J. Safety Constraints
+ - Hard block: do not trade correctness/security for performance gains
+ - Warn only: warn when gains are within noise threshold
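The /perf template's warn-only constraint (flag gains that fall inside the noise threshold rather than claiming an improvement) can be sketched as follows; the function name, the millisecond units, and the `noise_ms` default are illustrative assumptions, not part of the package:

```python
def classify_gain(baseline_ms, measured_ms, noise_ms=10.0):
    # Compare a post-change latency against the baseline. Deltas inside
    # the noise threshold are warnings, not improvements or regressions.
    delta = baseline_ms - measured_ms
    if abs(delta) <= noise_ms:
        return "warn: gain within noise threshold"
    return "improvement" if delta > 0 else "regression"
```

With the example figures from section D (`p95=260ms` baseline, `p95 latency under 200ms` goal), a measured 190 ms run classifies as an improvement, while a 255 ms run only warrants a warning.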
@@ -1,58 +1,58 @@
- # /plan
-
- ## A. Intent
- Build a deterministic implementation plan with scoped phases and acceptance checks.
-
- ## B. When to Use
- - Use when a feature or change request is approved for planning before coding starts.
- - Do not use for post-incident debugging; use /debug-track instead.
-
- ## C. Context Assumptions
- - Problem statement and objective are available.
- - Primary stakeholders and delivery window are known.
- - Scope boundaries can be explicitly defined.
-
- ## D. Required Inputs
- | Input | Type | Example |
- |---------------------|------------|----------------------------------|
- | `objective` | string | "Ship onboarding v2" |
- | `constraints` | string[] | ["2-week deadline", "no schema rewrite"] |
- | `references` | artifact | PRD link, issue URL, screenshot |
-
- ## E. Pre-Execution Guards <- fail fast, check ALL before running
- - [ ] objective is non-empty and testable
- - [ ] constraints are explicit and non-contradictory
- - [ ] required references are accessible
-
- ## F. Execution Flow
- 1. Collect requirements and constraints.
- 2. Split work into ordered phases and milestones.
- 3. Attach measurable acceptance criteria per phase.
- 4. Decision point ->
- - condition A -> phase risk > threshold -> add mitigation gate
- - condition B -> otherwise -> continue with baseline plan.
- 5. Assemble plan artifacts and dependency map.
- 6. Emit final plan package.
-
- ## G. Output Schema
-
- ```json
- {
- "plan_id": "string",
- "phases": ["array","of","strings"],
- "risk_level": "low | medium | high",
- "notes": "string | null"
- }
- ```
-
- ## H. Output Target
- - Default delivery: stdout
- - Override flag: --output=<target>
-
- ## I. Stop Conditions <- abort with error message, never emit partial output
- - any guard in section E fails
- - acceptance criteria cannot be made measurable
-
- ## J. Safety Constraints
- - Hard block: no hidden scope expansion beyond declared boundaries
- - Warn only: allow proceed with warning when estimate confidence is low
+ # /plan
+
+ ## A. Intent
+ Build a deterministic implementation plan with scoped phases and acceptance checks.
+
+ ## B. When to Use
+ - Use when a feature or change request is approved for planning before coding starts.
+ - Do not use for post-incident debugging; use /debug-track instead.
+
+ ## C. Context Assumptions
+ - Problem statement and objective are available.
+ - Primary stakeholders and delivery window are known.
+ - Scope boundaries can be explicitly defined.
+
+ ## D. Required Inputs
+ | Input | Type | Example |
+ |---------------------|------------|----------------------------------|
+ | `objective` | string | "Ship onboarding v2" |
+ | `constraints` | string[] | ["2-week deadline", "no schema rewrite"] |
+ | `references` | artifact | PRD link, issue URL, screenshot |
+
+ ## E. Pre-Execution Guards <- fail fast, check ALL before running
+ - [ ] objective is non-empty and testable
+ - [ ] constraints are explicit and non-contradictory
+ - [ ] required references are accessible
+
+ ## F. Execution Flow
+ 1. Collect requirements and constraints.
+ 2. Split work into ordered phases and milestones.
+ 3. Attach measurable acceptance criteria per phase.
+ 4. Decision point ->
+ - condition A -> phase risk > threshold -> add mitigation gate
+ - condition B -> otherwise -> continue with baseline plan.
+ 5. Assemble plan artifacts and dependency map.
+ 6. Emit final plan package.
+
+ ## G. Output Schema
+
+ ```json
+ {
+ "plan_id": "string",
+ "phases": ["array","of","strings"],
+ "risk_level": "low | medium | high",
+ "notes": "string | null"
+ }
+ ```
+
+ ## H. Output Target
+ - Default delivery: stdout
+ - Override flag: --output=<target>
+
+ ## I. Stop Conditions <- abort with error message, never emit partial output
+ - any guard in section E fails
+ - acceptance criteria cannot be made measurable
+
+ ## J. Safety Constraints
+ - Hard block: no hidden scope expansion beyond declared boundaries
+ - Warn only: allow proceed with warning when estimate confidence is low
@@ -1,58 +1,58 @@
- # /pr
-
- ## A. Intent
- Prepare a deterministic pull request package with implementation and validation evidence.
-
- ## B. When to Use
- - Use after code changes and validation are complete and review package is needed.
- - Do not use when critical findings are still unresolved.
-
- ## C. Context Assumptions
- - Change set exists.
- - Validation evidence is available.
- - Linked issue/task context is known.
-
- ## D. Required Inputs
- | Input | Type | Example |
- |---------------------|------------|----------------------------------|
- | `change_summary` | string | "add retry policy to webhook worker" |
- | `linked_items` | string[] | ["ISSUE-18", "TASK-42"] |
- | `validation_evidence` | artifact | test report, benchmark output, screenshot |
-
- ## E. Pre-Execution Guards <- fail fast, check ALL before running
- - [ ] change summary is complete
- - [ ] linked items are resolvable
- - [ ] validation evidence is present
-
- ## F. Execution Flow
- 1. Collect changed files and linked references.
- 2. Summarize intent, impact, and scope.
- 3. Attach validation and risk evidence.
- 4. Decision point ->
- - condition A -> critical blocker open -> abort PR package
- - condition B -> no blocker -> continue.
- 5. Build reviewer checklist and rollout notes.
- 6. Emit PR payload.
-
- ## G. Output Schema
-
- ```json
- {
- "title": "string",
- "files_changed": ["array","of","strings"],
- "risk_assessment": "low | medium | high",
- "blockers": "string | null"
- }
- ```
-
- ## H. Output Target
- - Default delivery: stdout
- - Override flag: --output=<target>
-
- ## I. Stop Conditions <- abort with error message, never emit partial output
- - validation evidence missing for critical changes
- - open critical findings remain unresolved
-
- ## J. Safety Constraints
- - Hard block: hard block if critical issues remain
- - Warn only: warn when non-critical follow-up items are deferred
+ # /pr
+
+ ## A. Intent
+ Prepare a deterministic pull request package with implementation and validation evidence.
+
+ ## B. When to Use
+ - Use after code changes and validation are complete and review package is needed.
+ - Do not use when critical findings are still unresolved.
+
+ ## C. Context Assumptions
+ - Change set exists.
+ - Validation evidence is available.
+ - Linked issue/task context is known.
+
+ ## D. Required Inputs
+ | Input | Type | Example |
+ |---------------------|------------|----------------------------------|
+ | `change_summary` | string | "add retry policy to webhook worker" |
+ | `linked_items` | string[] | ["ISSUE-18", "TASK-42"] |
+ | `validation_evidence` | artifact | test report, benchmark output, screenshot |
+
+ ## E. Pre-Execution Guards <- fail fast, check ALL before running
+ - [ ] change summary is complete
+ - [ ] linked items are resolvable
+ - [ ] validation evidence is present
+
+ ## F. Execution Flow
+ 1. Collect changed files and linked references.
+ 2. Summarize intent, impact, and scope.
+ 3. Attach validation and risk evidence.
+ 4. Decision point ->
+ - condition A -> critical blocker open -> abort PR package
+ - condition B -> no blocker -> continue.
+ 5. Build reviewer checklist and rollout notes.
+ 6. Emit PR payload.
+
+ ## G. Output Schema
+
+ ```json
+ {
+ "title": "string",
+ "files_changed": ["array","of","strings"],
+ "risk_assessment": "low | medium | high",
+ "blockers": "string | null"
+ }
+ ```
+
+ ## H. Output Target
+ - Default delivery: stdout
+ - Override flag: --output=<target>
+
+ ## I. Stop Conditions <- abort with error message, never emit partial output
+ - validation evidence missing for critical changes
+ - open critical findings remain unresolved
+
+ ## J. Safety Constraints
+ - Hard block: hard block if critical issues remain
+ - Warn only: warn when non-critical follow-up items are deferred
@@ -1,58 +1,58 @@
- # /problem-map
-
- ## A. Intent
- Frame the real user problem and guarantee a clear problem statement before planning.
-
- ## B. When to Use
- - Use at the start of a feature cycle when pain points are unclear or broad.
- - Do not use for implementation details or code-level debugging.
-
- ## C. Context Assumptions
- - User objective is available.
- - Stakeholder context can be collected.
- - Outcome can be stated as a measurable problem.
-
- ## D. Required Inputs
- | Input | Type | Example |
- |---------------------|------------|----------------------------------|
- | `user_problem` | string | "onboarding drop-off at step 2" |
- | `signals` | string[] | ["support tickets", "analytics"] |
- | `evidence_artifact` | artifact | funnel screenshot, issue links |
-
- ## E. Pre-Execution Guards <- fail fast, check ALL before running
- - [ ] problem statement is concrete
- - [ ] signals support the stated pain
- - [ ] scope remains problem-focused
-
- ## F. Execution Flow
- 1. Collect user pain signals and context.
- 2. Synthesize candidate problem statements.
- 3. Validate statement against evidence.
- 4. Decision point ->
- - condition A -> weak evidence -> request stronger signals
- - condition B -> strong evidence -> continue.
- 5. Produce framed problem and success criteria.
- 6. Emit problem map package.
-
- ## G. Output Schema
-
- ```json
- {
- "problem_id": "string",
- "core_pains": ["array","of","strings"],
- "urgency": "low | medium | high",
- "unknowns": "string | null"
- }
- ```
-
- ## H. Output Target
- - Default delivery: stdout
- - Override flag: --output=<target>
-
- ## I. Stop Conditions <- abort with error message, never emit partial output
- - problem statement remains vague
- - evidence contradicts proposed framing
-
- ## J. Safety Constraints
- - Hard block: hard block on fabricated assumptions presented as facts
- - Warn only: warn when evidence quality is limited
+ # /problem-map
+
+ ## A. Intent
+ Frame the real user problem and guarantee a clear problem statement before planning.
+
+ ## B. When to Use
+ - Use at the start of a feature cycle when pain points are unclear or broad.
+ - Do not use for implementation details or code-level debugging.
+
+ ## C. Context Assumptions
+ - User objective is available.
+ - Stakeholder context can be collected.
+ - Outcome can be stated as a measurable problem.
+
+ ## D. Required Inputs
+ | Input | Type | Example |
+ |---------------------|------------|----------------------------------|
+ | `user_problem` | string | "onboarding drop-off at step 2" |
+ | `signals` | string[] | ["support tickets", "analytics"] |
+ | `evidence_artifact` | artifact | funnel screenshot, issue links |
+
+ ## E. Pre-Execution Guards <- fail fast, check ALL before running
+ - [ ] problem statement is concrete
+ - [ ] signals support the stated pain
+ - [ ] scope remains problem-focused
+
+ ## F. Execution Flow
+ 1. Collect user pain signals and context.
+ 2. Synthesize candidate problem statements.
+ 3. Validate statement against evidence.
+ 4. Decision point ->
+ - condition A -> weak evidence -> request stronger signals
+ - condition B -> strong evidence -> continue.
+ 5. Produce framed problem and success criteria.
+ 6. Emit problem map package.
+
+ ## G. Output Schema
+
+ ```json
+ {
+ "problem_id": "string",
+ "core_pains": ["array","of","strings"],
+ "urgency": "low | medium | high",
+ "unknowns": "string | null"
+ }
+ ```
+
+ ## H. Output Target
+ - Default delivery: stdout
+ - Override flag: --output=<target>
+
+ ## I. Stop Conditions <- abort with error message, never emit partial output
+ - problem statement remains vague
+ - evidence contradicts proposed framing
+
+ ## J. Safety Constraints
+ - Hard block: hard block on fabricated assumptions presented as facts
+ - Warn only: warn when evidence quality is limited
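All five templates share one output-schema shape in section G: a string identifier, an array of strings, a `low | medium | high` level, and a `string | null` field (only the key names differ per command). A sketch of a validator for that shape, in Python for illustration; the function and variable names are ours, not part of the package:

```python
def validate_output(payload, id_key, list_key, level_key, nullable_key):
    # Check a command's JSON output against the shared section-G shape:
    # string id, array of strings, low/medium/high level, string-or-null.
    if not isinstance(payload.get(id_key), str):
        return False
    items = payload.get(list_key)
    if not isinstance(items, list) or not all(isinstance(i, str) for i in items):
        return False
    if payload.get(level_key) not in ("low", "medium", "high"):
        return False
    value = payload.get(nullable_key)
    return value is None or isinstance(value, str)

# Example: a /plan payload (values are illustrative)
plan = {"plan_id": "p-1", "phases": ["phase 1", "phase 2"],
        "risk_level": "low", "notes": None}
```

The same call validates /pr output with `("title", "files_changed", "risk_assessment", "blockers")`, and likewise for the other three commands.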