jumpstart-mode 1.0.1 → 1.0.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.github/agents/jumpstart-analyst.agent.md +18 -0
- package/.github/agents/jumpstart-architect.agent.md +18 -0
- package/.github/agents/jumpstart-challenger.agent.md +18 -0
- package/.github/agents/jumpstart-developer.agent.md +18 -0
- package/.github/agents/jumpstart-pm.agent.md +18 -0
- package/.jumpstart/agents/analyst.md +45 -0
- package/.jumpstart/agents/architect.md +45 -0
- package/.jumpstart/agents/challenger.md +45 -0
- package/.jumpstart/agents/developer.md +45 -0
- package/.jumpstart/agents/pm.md +45 -0
- package/package.json +1 -1
package/.github/agents/jumpstart-analyst.agent.md
CHANGED

@@ -41,6 +41,24 @@ You have access to VS Code Chat native tools:
 - **ask_questions**: Use for persona validation, journey verification, scope discussions, and competitive analysis feedback.
 - **manage_todo_list**: Track progress through the 8-step analysis protocol.
 
+**Tool Invocation:**
+```json
+{
+  "questions": [
+    {
+      "header": "key", // max 12 chars, unique
+      "question": "Question text?",
+      "options": [ // 0 for free text, 2+ for choices (never 1)
+        { "label": "Choice 1", "recommended": true },
+        { "label": "Choice 2" }
+      ]
+    }
+  ]
+}
+```
+
+Response: `{ "answers": { "key": { "selected": ["Choice 1"], "freeText": null, "skipped": false } } }`
+
 ## Protocol
 
 Follow the full 8-step Analysis Protocol in your agent file. Present the Product Brief and its insights file for explicit approval when complete. Both artifacts will be passed to Phase 2.
package/.github/agents/jumpstart-architect.agent.md
CHANGED

@@ -44,6 +44,24 @@ You have access to VS Code Chat native tools:
 - **ask_questions**: Use for technology stack decisions with multiple valid options, deployment strategy selection, and architectural trade-off discussions.
 - **manage_todo_list**: Track progress through the 9-step solutioning protocol and ADR generation.
 
+**Tool Invocation:**
+```json
+{
+  "questions": [
+    {
+      "header": "key", // max 12 chars, unique
+      "question": "Question text?",
+      "options": [ // 0 for free text, 2+ for choices (never 1)
+        { "label": "Choice 1", "recommended": true },
+        { "label": "Choice 2" }
+      ]
+    }
+  ]
+}
+```
+
+Response: `{ "answers": { "key": { "selected": ["Choice 1"], "freeText": null, "skipped": false } } }`
+
 ## Protocol
 
 Follow the full 9-step Solutioning Protocol in your agent file. Present the Architecture Document, Implementation Plan, and insights file for explicit approval when complete. All artifacts including ADRs and insights will be passed to Phase 4.
package/.github/agents/jumpstart-challenger.agent.md
CHANGED

@@ -36,6 +36,24 @@ You have access to two native VS Code Chat tools when working through the protoc
 
 These are optional but recommended for a better user experience.
 
+**Tool Invocation:**
+```json
+{
+  "questions": [
+    {
+      "header": "key", // max 12 chars, unique
+      "question": "Question text?",
+      "options": [ // 0 for free text, 2+ for choices (never 1)
+        { "label": "Choice 1", "recommended": true },
+        { "label": "Choice 2" }
+      ]
+    }
+  ]
+}
+```
+
+Response: `{ "answers": { "key": { "selected": ["Choice 1"], "freeText": null, "skipped": false } } }`
+
 ## Starting the Conversation
 
 If the human provided an initial idea with their message, use it as the starting point for Step 1 of the Elicitation Protocol in your agent file. If not, ask them to describe their idea, problem, or opportunity.
package/.github/agents/jumpstart-developer.agent.md
CHANGED

@@ -41,6 +41,24 @@ You have access to VS Code Chat native tools:
 - **ask_questions**: Use for minor deviation decisions, library selection, test strategy choices, and unanticipated edge case handling.
 - **manage_todo_list**: Track implementation progress task-by-task and milestone-by-milestone. Essential for Phase 4 transparency.
 
+**Tool Invocation:**
+```json
+{
+  "questions": [
+    {
+      "header": "key", // max 12 chars, unique
+      "question": "Question text?",
+      "options": [ // 0 for free text, 2+ for choices (never 1)
+        { "label": "Choice 1", "recommended": true },
+        { "label": "Choice 2" }
+      ]
+    }
+  ]
+}
+```
+
+Response: `{ "answers": { "key": { "selected": ["Choice 1"], "freeText": null, "skipped": false } } }`
+
 ## Deviation Rules
 
 - **Minor deviations** (utility functions, import paths, implied error handling): handle autonomously, document as a note on the task.
package/.github/agents/jumpstart-pm.agent.md
CHANGED

@@ -41,6 +41,24 @@ You have access to VS Code Chat native tools:
 - **ask_questions**: Use for epic validation, story granularity decisions, prioritization discussions, and acceptance criteria clarification.
 - **manage_todo_list**: Track progress through the 9-step planning protocol. Particularly useful when decomposing many stories.
 
+**Tool Invocation:**
+```json
+{
+  "questions": [
+    {
+      "header": "key", // max 12 chars, unique
+      "question": "Question text?",
+      "options": [ // 0 for free text, 2+ for choices (never 1)
+        { "label": "Choice 1", "recommended": true },
+        { "label": "Choice 2" }
+      ]
+    }
+  ]
+}
+```
+
+Response: `{ "answers": { "key": { "selected": ["Choice 1"], "freeText": null, "skipped": false } } }`
+
 ## Protocol
 
 Follow the full 9-step Planning Protocol in your agent file. Present the PRD and its insights file for explicit approval when complete. Both artifacts plus all prior insights will be passed to Phase 3.
package/.jumpstart/agents/analyst.md
CHANGED

@@ -61,6 +61,51 @@ Use this tool to gather structured feedback and make collaborative choices durin
 - Step 6 (Scope Recommendation): When discussing Must Have vs. Should Have items that could go either way
 - Any time you need user input to resolve ambiguity or validate findings
 
+**How to invoke ask_questions:**
+
+The tool accepts a `questions` array. Each question requires:
+- `header` (string, required): Unique identifier, max 12 chars, used as key in response
+- `question` (string, required): The question text to display
+- `multiSelect` (boolean, optional): Allow multiple selections (default: false)
+- `options` (array, optional): 0 options = free text input, 2+ options = choice menu
+  - Each option has: `label` (required), `description` (optional), `recommended` (optional)
+- `allowFreeformInput` (boolean, optional): Allow custom text alongside options (default: false)
+
+**Validation rules:**
+- ❌ Single-option questions are INVALID (must be 0 for free text or 2+ for choices)
+- ✓ Maximum 4 questions per invocation
+- ✓ Maximum 6 options per question
+- ✓ Headers must be unique within the questions array
+
+**Tool invocation format:**
+```json
+{
+  "questions": [
+    {
+      "header": "choice",
+      "question": "Which approach do you prefer?",
+      "options": [
+        { "label": "Option A", "description": "Brief explanation", "recommended": true },
+        { "label": "Option B", "description": "Alternative approach" }
+      ]
+    }
+  ]
+}
+```
+
+**Response format:**
+```json
+{
+  "answers": {
+    "choice": {
+      "selected": ["Option A"],
+      "freeText": null,
+      "skipped": false
+    }
+  }
+}
+```
+
 **Example usage:**
 ```
 When presenting 3-4 personas, use ask_questions to let the human select which ones feel accurate and flag any that need revision.
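The validation rules in the hunk above can be expressed as a small client-side check. This is an illustrative sketch, not part of the package: only the field names (`questions`, `header`, `options`, `label`) come from the diff, and the function name and error strings are hypothetical.

```python
# Hypothetical helper: checks an ask_questions payload against the
# rules listed in the diff (max 4 questions, max 6 options, unique
# headers <= 12 chars, never exactly 1 option).

def validate_questions(payload: dict) -> list[str]:
    """Return a list of rule violations (empty list means valid)."""
    errors = []
    questions = payload.get("questions", [])
    if len(questions) > 4:
        errors.append("maximum 4 questions per invocation")
    headers = [q.get("header", "") for q in questions]
    if len(headers) != len(set(headers)):
        errors.append("headers must be unique")
    for q in questions:
        header = q.get("header", "")
        if len(header) > 12:
            errors.append(f"header '{header}' exceeds 12 chars")
        options = q.get("options", [])
        if len(options) == 1:
            errors.append(f"question '{header}' has a single option (use 0 or 2+)")
        if len(options) > 6:
            errors.append(f"question '{header}' exceeds 6 options")
    return errors
```

A payload matching the invocation example above (one question, two options, unique short header) passes all four checks, while a question with exactly one option is flagged.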
package/.jumpstart/agents/architect.md
CHANGED

@@ -66,6 +66,51 @@ Use this tool when architectural decisions require human input or when multiple
 - Step 6 (ADRs): When a decision has meaningful trade-offs and you want to confirm the human agrees with your assessment
 - Deployment strategy: Cloud provider selection, hosting approach, CI/CD tooling
 
+**How to invoke ask_questions:**
+
+The tool accepts a `questions` array. Each question requires:
+- `header` (string, required): Unique identifier, max 12 chars, used as key in response
+- `question` (string, required): The question text to display
+- `multiSelect` (boolean, optional): Allow multiple selections (default: false)
+- `options` (array, optional): 0 options = free text input, 2+ options = choice menu
+  - Each option has: `label` (required), `description` (optional), `recommended` (optional)
+- `allowFreeformInput` (boolean, optional): Allow custom text alongside options (default: false)
+
+**Validation rules:**
+- ❌ Single-option questions are INVALID (must be 0 for free text or 2+ for choices)
+- ✓ Maximum 4 questions per invocation
+- ✓ Maximum 6 options per question
+- ✓ Headers must be unique within the questions array
+
+**Tool invocation format:**
+```json
+{
+  "questions": [
+    {
+      "header": "choice",
+      "question": "Which approach do you prefer?",
+      "options": [
+        { "label": "Option A", "description": "Brief explanation", "recommended": true },
+        { "label": "Option B", "description": "Alternative approach" }
+      ]
+    }
+  ]
+}
+```
+
+**Response format:**
+```json
+{
+  "answers": {
+    "choice": {
+      "selected": ["Option A"],
+      "freeText": null,
+      "skipped": false
+    }
+  }
+}
+```
+
 **Example usage:**
 ```
 When choosing between serverless and container-based deployment, present both options with pros/cons
package/.jumpstart/agents/challenger.md
CHANGED

@@ -53,6 +53,51 @@ Use this tool to gather clarifications and user choices during the elicitation p
 - Testing the human's knowledge (no recommended options for quiz-like questions)
 - Forcing choices when open discussion would be better
 
+**How to invoke ask_questions:**
+
+The tool accepts a `questions` array. Each question requires:
+- `header` (string, required): Unique identifier, max 12 chars, used as key in response
+- `question` (string, required): The question text to display
+- `multiSelect` (boolean, optional): Allow multiple selections (default: false)
+- `options` (array, optional): 0 options = free text input, 2+ options = choice menu
+  - Each option has: `label` (required), `description` (optional), `recommended` (optional)
+- `allowFreeformInput` (boolean, optional): Allow custom text alongside options (default: false)
+
+**Validation rules:**
+- ❌ Single-option questions are INVALID (must be 0 for free text or 2+ for choices)
+- ✓ Maximum 4 questions per invocation
+- ✓ Maximum 6 options per question
+- ✓ Headers must be unique within the questions array
+
+**Tool invocation format:**
+```json
+{
+  "questions": [
+    {
+      "header": "choice",
+      "question": "Which approach do you prefer?",
+      "options": [
+        { "label": "Option A", "description": "Brief explanation", "recommended": true },
+        { "label": "Option B", "description": "Alternative approach" }
+      ]
+    }
+  ]
+}
+```
+
+**Response format:**
+```json
+{
+  "answers": {
+    "choice": {
+      "selected": ["Option A"],
+      "freeText": null,
+      "skipped": false
+    }
+  }
+}
+```
+
 **Example usage pattern:**
 ```
 When presenting 2-3 reframed problem statements, use ask_questions to let the human select their preferred reframe or indicate they want to write their own.
package/.jumpstart/agents/developer.md
CHANGED

@@ -66,6 +66,51 @@ Use this tool when you encounter situations requiring human guidance during impl
 - **Test strategy:** When acceptance criteria could be verified with different test approaches
 - **Error handling:** When an error scenario wasn't anticipated in acceptance criteria and you need guidance on desired behavior
 
+**How to invoke ask_questions:**
+
+The tool accepts a `questions` array. Each question requires:
+- `header` (string, required): Unique identifier, max 12 chars, used as key in response
+- `question` (string, required): The question text to display
+- `multiSelect` (boolean, optional): Allow multiple selections (default: false)
+- `options` (array, optional): 0 options = free text input, 2+ options = choice menu
+  - Each option has: `label` (required), `description` (optional), `recommended` (optional)
+- `allowFreeformInput` (boolean, optional): Allow custom text alongside options (default: false)
+
+**Validation rules:**
+- ❌ Single-option questions are INVALID (must be 0 for free text or 2+ for choices)
+- ✓ Maximum 4 questions per invocation
+- ✓ Maximum 6 options per question
+- ✓ Headers must be unique within the questions array
+
+**Tool invocation format:**
+```json
+{
+  "questions": [
+    {
+      "header": "choice",
+      "question": "Which approach do you prefer?",
+      "options": [
+        { "label": "Option A", "description": "Brief explanation", "recommended": true },
+        { "label": "Option B", "description": "Alternative approach" }
+      ]
+    }
+  ]
+}
+```
+
+**Response format:**
+```json
+{
+  "answers": {
+    "choice": {
+      "selected": ["Option A"],
+      "freeText": null,
+      "skipped": false
+    }
+  }
+}
+```
+
 **Example usage:**
 ```
 When you encounter an edge case not covered in acceptance criteria ("What should happen when a user
package/.jumpstart/agents/pm.md
CHANGED
@@ -64,6 +64,51 @@ Use this tool for collaborative prioritization and clarification of requirements
 - Prioritization decisions: When using RICE or ICE scoring, gather human input on scores
 - Any time you need to resolve a judgment call between two valid options
 
+**How to invoke ask_questions:**
+
+The tool accepts a `questions` array. Each question requires:
+- `header` (string, required): Unique identifier, max 12 chars, used as key in response
+- `question` (string, required): The question text to display
+- `multiSelect` (boolean, optional): Allow multiple selections (default: false)
+- `options` (array, optional): 0 options = free text input, 2+ options = choice menu
+  - Each option has: `label` (required), `description` (optional), `recommended` (optional)
+- `allowFreeformInput` (boolean, optional): Allow custom text alongside options (default: false)
+
+**Validation rules:**
+- ❌ Single-option questions are INVALID (must be 0 for free text or 2+ for choices)
+- ✓ Maximum 4 questions per invocation
+- ✓ Maximum 6 options per question
+- ✓ Headers must be unique within the questions array
+
+**Tool invocation format:**
+```json
+{
+  "questions": [
+    {
+      "header": "choice",
+      "question": "Which approach do you prefer?",
+      "options": [
+        { "label": "Option A", "description": "Brief explanation", "recommended": true },
+        { "label": "Option B", "description": "Alternative approach" }
+      ]
+    }
+  ]
+}
+```
+
+**Response format:**
+```json
+{
+  "answers": {
+    "choice": {
+      "selected": ["Option A"],
+      "freeText": null,
+      "skipped": false
+    }
+  }
+}
+```
+
 **Example usage:**
 ```
 When a story feels large but not clearly splittable, present the options:
package/package.json
CHANGED