@schilling.mark.a/software-methodology 1.0.0 → 1.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.github/copilot-instructions.md +159 -0
- package/README.md +172 -6
- package/docs/story-map/backbone.md +141 -0
- package/docs/story-map/releases/r1-walking-skeleton.md +152 -0
- package/docs/story-map/user-tasks/ACT-001-task-001.md +45 -0
- package/docs/story-map/user-tasks/ACT-001-task-002.md +48 -0
- package/docs/story-map/user-tasks/ACT-002-task-001.md +47 -0
- package/docs/story-map/user-tasks/ACT-002-task-002.md +47 -0
- package/docs/story-map/user-tasks/ACT-002-task-003.md +46 -0
- package/docs/story-map/user-tasks/ACT-003-task-001.md +47 -0
- package/docs/story-map/user-tasks/ACT-003-task-002.md +46 -0
- package/docs/story-map/user-tasks/ACT-003-task-003.md +49 -0
- package/docs/story-map/user-tasks/ACT-003-task-004.md +47 -0
- package/docs/story-map/user-tasks/ACT-004-task-001.md +48 -0
- package/docs/story-map/user-tasks/ACT-004-task-002.md +49 -0
- package/docs/story-map/user-tasks/ACT-004-task-003.md +47 -0
- package/docs/story-map/user-tasks/ACT-005-task-001.md +47 -0
- package/docs/story-map/user-tasks/ACT-005-task-002.md +48 -0
- package/docs/story-map/user-tasks/ACT-005-task-003.md +48 -0
- package/docs/story-map/user-tasks/ACT-005-task-004.md +48 -0
- package/docs/story-map/user-tasks/ACT-006-task-001.md +47 -0
- package/docs/story-map/user-tasks/ACT-006-task-002.md +46 -0
- package/docs/story-map/user-tasks/ACT-006-task-003.md +47 -0
- package/docs/story-map/user-tasks/ACT-006-task-004.md +46 -0
- package/docs/story-map/user-tasks/ACT-007-task-001.md +48 -0
- package/docs/story-map/user-tasks/ACT-007-task-002.md +47 -0
- package/docs/story-map/user-tasks/ACT-007-task-003.md +47 -0
- package/docs/story-map/user-tasks/ACT-007-task-004.md +48 -0
- package/docs/story-map/user-tasks/ACT-008-task-001.md +48 -0
- package/docs/story-map/user-tasks/ACT-008-task-002.md +48 -0
- package/docs/story-map/user-tasks/ACT-008-task-003.md +47 -0
- package/docs/story-map/user-tasks/ACT-008-task-004.md +48 -0
- package/docs/story-map/walking-skeleton.md +95 -0
- package/docs/value-proposition-canvas.md +171 -0
- package/features/mcp-server/query-vpc.feature +48 -0
- package/features/mcp-server/read-reference.feature +41 -0
- package/features/mcp-server/read-skill.feature +33 -0
- package/features/mcp-server/search-guidance.feature +42 -0
- package/features/mcp-server/suggest-next-step.feature +61 -0
- package/features/mcp-server/validate-gherkin.feature +54 -0
- package/mcp-server/QUICKSTART.md +172 -0
- package/mcp-server/README.md +171 -0
- package/mcp-server/dist/index.d.ts +12 -0
- package/mcp-server/dist/index.js +296 -0
- package/mcp-server/dist/repository.d.ts +59 -0
- package/mcp-server/dist/repository.js +211 -0
- package/mcp-server/dist/tools/gherkin-validator.d.ts +16 -0
- package/mcp-server/dist/tools/gherkin-validator.js +152 -0
- package/mcp-server/dist/tools/guidance-searcher.d.ts +11 -0
- package/mcp-server/dist/tools/guidance-searcher.js +34 -0
- package/mcp-server/dist/tools/next-step-suggester.d.ts +16 -0
- package/mcp-server/dist/tools/next-step-suggester.js +210 -0
- package/mcp-server/dist/tools/reference-reader.d.ts +17 -0
- package/mcp-server/dist/tools/reference-reader.js +57 -0
- package/mcp-server/dist/tools/skill-reader.d.ts +17 -0
- package/mcp-server/dist/tools/skill-reader.js +38 -0
- package/mcp-server/dist/tools/vpc-querier.d.ts +37 -0
- package/mcp-server/dist/tools/vpc-querier.js +158 -0
- package/mcp-server/package.json +42 -0
- package/mcp-server/src/index.ts +331 -0
- package/mcp-server/src/repository.ts +254 -0
- package/mcp-server/src/tools/gherkin-validator.ts +206 -0
- package/mcp-server/src/tools/guidance-searcher.ts +42 -0
- package/mcp-server/src/tools/next-step-suggester.ts +243 -0
- package/mcp-server/src/tools/reference-reader.ts +71 -0
- package/mcp-server/src/tools/skill-reader.ts +47 -0
- package/mcp-server/src/tools/vpc-querier.ts +201 -0
- package/mcp-server/tsconfig.json +17 -0
- package/package.json +8 -2
@@ -0,0 +1,48 @@
# ACT-005-task-003: Select UI Components

## Activity
ACT-005: Design User Experience

## User Task
As a UI designer or developer
I want to choose appropriate UI components for each interaction pattern
So that the interface is consistent and uses familiar patterns

## Value Proposition Link
- Pain relieved: Inconsistent UI patterns confuse users; Developers build custom components unnecessarily
- Gain created: Faster implementation; Consistent user experience; Accessible by default

## Acceptance Criteria
- Component selection documented for key screens
- Components chosen from design system (reference: ui-design-system)
- Each component mapped to the interaction pattern it supports
- Accessibility considerations documented per component
- Components support scenarios from Gherkin feature files
- Selection documented in `/docs/component-selection.md`
- Follows approach from `ui-design-workflow/references/component-selection.md`

## Priority
- **MoSCoW:** Should Have
- **Business Value:** Medium
- **Complexity:** Low

## Release
Release 3 - UI/UX Design

## Dependencies
- ACT-005-task-002: Create screen flows (need screens to select components for)
- ui-design-system skill (reference for available components)

## Feature File
`/features/ui-design-workflow/component-selection.feature`

## Status
- [x] Acceptance criteria defined
- [ ] Example mapping complete
- [ ] Scenarios written
- [ ] Acceptance tests created
- [ ] Implementation complete
- [ ] Deployed

## Notes
Component selection bridges design intent and implementation. Using design system components ensures consistency and accessibility.

@@ -0,0 +1,48 @@
# ACT-005-task-004: Define Acceptance Targets

## Activity
ACT-005: Design User Experience

## User Task
As a UX designer or product owner
I want to define measurable usability targets for key interactions
So that we can verify the design meets usability standards

## Value Proposition Link
- Pain relieved: Features ship but are hard to use; No objective measure of design quality
- Gain created: Users can accomplish tasks efficiently; Design decisions based on data

## Acceptance Criteria
- Acceptance targets defined for 3-5 key user tasks
- Each target includes: task, success metric, target value, measurement method
- Targets are specific and measurable (not vague like "easy to use")
- Examples: "Create invoice in under 2 minutes", "Find payment status in 3 clicks or less"
- Targets trace to VPC gains (desired outcomes)
- Targets documented in `/docs/acceptance-targets.md`
- Follows format from `ui-design-workflow/references/acceptance-targets.md`

## Priority
- **MoSCoW:** Could Have
- **Business Value:** Medium
- **Complexity:** Medium

## Release
Release 3 - UI/UX Design

## Dependencies
- ACT-005-task-002: Create screen flows (need flows to measure)
- ACT-001-task-002: Create Value Proposition Canvas (source for desired gains)

## Feature File
`/features/ui-design-workflow/acceptance-targets.feature`

## Status
- [x] Acceptance criteria defined
- [ ] Example mapping complete
- [ ] Scenarios written
- [ ] Acceptance tests created
- [ ] Implementation complete
- [ ] Deployed

## Notes
Acceptance targets make usability measurable. They define "good enough" before implementation and can be tested post-launch.

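The target format listed in the acceptance criteria (task, success metric, target value, measurement method) can be sketched as a small data structure. This is a minimal illustration only; the interface and field names are assumptions, not the schema actually used by `/docs/acceptance-targets.md`:

```typescript
// Illustrative shape for one acceptance target (hypothetical names,
// not the package's actual schema).
interface AcceptanceTarget {
  task: string;              // user task being measured
  successMetric: string;     // what we measure
  targetValue: number;       // threshold that counts as "good enough"
  unit: string;              // seconds, clicks, percent...
  measurementMethod: string; // how the value is captured
}

const targets: AcceptanceTarget[] = [
  {
    task: "Create invoice",
    successMetric: "time to complete",
    targetValue: 120,
    unit: "seconds",
    measurementMethod: "usability test with 5 participants",
  },
  {
    task: "Find payment status",
    successMetric: "navigation depth",
    targetValue: 3,
    unit: "clicks",
    measurementMethod: "click-stream analytics",
  },
];

// A target is "specific and measurable" only if every field is filled in;
// "easy to use" would fail this check because it has no target value or unit.
function isMeasurable(t: AcceptanceTarget): boolean {
  return t.task.length > 0 && t.successMetric.length > 0 &&
    Number.isFinite(t.targetValue) && t.unit.length > 0 &&
    t.measurementMethod.length > 0;
}
```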
@@ -0,0 +1,47 @@
# ACT-006-task-001: Write Failing Acceptance Test (RED)

## Activity
ACT-006: Implement Features

## User Task
As a developer
I want to write an automated test that verifies a Gherkin scenario before implementing the feature
So that I have clear success criteria and catch regressions early

## Value Proposition Link
- Pain relieved: Writing tests after code feels like extra work and is often skipped
- Gain created: Have clear criteria for "done"; Spend less time debugging

## Acceptance Criteria
- Test written that automates one Gherkin scenario
- Test fails with a clear error message (RED phase)
- Test uses Given-When-Then structure matching the scenario
- Test setup includes necessary test data
- Test assertion checks observable behavior, not implementation
- Test follows project test strategy format
- Test can be run with a single command

## Priority
- **MoSCoW:** Must Have
- **Business Value:** High
- **Complexity:** Medium

## Release
Release 1 - Walking Skeleton

## Dependencies
- ACT-004-task-002: Write Gherkin scenarios (source for test specification)

## Feature File
`/features/atdd-workflow/red-phase.feature`

## Status
- [x] Acceptance criteria defined
- [ ] Example mapping complete
- [ ] Scenarios written
- [ ] Acceptance tests created
- [ ] Implementation complete
- [ ] Deployed

## Notes
The RED phase is critical. The test must fail for the right reason (feature not implemented), not the wrong reason (bad test setup). The walking skeleton version focuses on one happy-path scenario.

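The RED step above can be sketched in TypeScript without committing to any particular test framework. This is a minimal illustration, not this package's actual test harness; `createInvoice` is a hypothetical system under test that deliberately does not exist yet, so the test fails for the right reason:

```typescript
// Hypothetical system under test (illustrative name): not implemented
// yet, so the acceptance test below fails for the right reason (RED).
function createInvoice(customer: string, amountCents: number): { id: string } {
  throw new Error("createInvoice: not implemented");
}

// One Gherkin scenario automated with Given-When-Then structure.
function acceptanceTest(): { passed: boolean; message: string } {
  try {
    // Given a known customer (test data in the setup)
    const customer = "ACME Corp";
    // When they create an invoice for 100.00
    const invoice = createInvoice(customer, 10_000);
    // Then an invoice id is returned (observable behavior, not internals)
    if (!invoice.id) return { passed: false, message: "no invoice id returned" };
    return { passed: true, message: "ok" };
  } catch (e) {
    return { passed: false, message: (e as Error).message };
  }
}

const redResult = acceptanceTest();
// RED phase: the test fails, and the error message says exactly why.
console.log(redResult.passed ? "GREEN" : `RED: ${redResult.message}`);
```

The failure message names the unimplemented feature rather than a broken fixture, which is the "fails for the right reason" check from the Notes.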
@@ -0,0 +1,46 @@
# ACT-006-task-002: Make Test Pass with Minimum Code (GREEN)

## Activity
ACT-006: Implement Features

## User Task
As a developer
I want to write the simplest implementation that makes the failing test pass
So that I deliver working functionality quickly before optimizing

## Value Proposition Link
- Pain relieved: Code becomes messy over time without clear patterns
- Gain created: Ship smaller increments more frequently; Feel confident refactoring

## Acceptance Criteria
- Previously failing test now passes
- Implementation includes only the code necessary to pass the test
- No premature optimization or "future-proofing"
- Code follows basic language idioms and conventions
- Test runs green with no errors or warnings
- Implementation addresses the scenario's "So that" business value

## Priority
- **MoSCoW:** Must Have
- **Business Value:** High
- **Complexity:** Medium

## Release
Release 1 - Walking Skeleton

## Dependencies
- ACT-006-task-001: Write failing acceptance test (must have RED before GREEN)

## Feature File
`/features/atdd-workflow/green-phase.feature`

## Status
- [x] Acceptance criteria defined
- [ ] Example mapping complete
- [ ] Scenarios written
- [ ] Acceptance tests created
- [ ] Implementation complete
- [ ] Deployed

## Notes
The GREEN phase focuses on making it work, not making it beautiful. Resist the urge to refactor here; that's the next step (the REFACTOR phase). The walking skeleton version implements the simplest path only.

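Continuing the same hypothetical invoice example, the GREEN step is the smallest implementation that satisfies the scenario. This is a sketch under the assumption that the RED-phase test only checks that an invoice id is returned; no validation, persistence, or future-proofing yet:

```typescript
// GREEN phase: the simplest implementation that makes the previously
// failing acceptance test pass. Illustrative code, not this package's.
let nextId = 1;
function createInvoice(customer: string, amountCents: number): { id: string } {
  // Just enough to satisfy "Then an invoice id is returned".
  return { id: `INV-${nextId++}` };
}

// The acceptance test itself is unchanged from the RED phase.
const invoice = createInvoice("ACME Corp", 10_000);
const greenPassed = invoice.id.length > 0;
console.log(greenPassed ? "GREEN" : "RED");
```

Resisting the urge to add fields, validation, or a database here is the point: anything the test does not demand waits for a later scenario or the REFACTOR step.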
@@ -0,0 +1,47 @@
# ACT-006-task-003: Refactor for Maintainability

## Activity
ACT-006: Implement Features

## User Task
As a developer
I want to improve code structure while keeping tests green
So that the code is maintainable and follows clean code principles

## Value Proposition Link
- Pain relieved: Code becomes messy over time without clear patterns
- Gain created: Feel confident refactoring and extending code; Build transferable skills

## Acceptance Criteria
- All tests remain green throughout refactoring
- At least one SOLID principle explicitly applied
- Code duplications extracted to shared functions/methods
- Variable and function names use ubiquitous language from the domain
- Code structure improves (measured by reduced complexity or improved readability)
- No new functionality added (behavior unchanged)
- Refactoring documented in commit message or code comments

## Priority
- **MoSCoW:** Must Have
- **Business Value:** High
- **Complexity:** Medium

## Release
Release 1 - Walking Skeleton

## Dependencies
- ACT-006-task-002: Make test pass with minimum code (must have GREEN before REFACTOR)

## Feature File
`/features/atdd-workflow/refactor-phase.feature`

## Status
- [x] Acceptance criteria defined
- [ ] Example mapping complete
- [ ] Scenarios written
- [ ] Acceptance tests created
- [ ] Implementation complete
- [ ] Deployed

## Notes
The REFACTOR phase is where clean code principles apply. Reference `clean-code/references/solid.md` for guidance. The walking skeleton version applies one principle explicitly; the full version reviews all SOLID principles.

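A minimal sketch of the REFACTOR step's "duplications extracted, ubiquitous language, behavior unchanged" criteria, with hypothetical invoice-domain names. Before this refactoring, the cents-to-display formatting was assumed to be duplicated inline in both summary functions; after, it lives in one single-responsibility helper and the tests stay green because the output is identical:

```typescript
// Extracted shared formatting (Single Responsibility: turning cents
// into a display amount is one job, done in one place).
const CENTS_PER_UNIT = 100;

function formatAmount(amountCents: number, currency: string): string {
  return `${(amountCents / CENTS_PER_UNIT).toFixed(2)} ${currency}`;
}

// Names come from the domain's ubiquitous language (invoice, payment),
// and behavior is unchanged: same strings as before the refactoring.
function invoiceSummary(customer: string, amountCents: number): string {
  return `${customer}: ${formatAmount(amountCents, "USD")}`;
}

function paymentSummary(payee: string, amountCents: number): string {
  return `${payee}: ${formatAmount(amountCents, "USD")}`;
}
```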
@@ -0,0 +1,46 @@
# ACT-006-task-004: Review Code Against SOLID Principles

## Activity
ACT-006: Implement Features

## User Task
As a developer or code reviewer
I want to evaluate code against SOLID principles systematically
So that we maintain code quality and identify improvement opportunities

## Value Proposition Link
- Pain relieved: Code becomes messy over time without clear patterns
- Gain created: Build transferable skills; Feel confident refactoring; Code reviews focus on meaningful issues

## Acceptance Criteria
- Code reviewed against all 5 SOLID principles: Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, Dependency Inversion
- Each principle evaluated with concrete evidence (not subjective opinion)
- Violations identified with specific line references
- Refactoring opportunities documented (not all violations require immediate fix)
- Review documented in PR comments or code review notes
- References `clean-code/references/solid.md` for principle definitions

## Priority
- **MoSCoW:** Should Have
- **Business Value:** Medium
- **Complexity:** Medium

## Release
Release 2 - Enhanced Implementation

## Dependencies
- ACT-006-task-003: Refactor for maintainability (review happens after GREEN+REFACTOR)

## Feature File
`/features/atdd-workflow/code-review.feature`

## Status
- [x] Acceptance criteria defined
- [ ] Example mapping complete
- [ ] Scenarios written
- [ ] Acceptance tests created
- [ ] Implementation complete
- [ ] Deployed

## Notes
A systematic SOLID review prevents the "code review is just opinion" problem. The walking skeleton version applies one principle during REFACTOR; the full version reviews all five.

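To make "evaluated with concrete evidence" tangible, here is what one such review finding might look like for Dependency Inversion, with entirely hypothetical class names. The violation (a high-level class constructing its own concrete notifier) and the fix (depend on an abstraction, inject the implementation) can both be stated as code:

```typescript
// Review finding (illustrative): DeploymentAnnouncer previously did
// `new EmailNotifier()` internally, coupling high-level policy to a
// concrete delivery mechanism. Fix: depend on the abstraction.
interface Notifier {
  send(message: string): void;
}

class EmailNotifier implements Notifier {
  sent: string[] = [];
  send(message: string): void {
    this.sent.push(message); // stand-in for real email delivery
  }
}

class DeploymentAnnouncer {
  // High-level code now depends only on the Notifier abstraction,
  // so it can be tested with a fake and extended with new channels.
  constructor(private readonly notifier: Notifier) {}
  announce(version: string): void {
    this.notifier.send(`Deployed ${version}`);
  }
}

const emailNotifier = new EmailNotifier();
new DeploymentAnnouncer(emailNotifier).announce("1.0.1");
```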
@@ -0,0 +1,48 @@
# ACT-007-task-001: Set Up Pipeline Stages

## Activity
ACT-007: Deploy Safely

## User Task
As a developer
I want an automated pipeline that builds, tests, and deploys code
So that I can ship to production with confidence and without manual steps

## Value Proposition Link
- Pain relieved: Fear of breaking existing functionality; Manual deployment errors
- Gain created: Make confident changes without breaking existing functionality; Deploy smaller increments more frequently

## Acceptance Criteria
- Pipeline configuration file created
- Build stage: compiles/bundles code, fails on build errors
- Test stage: runs automated tests, fails if any test fails
- Deploy stage: deploys to target environment on successful test
- Pipeline runs automatically on code push
- Pipeline status visible (pass/fail) in version control system
- Failed pipeline prevents deployment
- Pipeline completes in under 10 minutes

## Priority
- **MoSCoW:** Must Have
- **Business Value:** High
- **Complexity:** High

## Release
Release 1 - Walking Skeleton

## Dependencies
- ACT-006-task-003: Refactor for maintainability (need working code with tests to deploy)

## Feature File
`/features/cicd-pipeline/pipeline-stages.feature`

## Status
- [x] Acceptance criteria defined
- [ ] Example mapping complete
- [ ] Scenarios written
- [ ] Acceptance tests created
- [ ] Implementation complete
- [ ] Deployed

## Notes
The pipeline is the delivery mechanism. Walking skeleton version: single environment, linear stages. The full version adds multiple environments, blue-green deployment, automated rollback, and performance testing.

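A real pipeline lives in CI configuration, but the gating rule the criteria describe (stages run in order; the first failure blocks everything after it, so a red test stage can never reach deploy) can be modeled in a few lines. A minimal sketch with placeholder stage bodies, not any specific CI system:

```typescript
type Stage = { name: string; run: () => boolean };

// Run stages in order; stop at the first failure so later stages
// (in particular, deploy) never execute after a failed stage.
function runPipeline(stages: Stage[]): { executed: string[]; failed?: string } {
  const executed: string[] = [];
  for (const stage of stages) {
    executed.push(stage.name);
    if (!stage.run()) return { executed, failed: stage.name };
  }
  return { executed };
}

// Simulated run where the test stage fails (stage bodies are stubs).
const pipelineRun = runPipeline([
  { name: "build", run: () => true },
  { name: "test", run: () => false },
  { name: "deploy", run: () => true },
]);
```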
@@ -0,0 +1,47 @@
# ACT-007-task-002: Configure Environment Promotion

## Activity
ACT-007: Deploy Safely

## User Task
As a DevOps engineer or tech lead
I want to define how code progresses through environments (dev → staging → production)
So that we test changes thoroughly before production deployment

## Value Proposition Link
- Pain relieved: Fear of breaking existing functionality; Production issues that weren't caught in testing
- Gain created: Make confident changes; Catch issues before users see them

## Acceptance Criteria
- At least 2 environments defined (e.g., staging and production)
- Promotion criteria defined for each environment (e.g., "all tests pass" to reach staging, "manual approval" to reach production)
- Environment differences documented (data, integrations, config)
- Promotion process automated (no manual file copying)
- Failed promotion blocks further progression
- Environment configuration saved in `/docs/environment-promotion.md`
- Follows guidance from `cicd-pipeline/references/environment-promotion.md`

## Priority
- **MoSCoW:** Should Have
- **Business Value:** High
- **Complexity:** High

## Release
Release 2 - Enhanced CI/CD

## Dependencies
- ACT-007-task-001: Set up pipeline stages (need basic pipeline before adding environments)

## Feature File
`/features/cicd-pipeline/environment-promotion.feature`

## Status
- [x] Acceptance criteria defined
- [ ] Example mapping complete
- [ ] Scenarios written
- [ ] Acceptance tests created
- [ ] Implementation complete
- [ ] Deployed

## Notes
Environment promotion adds safety gates. The walking skeleton deploys to a single environment; the full version progresses through multiple environments with gates.

@@ -0,0 +1,47 @@
# ACT-007-task-003: Implement Deployment Strategy

## Activity
ACT-007: Deploy Safely

## User Task
As a DevOps engineer or tech lead
I want to deploy using a strategy that minimizes downtime and risk
So that we can release frequently without disrupting users

## Value Proposition Link
- Pain relieved: Fear of breaking existing functionality; Deployments require downtime
- Gain created: Deploy smaller increments more frequently; Zero-downtime releases

## Acceptance Criteria
- Deployment strategy selected (blue-green, canary, rolling, feature flags)
- Strategy implemented and documented
- Deployment can complete without user-facing downtime
- Ability to serve traffic from old and new versions simultaneously (if applicable)
- Deployment monitored for errors during rollout
- Strategy documented in `/docs/deployment-strategy.md`
- Follows patterns from `cicd-pipeline/references/deployment-rollback.md`

## Priority
- **MoSCoW:** Should Have
- **Business Value:** High
- **Complexity:** High

## Release
Release 2 - Enhanced CI/CD

## Dependencies
- ACT-007-task-002: Configure environment promotion (need environments to deploy to)

## Feature File
`/features/cicd-pipeline/deployment-strategy.feature`

## Status
- [x] Acceptance criteria defined
- [ ] Example mapping complete
- [ ] Scenarios written
- [ ] Acceptance tests created
- [ ] Implementation complete
- [ ] Deployed

## Notes
Deployment strategy enables frequent, low-risk releases. Blue-green and canary strategies provide instant rollback capability.

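The "instant rollback" property of blue-green mentioned in the Notes comes from the mechanics of the strategy itself: two slots, with traffic following an active pointer. Deploy updates the idle slot and flips the pointer; rollback just flips it back. A minimal sketch of that bookkeeping (illustrative, not tied to any load balancer):

```typescript
// Blue-green sketch: two slots, traffic follows the active pointer.
// Deploy = fill the idle slot, then switch; rollback = switch back.
class BlueGreen {
  private slots: Record<"blue" | "green", string | null> = { blue: null, green: null };
  private active: "blue" | "green" = "blue";

  deploy(version: string): void {
    const idle = this.active === "blue" ? "green" : "blue";
    this.slots[idle] = version; // old version keeps serving traffic meanwhile
    this.active = idle;         // the switch itself is atomic: no downtime window
  }

  rollback(): void {
    // The previous version is still warm in the other slot.
    this.active = this.active === "blue" ? "green" : "blue";
  }

  serving(): string | null {
    return this.slots[this.active];
  }
}

const lb = new BlueGreen();
lb.deploy("1.0.0");
lb.deploy("1.0.1"); // bad release...
lb.rollback();      // ...and traffic is instantly back on 1.0.0
```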
@@ -0,0 +1,48 @@
# ACT-007-task-004: Test Rollback Procedure

## Activity
ACT-007: Deploy Safely

## User Task
As a DevOps engineer or developer
I want to verify we can roll back a deployment quickly and safely
So that we can recover from bad deployments without extended outages

## Value Proposition Link
- Pain relieved: Fear of breaking existing functionality; Bad deployments cause extended downtime
- Gain created: Make confident changes; Deploy more frequently knowing we can undo

## Acceptance Criteria
- Rollback procedure documented step-by-step
- Rollback tested in a non-production environment
- Rollback completes in under 5 minutes (target)
- Rollback verified to restore previous working state
- Data migrations considered (can they be rolled back?)
- Rollback can be triggered with a single command or button
- Procedure documented in `/docs/rollback-procedure.md`
- Follows guidance from `cicd-pipeline/references/deployment-rollback.md`

## Priority
- **MoSCoW:** Should Have
- **Business Value:** High
- **Complexity:** Medium

## Release
Release 2 - Enhanced CI/CD

## Dependencies
- ACT-007-task-003: Implement deployment strategy (need deployment before testing rollback)

## Feature File
`/features/cicd-pipeline/rollback.feature`

## Status
- [x] Acceptance criteria defined
- [ ] Example mapping complete
- [ ] Scenarios written
- [ ] Acceptance tests created
- [ ] Implementation complete
- [ ] Deployed

## Notes
Rollback is the safety net for deployments. Teams that can roll back quickly deploy more frequently and with less fear.

@@ -0,0 +1,48 @@
# ACT-008-task-001: Define Success Metrics

## Activity
ACT-008: Measure and Improve

## User Task
As a product owner or engineering leader
I want to define measurable outcomes that indicate success
So that we can objectively evaluate if delivered features create value

## Value Proposition Link
- Pain relieved: Hard to prove ROI of quality practices; Disconnect between built vs needed
- Gain created: Demonstrate quality practices increase speed; Connect engineering to business outcomes

## Acceptance Criteria
- 3-5 key metrics defined per release or feature
- Mix of business metrics (adoption, usage) and quality metrics (defect rate, cycle time)
- Each metric has: definition, target value, measurement method, collection frequency
- Metrics trace to VPC gains (measure if gains were actually created)
- Baseline values captured before release
- Metrics documented in `/docs/metrics.md` or in release plan
- Follows guidance from `continuous-improvement/references/measurement.md`

## Priority
- **MoSCoW:** Should Have
- **Business Value:** High
- **Complexity:** Medium

## Release
Release 3 - Continuous Improvement

## Dependencies
- ACT-003-task-003: Plan release 1 (need release to define metrics for)
- ACT-007-task-001: Set up pipeline stages (delivery capability must exist before measuring it)

## Feature File
`/features/continuous-improvement/metrics.feature`

## Status
- [x] Acceptance criteria defined
- [ ] Example mapping complete
- [ ] Scenarios written
- [ ] Acceptance tests created
- [ ] Implementation complete
- [ ] Deployed

## Notes
Metrics are how we validate VPC assumptions. They close the feedback loop from deployment back to product strategy.

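The metric record the criteria call for (definition, target value, measurement method, collection frequency, plus a baseline captured before release) can be sketched as a small structure. The field names and the example cycle-time numbers are illustrative assumptions, not values from this package:

```typescript
// Illustrative metric record (hypothetical names and numbers).
interface Metric {
  name: string;
  definition: string;
  target: number;            // the value we want to reach
  baseline: number;          // captured before release
  measurementMethod: string;
  frequency: "daily" | "per-deployment" | "weekly";
}

const cycleTime: Metric = {
  name: "cycle time",
  definition: "hours from first commit to production deploy",
  target: 24,
  baseline: 72,
  measurementMethod: "pipeline timestamps",
  frequency: "per-deployment",
};

// Closing the feedback loop: did a post-release measurement move the
// metric from its baseline toward the target (in either direction)?
function improved(m: Metric, measured: number): boolean {
  return m.target < m.baseline ? measured < m.baseline : measured > m.baseline;
}
```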
@@ -0,0 +1,48 @@
# ACT-008-task-002: Collect Measurement Data

## Activity
ACT-008: Measure and Improve

## User Task
As a developer or data analyst
I want to automatically collect metrics defined in success criteria
So that we have objective data about outcomes without manual reporting overhead

## Value Proposition Link
- Pain relieved: Hard to prove ROI of quality practices; Manual reporting is time-consuming and error-prone
- Gain created: Data-driven decisions; Continuous visibility into key metrics

## Acceptance Criteria
- Data collection automated (not manual spreadsheets)
- Data collected at defined frequency (daily, per-deployment, etc.)
- Data stored in queryable format (database, analytics platform, logs)
- Collection infrastructure documented
- Data quality validated (no missing or obviously incorrect values)
- Dashboards or reports created for key stakeholders
- Collection method documented in `/docs/data-collection.md`

## Priority
- **MoSCoW:** Should Have
- **Business Value:** Medium
- **Complexity:** High

## Release
Release 3 - Continuous Improvement

## Dependencies
- ACT-008-task-001: Define success metrics (need to know what to collect)
- ACT-007-task-001: Set up pipeline stages (pipeline can trigger collection)

## Feature File
`/features/continuous-improvement/data-collection.feature`

## Status
- [x] Acceptance criteria defined
- [ ] Example mapping complete
- [ ] Scenarios written
- [ ] Acceptance tests created
- [ ] Implementation complete
- [ ] Deployed

## Notes
Automated collection is critical. Manual metrics gathering doesn't scale and is often abandoned under pressure.

@@ -0,0 +1,47 @@
# ACT-008-task-003: Perform Root Cause Analysis

## Activity
ACT-008: Measure and Improve

## User Task
As an engineering leader or team facilitator
I want to investigate when metrics show problems or unexpected results
So that we identify systemic issues rather than treating symptoms

## Value Proposition Link
- Pain relieved: Same problems recur repeatedly; Firefighting without fixing root causes
- Gain created: Process improvements that prevent future issues; Team learning and growth

## Acceptance Criteria
- Root cause analysis (5 Whys, Fishbone, etc.) performed for significant issues
- Analysis involves multiple team members, not just one person
- Root causes distinguished from symptoms
- Systemic issues identified (not just individual mistakes)
- Findings documented with evidence
- Analysis saved in `/docs/root-cause-analysis/[issue-date].md`
- Follows method from `continuous-improvement/references/root-cause-analysis.md`

## Priority
- **MoSCoW:** Should Have
- **Business Value:** High
- **Complexity:** Medium

## Release
Release 3 - Continuous Improvement

## Dependencies
- ACT-008-task-002: Collect measurement data (need data to analyze)

## Feature File
`/features/continuous-improvement/root-cause-analysis.feature`

## Status
- [x] Acceptance criteria defined
- [ ] Example mapping complete
- [ ] Scenarios written
- [ ] Acceptance tests created
- [ ] Implementation complete
- [ ] Deployed

## Notes
RCA is critical for learning. Without it, teams repeatedly treat symptoms without fixing underlying problems.

@@ -0,0 +1,48 @@
# ACT-008-task-004: Update Upstream Artifacts

## Activity
ACT-008: Measure and Improve

## User Task
As a product owner or process facilitator
I want to update VPC, personas, story maps, or other artifacts based on learnings
So that future work builds on validated knowledge rather than original assumptions

## Value Proposition Link
- Pain relieved: Disconnect between what we built and what users needed; Same mistakes repeated
- Gain created: Deliver features users actually want; Continuous learning and improvement

## Acceptance Criteria
- Findings from metrics and RCA reviewed against existing artifacts
- Artifacts updated where learnings invalidate assumptions (e.g., VPC pains were wrong, persona goals incomplete)
- Updates documented with rationale and evidence
- Affected teams notified of significant artifact changes
- Update history tracked (when changed, why, based on what evidence)
- Process documented in `/docs/artifact-updates.md`
- Follows guidance from `continuous-improvement/references/process-update.md`

## Priority
- **MoSCoW:** Should Have
- **Business Value:** High
- **Complexity:** Medium

## Release
Release 3 - Continuous Improvement

## Dependencies
- ACT-008-task-003: Perform root cause analysis (need findings to feed back)
- All upstream artifacts (VPC, personas, story maps, etc.) - these get updated

## Feature File
`/features/continuous-improvement/process-update.feature`

## Status
- [x] Acceptance criteria defined
- [ ] Example mapping complete
- [ ] Scenarios written
- [ ] Acceptance tests created
- [ ] Implementation complete
- [ ] Deployed

## Notes
Artifact updates close the learning loop. This is where continuous improvement feeds back to product strategy, completing the methodology cycle.