pupt 2.2.1 → 2.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (124)
  1. package/dist/cli.js +94 -58
  2. package/dist/cli.js.map +1 -1
  3. package/dist/commands/add.d.ts +4 -1
  4. package/dist/commands/add.d.ts.map +1 -1
  5. package/dist/commands/add.js +60 -11
  6. package/dist/commands/add.js.map +1 -1
  7. package/dist/commands/cache.d.ts +9 -0
  8. package/dist/commands/cache.d.ts.map +1 -0
  9. package/dist/commands/cache.js +31 -0
  10. package/dist/commands/cache.js.map +1 -0
  11. package/dist/commands/config.d.ts +1 -2
  12. package/dist/commands/config.d.ts.map +1 -1
  13. package/dist/commands/config.js +30 -57
  14. package/dist/commands/config.js.map +1 -1
  15. package/dist/commands/edit.d.ts.map +1 -1
  16. package/dist/commands/edit.js +5 -2
  17. package/dist/commands/edit.js.map +1 -1
  18. package/dist/commands/init-refactored.d.ts.map +1 -1
  19. package/dist/commands/init-refactored.js +0 -3
  20. package/dist/commands/init-refactored.js.map +1 -1
  21. package/dist/commands/init.d.ts.map +1 -1
  22. package/dist/commands/init.js +59 -68
  23. package/dist/commands/init.js.map +1 -1
  24. package/dist/commands/install.d.ts +9 -9
  25. package/dist/commands/install.d.ts.map +1 -1
  26. package/dist/commands/install.js +76 -178
  27. package/dist/commands/install.js.map +1 -1
  28. package/dist/commands/review.d.ts.map +1 -1
  29. package/dist/commands/review.js +3 -1
  30. package/dist/commands/review.js.map +1 -1
  31. package/dist/commands/run.d.ts +2 -0
  32. package/dist/commands/run.d.ts.map +1 -1
  33. package/dist/commands/run.js +13 -9
  34. package/dist/commands/run.js.map +1 -1
  35. package/dist/commands/uninstall.d.ts +2 -0
  36. package/dist/commands/uninstall.d.ts.map +1 -0
  37. package/dist/commands/uninstall.js +45 -0
  38. package/dist/commands/uninstall.js.map +1 -0
  39. package/dist/commands/update.d.ts +2 -0
  40. package/dist/commands/update.d.ts.map +1 -0
  41. package/dist/commands/update.js +113 -0
  42. package/dist/commands/update.js.map +1 -0
  43. package/dist/config/config-manager.d.ts +3 -10
  44. package/dist/config/config-manager.d.ts.map +1 -1
  45. package/dist/config/config-manager.js +23 -140
  46. package/dist/config/config-manager.js.map +1 -1
  47. package/dist/config/global-paths.d.ts +5 -0
  48. package/dist/config/global-paths.d.ts.map +1 -0
  49. package/dist/config/global-paths.js +16 -0
  50. package/dist/config/global-paths.js.map +1 -0
  51. package/dist/config/migration.d.ts.map +1 -1
  52. package/dist/config/migration.js +69 -1
  53. package/dist/config/migration.js.map +1 -1
  54. package/dist/schemas/config-schema.d.ts +863 -196
  55. package/dist/schemas/config-schema.d.ts.map +1 -1
  56. package/dist/schemas/config-schema.js +51 -27
  57. package/dist/schemas/config-schema.js.map +1 -1
  58. package/dist/services/input-collector.d.ts.map +1 -1
  59. package/dist/services/input-collector.js +7 -1
  60. package/dist/services/input-collector.js.map +1 -1
  61. package/dist/services/module-cache.d.ts +42 -0
  62. package/dist/services/module-cache.d.ts.map +1 -0
  63. package/dist/services/module-cache.js +205 -0
  64. package/dist/services/module-cache.js.map +1 -0
  65. package/dist/services/module-entry-builder.d.ts +20 -0
  66. package/dist/services/module-entry-builder.d.ts.map +1 -0
  67. package/dist/services/module-entry-builder.js +67 -0
  68. package/dist/services/module-entry-builder.js.map +1 -0
  69. package/dist/services/output-capture-service.d.ts.map +1 -1
  70. package/dist/services/output-capture-service.js +2 -1
  71. package/dist/services/output-capture-service.js.map +1 -1
  72. package/dist/services/package-manager.d.ts +18 -0
  73. package/dist/services/package-manager.d.ts.map +1 -0
  74. package/dist/services/package-manager.js +141 -0
  75. package/dist/services/package-manager.js.map +1 -0
  76. package/dist/services/prompt-resolver.d.ts +2 -2
  77. package/dist/services/prompt-resolver.d.ts.map +1 -1
  78. package/dist/services/prompt-resolver.js +9 -10
  79. package/dist/services/prompt-resolver.js.map +1 -1
  80. package/dist/services/pupt-prompt-source.d.ts +16 -0
  81. package/dist/services/pupt-prompt-source.d.ts.map +1 -0
  82. package/dist/services/pupt-prompt-source.js +73 -0
  83. package/dist/services/pupt-prompt-source.js.map +1 -0
  84. package/dist/services/pupt-service.d.ts +11 -25
  85. package/dist/services/pupt-service.d.ts.map +1 -1
  86. package/dist/services/pupt-service.js +31 -179
  87. package/dist/services/pupt-service.js.map +1 -1
  88. package/dist/services/review-data-builder.d.ts +4 -1
  89. package/dist/services/review-data-builder.d.ts.map +1 -1
  90. package/dist/services/review-data-builder.js +4 -2
  91. package/dist/services/review-data-builder.js.map +1 -1
  92. package/dist/types/config.d.ts +35 -19
  93. package/dist/types/config.d.ts.map +1 -1
  94. package/dist/types/config.js +3 -4
  95. package/dist/types/config.js.map +1 -1
  96. package/dist/utils/prompt-dir-resolver.d.ts +5 -0
  97. package/dist/utils/prompt-dir-resolver.d.ts.map +1 -0
  98. package/dist/utils/prompt-dir-resolver.js +24 -0
  99. package/dist/utils/prompt-dir-resolver.js.map +1 -0
  100. package/package.json +3 -2
  101. package/dist/utils/path-utils.d.ts +0 -42
  102. package/dist/utils/path-utils.d.ts.map +0 -1
  103. package/dist/utils/path-utils.js +0 -139
  104. package/dist/utils/path-utils.js.map +0 -1
  105. package/dist/utils/prompt-format.d.ts +0 -27
  106. package/dist/utils/prompt-format.d.ts.map +0 -1
  107. package/dist/utils/prompt-format.js +0 -28
  108. package/dist/utils/prompt-format.js.map +0 -1
  109. package/prompts/ad-hoc-long.prompt +0 -60
  110. package/prompts/ad-hoc.prompt +0 -29
  111. package/prompts/code-review.prompt +0 -99
  112. package/prompts/debugging-error-message.prompt +0 -81
  113. package/prompts/fix-github-actions.prompt +0 -62
  114. package/prompts/fix-test-errors.prompt +0 -73
  115. package/prompts/git-commit-comment.prompt +0 -61
  116. package/prompts/implementation-phase.prompt +0 -53
  117. package/prompts/implementation-plan.prompt +0 -101
  118. package/prompts/new-feature.prompt +0 -89
  119. package/prompts/new-project.prompt +0 -9
  120. package/prompts/one-shot-change.prompt +0 -79
  121. package/prompts/pupt-prompt-improvement.prompt +0 -270
  122. package/prompts/simple-test.prompt +0 -8
  123. package/prompts/update-design.prompt +0 -71
  124. package/prompts/update-documentation.prompt +0 -6
@@ -1,61 +0,0 @@
- {/* Converted from git-commit-comment.md */}
- <Prompt name="git-commit-comment" description="Git Commit Comment" tags={["git", "commit", "version-control"]}>
- <Role preset="engineer" extend expertise="git, conventional commits">
- You are a Git commit message expert who follows conventional commit standards and creates clear, descriptive commit messages that accurately reflect the changes made.
- </Role>
-
- <Task>
- Analyze recent changes since the last commit and generate a properly formatted conventional commit message with the git command to execute.
- </Task>
-
- <Constraints extend>
- 1. **Analyze recent work**:
- Review pt history to understand recent activities
- Check git status for current changes
- Review git diff for uncommitted changes
- Identify the primary purpose of these changes
-
- 2. **Follow Conventional Commits format**:
- Type: feat, fix, docs, style, refactor, test, chore, perf, ci, build, revert
- Scope (optional): Component or area affected
- Description: Clear, imperative mood, lowercase
- Body (if needed): Explain what and why, not how
-
- 3. **Generate ready-to-use git command**:
- Provide complete git commit command
- Properly escape for shell execution
- Include multi-line format if body is needed
- </Constraints>
-
- <Format>
- 1. Analyze recent changes
- 2. Determine appropriate commit type and scope
- 3. Write clear commit message
- 4. Output the complete git commit command
- </Format>
-
- <Examples>
- <Example>
- ```bash
- git commit -m "feat(template): add git commit comment prompt
-
- Creates a prompt that analyzes recent changes and generates
- conventional commit messages. Helps maintain consistent commit
- formatting across the project."
- ```
- </Example>
- </Examples>
-
- <Constraint>
- Subject line: 50 characters or less
- Use imperative mood
- Focus on the most significant change
- Output must be a ready-to-execute command
- </Constraint>
-
- <SuccessCriteria>
- <Criterion>The command can be directly copied and pasted</Criterion>
- <Criterion>Message follows conventional commit standards</Criterion>
- <Criterion>Clearly describes what changed and why</Criterion>
- </SuccessCriteria>
- </Prompt>
@@ -1,53 +0,0 @@
- {/* Converted from implementation-phase.md */}
- <Prompt name="implement-phase" description="Implement Phase" tags={[]}>
-
- <Role preset="engineer" extend expertise="test-driven development, clean code practices">
- You are a senior software engineer implementing features according to a detailed implementation plan, with expertise in test-driven development and clean code practices.
- </Role>
-
- <Task>
- Implement Phase <Ask.Text name="phase" label="Phase:" /> from <Ask.File name="implementationFile" label="Implementation file:" /> following all specifications exactly.
- </Task>
-
- <Constraints extend>
- Read and understand the full phase requirements before starting
- Implement features incrementally, testing after each change
- Write tests BEFORE implementing features (TDD approach)
- Achieve minimum 80% code coverage with meaningful tests
- Run `npm run build`, `npm run lint`, and `npm test` after implementation
- Fix ALL errors completely - "minimal changes" means fixing root causes without adding unnecessary code
- CRITICAL: After running tests, verify the EXACT output shows "0 failing" before proceeding
- If ANY tests fail, you MUST fix them completely - do not report success with failing tests
- Copy and paste the full test output showing all tests passing before completing
- Verify the implementation matches the specification exactly
- </Constraints>
-
- <Format>
- 1. First, summarize what Phase {inputs.phase} requires
- 2. Implement features with appropriate tests
- 3. Run all verification commands and fix any issues
- 4. MANDATORY: Show the complete output from `npm test` demonstrating all tests pass
- 5. Provide a status report including:
- Summary of implemented functionality
- Exact test results showing "0 failing"
- How users can verify it works (specific commands/steps)
- Any deviations from the plan and why
- Next phase number and brief description
- </Format>
-
- <Constraint>
- Do NOT skip tests or use `.skip()`
- Do NOT suppress linting errors with ignore comments
- If tests fail after fixes, investigate root cause rather than making superficial changes
- If blocked, report the specific issue rather than proceeding with partial implementation
- </Constraint>
-
- <SuccessCriteria>
- <Criterion>All tests pass (100% success rate)</Criterion>
- <Criterion>Build completes without errors</Criterion>
- <Criterion>Linting passes without warnings</Criterion>
- <Criterion>Code coverage ≥ 80%</Criterion>
- <Criterion>Implementation exactly matches phase specification</Criterion>
- <Criterion>Clear user-facing functionality description provided</Criterion>
- </SuccessCriteria>
- </Prompt>
@@ -1,101 +0,0 @@
- {/* Converted from implementation-plan.md */}
- <Prompt name="implementation-plan" description="Implementation Plan" tags={[]}>
-
- <Role preset="architect" extend expertise="test-driven development, modular design">
- You are a senior software architect creating detailed implementation plans that balance pragmatism with best practices, specializing in test-driven development and modular design.
- </Role>
-
- <Task>
- Create a comprehensive, phased implementation plan for the design specified in <Ask.File name="designFile" label="Design file:" />.
- </Task>
-
- <Constraints extend>
- Analyze the design document completely before planning
- Break implementation into logical phases
- The first phase should be an MVP and subsequent phases should add incremental features
- For example, if the feature is a new HTML page, the first phase should implement the HTML page and then add features to it in future phases
- Each phase must:
- Deliver functionality in a way that the user can verify that the functionality works, beyond just running tests
- Include specific test scenarios (unit and integration)
- Build on previous phases without breaking them
- Take roughly equal effort (1-3 days each)
- For each phase specify:
- Clear objectives and success criteria
- Test files to create/modify with example test cases
- Implementation files to create/modify
- Dependencies on external libraries or earlier phases
- User-facing verification steps
- Identify opportunities for:
- Code reuse and shared utilities
- External libraries that solve common problems
- Refactoring to reduce duplication
- </Constraints>
-
- <Format>
- Write the implementation plan to <Ask.ReviewFile name="planFile" label="Plan file:" />. Use this template for the implementation plan:
-
- ```markdown
- # Implementation Plan for [Feature Name]
-
- ## Overview
- [Brief summary of what will be built]
-
- ## Phase Breakdown
-
- ### Phase 1: [Foundation/Core Setup]
- </Format>
-
- <Task>
- [What this phase accomplishes]
- **Duration**: X days
-
- **Tests to Write First**:
- `test/[filename].test.ts`: [Test description]
- ```typescript
- // Example test case
- ```
-
- **Implementation**:
- `src/[filename].ts`: [What to implement]
- ```typescript
- // Key interfaces/structures
- ```
-
- **Dependencies**:
- External: [npm packages needed]
- Internal: [files from previous phases]
-
- **Verification**:
- 1. Run: `[specific command]`
- 2. Expected output: [what user should see]
-
- ### Phase 2: [Feature Name]
- [Same structure as Phase 1]
-
- ## Common Utilities Needed
- [Utility name]: [Purpose and where used]
-
- ## External Libraries Assessment
- [Task]: Consider using [library] because [reason]
-
- ## Risk Mitigation
- [Potential risk]: [Mitigation strategy]
- ```
- </Task>
-
- <Constraint>
- Each phase must be independently testable
- No phase should break existing functionality
- Prefer proven libraries over custom implementations
- Keep phases focused on single concerns
- </Constraint>
-
- <SuccessCriteria>
- <Criterion>Plan is clear enough for any developer to implement</Criterion>
- <Criterion>Phases have balanced complexity and effort</Criterion>
- <Criterion>All design requirements are addressed</Criterion>
- <Criterion>Test strategy ensures quality and maintainability</Criterion>
- <Criterion>Human reviewer can understand plan without reading code examples</Criterion>
- </SuccessCriteria>
-
- </Prompt>
@@ -1,89 +0,0 @@
- {/* Converted from new-feature.md */}
- <Prompt name="new-feature" description="New Feature" tags={[]}>
-
- <Role preset="architect" extend>
- You are a senior software architect designing features that are elegant, maintainable, and aligned with existing system architecture.
- </Role>
-
- <Task>
- Design new features based on the objectives and requirements stated below.
- </Task>
-
- <Constraints extend>
- Review existing codebase architecture before designing
- Ensure design aligns with current patterns and conventions
- Consider both technical implementation and user experience
- Address all requirements comprehensively
- Identify potential challenges and mitigation strategies
- Keep scope manageable and incrementally deliverable
- </Constraints>
-
- <Format>
- Create a design document with these sections and write it to <Ask.ReviewFile name="outputFile" label="Output file:" />:
-
- ```markdown
- # Feature Design: [Feature Name]
-
- ## Overview
- **User Value**: [What users gain]
- **Technical Value**: [What developers/system gains]
-
- ## Requirements
- <Ask.Editor name="requirements" label="Requirements (press enter to open editor):" />
-
- ## Proposed Solution
-
- ### User Interface/API
- [How users will interact with this feature]
-
- ### Technical Architecture
- **Components**: [New modules/classes needed]
- **Data Model**: [Any data structure changes]
- **Integration Points**: [How it connects to existing code]
-
- ### Implementation Approach
- 1. [High-level step 1]
- 2. [High-level step 2]
- ...
-
- ## Acceptance Criteria
- [ ] [Specific measurable criterion]
- [ ] [Another criterion]
- ...
-
- ## Technical Considerations
- **Performance**: [Impact and mitigation]
- **Security**: [Considerations and measures]
- **Compatibility**: [Backward compatibility notes]
- **Testing**: [Testing strategy]
-
- ## Risks and Mitigation
- **Risk**: [Potential issue]
- **Mitigation**: [How to address]
-
- ## Future Enhancements
- [Features that could build on this]
-
- ## Implementation Estimate
- Development: X-Y days
- Testing: X days
- Total: X-Y days
- ```
- </Format>
-
- <Constraint>
- Design must be implementable within a reasonable timeframe
- Cannot break existing functionality
- Must follow project coding standards
- Should reuse existing code where possible
- </Constraint>
-
- <SuccessCriteria>
- <Criterion>All requirements addressed in design</Criterion>
- <Criterion>Technical approach is clear and feasible</Criterion>
- <Criterion>Risks are identified with mitigation strategies</Criterion>
- <Criterion>Design document is complete and reviewable</Criterion>
- <Criterion>Implementation path is clear to developers</Criterion>
- </SuccessCriteria>
-
- </Prompt>
@@ -1,9 +0,0 @@
- {/* Converted from new-project.md */}
- <Prompt name="new-project" description="New Project" tags={[]}>
- <Section>
- Create a design for a new project called <Ask.Text name="projectName" label="Project name:" />. The project is written in <Ask.Text name="programmingLanguage" label="Programming language:" />. The purpose of the project is to: <Ask.Text name="projectPurpose" label="Project purpose:" />. The requirements for the project are:
- <Ask.Editor name="requirements" label="Requirements (press enter to open editor):" />
-
- The design will use the current directory as the base for the project; do not create a directory under this one. The design should include scaffolding for the project for building, linting, testing, and code coverage. My preferred tools are: <Ask.Text name="preferredTools" label="Preferred tools:" />. Create the design in <Ask.Text name="designFile" label="Design file:" />. List all the tools that will be used for the scaffolding near the top of the file. Make sure the design file is easy for a human to read and edit, but it should include sufficient detail for AI tooling to create an implementation plan.
- </Section>
- </Prompt>
@@ -1,79 +0,0 @@
- {/* Converted from one-shot-change.md */}
- <Prompt name="one-shot-change" description="One Shot Change" tags={["development", "implementation", "quick-fix"]}>
-
- <Role preset="engineer" extend>
- You are a precise software engineer focused on making targeted changes with minimal impact while ensuring code quality and test integrity.
- </Role>
-
- <Task>
- Implement the specific changes requested below, then verify the implementation meets all quality standards.
- </Task>
-
- <Constraints extend>
- 1. **Analyze the requested changes**:
- Understand the exact scope and intent
- Identify affected files and components
- Consider potential side effects
-
- 2. **Implement with precision**:
- Make ONLY the changes necessary to fulfill the request
- Preserve existing functionality unless explicitly changing it
- Follow existing code patterns and conventions
- Maintain consistent code style
-
- 3. **Verify implementation** (MANDATORY):
- Run `npm test` and ensure ALL tests pass (show output with "0 failing")
- Run `npm run lint` and fix any errors (must show "0 errors")
- Run `npm run build` and ensure successful completion
- If any command fails, fix the issues and re-run ALL verification steps
-
- 4. **Handle errors systematically**:
- If tests fail, determine if implementation or test needs updating
- Make MINIMAL changes to fix errors
- Document any assumptions made
- Re-verify after each fix
-
- **Requested Changes**:
- <Ask.Editor name="changes" label="Changes (press enter to open editor):" />
- </Constraints>
-
- <Format>
- 1. Brief analysis of the requested changes
- 2. List of files to be modified
- 3. Implementation of changes
- 4. Verification results showing:
- Test output with "0 failing"
- Lint output with "0 errors"
- Build output showing success
- 5. Summary of changes made
- </Format>
-
- <Examples>
- <Example>
- ```
- Requested: Add validation to user input
- Analysis: Need to add input validation to prevent empty strings
- Files affected: src/user.js, test/user.test.js
- Implementation: Added validation check with appropriate error
- Verification: All tests passing (0 failing), no lint errors, build successful
- ```
- </Example>
- </Examples>
-
- <Constraint>
- DO NOT make unrelated "improvements" or refactoring
- DO NOT modify test expectations unless the change requires it
- DO NOT skip or disable any tests
- DO NOT use @ts-ignore or eslint-disable comments
- Changes must be minimal and focused
- </Constraint>
-
- <SuccessCriteria>
- <Criterion>✅ Requested changes are fully implemented</Criterion>
- <Criterion>✅ All tests pass (npm test shows "0 failing")</Criterion>
- <Criterion>✅ No lint errors (npm run lint shows "0 errors")</Criterion>
- <Criterion>✅ Build succeeds (npm run build completes without errors)</Criterion>
- <Criterion>✅ No functionality is broken</Criterion>
- <Criterion>✅ Changes are minimal and targeted</Criterion>
- </SuccessCriteria>
- </Prompt>
@@ -1,270 +0,0 @@
- {/* Converted from pupt-prompt-improvement.md */}
- <Prompt name="pupt-prompt-improvement" description="PUPT Prompt Improvement" tags={[]}>
-
- <Role>
- You are an expert prompt engineer and performance analyst specializing in identifying failure patterns in AI-assisted development workflows. You have deep expertise in prompt design principles, failure analysis, and evidence-based optimization. You will actively modify prompt files to implement improvements.
- </Role>
-
- <Task>
- Use the `pt review` command to analyze comprehensive usage data for AI prompts and DIRECTLY UPDATE prompt files to:
- 1. Fix documented failure patterns by modifying existing prompt files
- 2. Create new prompt files for repeated Ad Hoc usage patterns
- 3. Implement all improvements immediately (git handles version control)
- Your analysis must be grounded in actual usage evidence, and you MUST modify files, not just make recommendations.
- </Task>
-
- <Constraint>
- 1. **Run the review command**: Execute `pt review --format json > <Ask.Text name="reviewDataFile" label="Review data file path" default={"review.json"} />` to generate comprehensive usage data
- 2. **Read the review data**: Load and analyze the JSON file containing:
- Usage statistics with execution outcomes and timing
- Active execution time (excluding user input wait time)
- User input frequency and patterns
- Environment correlations and timing patterns
- User annotations with structured issue data
- Output capture analysis with error indicators
- Detected patterns with frequency and severity metrics
- 3. **Analyze patterns with evidence**:
- Focus on patterns with ≥3 occurrences (statistical significance)
- Correlate failures with environmental factors
- Extract specific evidence quotes from annotations
- Calculate pattern impact (frequency × severity)
- Identify repeated Ad Hoc prompt content/themes
- Detect common prompt structures and workflows
- Analyze execution timing patterns:
- * Prompts requiring excessive user input (high avg_user_inputs)
- * Prompts with high active time but many failures
- * Correlation between user input frequency and failure rates
- 4. **Review ALL prompts for completeness and structure**:
- Read every prompt file regardless of execution count
- Check for proper prompt structure (Role, Objective, Requirements, etc.)
- Verify all prompts have:
- * Clear role and context definition
- * Specific objective statement
- * Detailed requirements or steps
- * Success criteria
- * Appropriate constraints
- Identify minimal or incomplete prompts (e.g., single-line prompts)
- Flag prompts missing critical elements like verification steps
- 5. **DIRECTLY UPDATE existing prompt files**:
- Use the Edit tool to modify prompts with identified issues
- For prompts with usage data: Every change must cite specific evidence
- For prompts without usage data: Improve based on best practices
- Address root causes, not symptoms
- Preserve successful prompt elements
- Add verification steps for all implementation prompts
- Replace ambiguous terms with measurable criteria
- Ensure all prompts follow consistent structure
- Confirm each edit is successful before proceeding
- 6. **CREATE new prompt files**:
- Use the Write tool to create new prompts for repeated Ad Hoc patterns
- Each new prompt must address ≥3 similar Ad Hoc uses
- Include proper frontmatter (title, author, tags)
- Base templates on common patterns found in Ad Hoc usage
- Test that new prompts are discoverable with `pt list`
- </Constraint>
-
- <Format>
- Create <Ask.Text name="promptReviewFile" label="Prompt review output file" default={"design/prompt-review.md"} /> documenting the changes made:
-
- ```markdown
- # Prompt Performance Analysis & Implementation Report
- *Generated: [Current Date]*
- *Analysis Period: <Ask.Text name="timeframe" label="Analysis timeframe" default={"30d"} />*
-
- ## Executive Summary
- **Prompts Analyzed**: X prompts with Y total executions
- **Prompts Modified**: X files updated
- **New Prompts Created**: Y new prompt files
- **Key Finding**: [Most significant pattern discovered]
- **Overall Success Rate**: X% → Y% (after improvements)
- **Average Active Execution Time**: Xms → Yms (after improvements)
- **Average User Inputs Required**: X → Y (after improvements)
-
- ## Prompt Completeness Review
-
- ### Prompts Requiring Structure Improvements
- | Prompt | Current State | Missing Elements | Priority |
- |--------|--------------|------------------|----------|
- | [name] | [e.g., "Minimal 3-line prompt"] | [e.g., "Role, Requirements, Success Criteria"] | [High/Medium/Low] |
-
- ## Implemented Improvements
-
- ### 1. [Prompt Name] - [Pattern Type] (Priority: Critical/High/Medium) ✅
- **Evidence**:
- Pattern frequency: X occurrences across Y executions
- Success rate impact: X% → Y% projected improvement
- Timing impact: Average active time Xms, with Y user inputs
- User quotes: "[specific user feedback]"
- Output indicators: [specific failure signals from captured output]
- Structure issues: [if applicable, e.g., "Minimal prompt lacking guidance"]
-
- **Root Cause**: [Specific analysis of why this pattern occurs]
-
- **Changes Made**:
- File: `prompts/[filename].prompt`
- Action: Modified using Edit tool
- Specific changes:
- Added: "[what was added]"
- Removed: "[what was removed]"
- Modified: "[what was changed]"
-
- **Before**:
- ```
- [Quote problematic sections that were fixed]
- ```
-
- **After**:
- ```
- [Show the updated sections]
- ```
-
- **Expected Impact**:
- Success rate improvement: X% → Y%
- Reduced verification gaps: X → Y occurrences
- Environmental resilience: [specific improvements]
- Reduced user interventions: X → Y inputs per run
- Faster execution: Xms → Yms active time
- Improved clarity and guidance for users
-
- [Repeat for each implemented improvement]
-
- ## New Prompts Created
-
- ### 1. [Prompt Name] (Addressed X Ad Hoc uses) ✅
- **Evidence of Need**:
- Ad Hoc prompts with similar content: X occurrences
- Common phrases: "[repeated text patterns]"
- Typical use cases: [list specific scenarios]
-
- **File Created**: `prompts/[filename].prompt`
- **Action**: Created using Write tool
-
- **Prompt Content**:
- ```jsx
- <Prompt name="[prompt-name]" description="[Prompt Name]" tags={["relevant", "tags"]}>
- <Ask.Text name="input" label="Input:" />
-
- <Role>[Role description]</Role>
- <Task>[Task description with {inputs.input}]</Task>
- <Constraint>[Constraints]</Constraint>
- <SuccessCriteria>
- <Criterion>[Success criterion]</Criterion>
- </SuccessCriteria>
- </Prompt>
- ```
-
- **Verification**:
- ✅ File created successfully
- ✅ Appears in `pt list` output
- ✅ Template variables work correctly
- ✅ Addresses the identified use cases
-
- **Expected Benefits**:
- Standardizes common workflow covering X% of similar Ad Hoc uses
- Reduces prompt creation time from Y minutes to Z seconds
- Improves consistency across team members
-
- [Repeat for each new prompt created]
-
- ## Ad Hoc Usage Analysis
- | Pattern | Frequency | Example Content | Proposed Prompt |
- |---------|-----------|-----------------|-----------------|
- | ... | ... | ... | ... |
-
- ## Implementation Priority Matrix
- | Prompt | Pattern | Frequency | Impact | Effort | Priority Score |
- |--------|---------|-----------|---------|--------|----------------|
- | ... | ... | ... | ... | ... | ... |
-
- ## Environmental Risk Factors
- **Branch-specific failures**: [analysis of git branch correlations]
- **Time-based patterns**: [analysis of time-of-day success rates]
- **Directory-specific issues**: [analysis of working directory correlations]
-
- ## Cross-Prompt Patterns
- **Pattern**: [Description]
- **Affected**: [Prompt names]
- **Recommendation**: [Global improvement strategy]
-
- ## Monitoring Recommendations
- Track success rates for improved prompts
- Monitor for new pattern emergence
- Focus annotation collection on [specific areas]
- Track Ad Hoc prompt reuse to validate new prompt recommendations
- ```
- </Format>
-
- <Examples>
- <Example>
- ```
- Pattern: "verification_gap"
- Evidence: 12 annotations across 5 prompts mention "tests still failing after AI claimed success"
- Timing Analysis: Affected prompts average 3.2 user inputs (vs 0.8 for successful prompts)
- Root Cause: Prompts lack explicit verification requirements
- Fix: Add "After implementation, run 'npm test' and verify output shows '0 failing' before proceeding"
- Expected Impact: 85% reduction in verification-related partial failures, 60% reduction in user inputs
- ```
- </Example>
- <Example>
- ```
- Pattern: "incomplete_task"
- Evidence: 8 annotations report "stopped at first error" with 15+ subsequent errors found
- Timing Analysis: Average 5.1 user inputs needed to complete (active time: 4500ms across multiple runs)
- Root Cause: Prompts use "fix the error" instead of "fix all errors"
- Fix: Replace with "Continue debugging and fixing ALL errors until none remain"
- Expected Impact: 70% reduction in incomplete task annotations, 80% faster completion
- ```
- </Example>
- <Example>
- ```
- New Prompt Opportunity: "Dependency Update Workflow"
- Evidence: 15 Ad Hoc prompts in past 30 days containing variations of "update dependencies" or "npm update"
- Common Pattern: Users repeatedly asking to update packages, check for breaking changes, and run tests
- Proposed Template: Standardized workflow for dependency updates with automated compatibility checks
- Expected Impact: Replace 80% of dependency-related Ad Hoc prompts with consistent workflow
- ```
- </Example>
- <Example>
- ```
- New Prompt Opportunity: "API Integration Testing"
- Evidence: 8 Ad Hoc prompts containing "test API", "mock endpoint", or "integration test"
- Common Pattern: Users creating similar API testing scenarios with slight variations
232
- Proposed Template: Parameterized API testing prompt with endpoint, auth, and payload variables
233
- Expected Impact: Reduce API testing prompt creation time from 5 minutes to 30 seconds
234
- ```
235
- </Example>
236
- </Examples>
237
-
238
- <Constraint>
239
- - For usage-based improvements: Only modify based on patterns with ≥3 documented occurrences
240
- - For completeness review: Update ALL prompts that lack proper structure regardless of usage
241
- - Only recommend new prompts for Ad Hoc patterns with ≥3 similar occurrences
242
- - Maintain original prompt intent and core workflow
243
- - Preserve existing successful elements (don't change what works)
244
- - Ensure backward compatibility with existing template variables
245
- - New prompt recommendations must show clear reuse potential
246
- - Minimal prompts (fewer than 5 lines) should be expanded to include full structure
247
- - All implementation prompts MUST include verification steps
248
- </Constraint>
249
-
250
- <SuccessCriteria>
251
- <Criterion>✅ ALL prompts have proper structure (Role, Objective, Requirements, Success Criteria)</Criterion>
252
- <Criterion>✅ All minimal prompts (fewer than 5 lines) are expanded with complete structure</Criterion>
253
- <Criterion>✅ All implementation prompts include explicit verification steps</Criterion>
254
- <Criterion>✅ All high-severity patterns (≥3 occurrences) have corresponding file modifications</Criterion>
255
- <Criterion>✅ At least 80% of identified issues result in actual prompt file updates</Criterion>
256
- <Criterion>✅ All modifications are verified with successful Edit tool operations</Criterion>
257
- <Criterion>✅ New prompt files are created for 70%+ of repeated Ad Hoc patterns</Criterion>
258
- <Criterion>✅ Each created prompt is verified to work with `pt list` and `pt run`</Criterion>
259
- <Criterion>✅ Report documents all changes made with file paths and specific edits</Criterion>
260
- <Criterion>✅ Git status shows modified/new files ready for review and commit</Criterion>
261
- <Criterion>Read each prompt file in the prompts directory</Criterion>
262
- <Criterion>Identify prompts lacking proper structure</Criterion>
263
- <Criterion>Update minimal or incomplete prompts</Criterion>
264
- <Criterion>Read the prompt file</Criterion>
265
- <Criterion>Apply fixes using Edit tool based on evidence</Criterion>
266
- <Criterion>Verify the edit succeeded</Criterion>
267
- <Criterion>Create new prompt file using Write tool</Criterion>
268
- <Criterion>Verify it appears in `pt list`</Criterion>
269
- </SuccessCriteria>
270
- </Prompt>
@@ -1,8 +0,0 @@
- {/* Converted from simple-test.md */}
- <Prompt name="simple-test" description="Simple Test" tags={[]}>
- <Section>
- # Simple Test
-
- <Ask.Text name="message" label="Message:" default={"Hello from simple test"} />
- </Section>
- </Prompt>