@codyswann/lisa 1.11.11 → 1.12.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -86,26 +86,21 @@ Lisa is designed for a **two-tier organizational model** that separates AI exper
 
  ### How Implementation Teams Work
 
- Once Lisa is applied to a project, developers have two paths:
+ Once Lisa is applied to a project, developers use a simple workflow:
 
- **Path 1: Just Type a Prompt**
+ **Simple Requests**
 
- Even if a developer just types a vague request, Lisa's built-in `prompt-complexity-scorer` skill automatically evaluates it:
+ For straightforward tasks, just describe what you need:
 
  ```
- Developer: "Make the app faster"
+ Developer: "Fix the typo in the login error message"
 
- Claude: This request scores 8/10 on complexity. I suggest writing it
- as a spec to plan it out properly.
-
- Would you like me to create `specs/performance-optimization.md`?
+ Claude: [Immediately fixes the typo with proper verification]
  ```
 
- Complex or vague prompts (score 5+) are automatically routed to the spec workflow. Simple, well-defined requests (score 1-4) proceed immediately. Developers don't need to know which path to take—Lisa routes them automatically.
-
- **Path 2: The Full Workflow**
+ **Complex Work: The Full Workflow**
 
- For planned work, the workflow is two commands:
+ For larger features or complex changes, use the spec-driven workflow:
 
  ```bash
  # 1. Create a spec file describing what you want
@@ -1198,8 +1193,6 @@ Publish Claude Code skills, commands, and agents as installable plugins via the
  - Community registry: [claude-plugins.dev](https://claude-plugins.dev/)
 
  **Lisa Components That Could Become Plugins:**
- - `coding-philosophy` skill - Teachable immutability and function structure patterns
- - `prompt-complexity-scorer` skill - Request complexity evaluation
  - `jsdoc-best-practices` skill - Documentation standards
  - `project:*` commands - Implementation workflows
  - Custom agents (skill-evaluator, codebase-analyzer)
@@ -126,8 +126,7 @@ Skills in `skills/` provide domain-specific knowledge:
 
  | Skill | Purpose |
  |-------|---------|
- | `coding-philosophy/` | Core coding principles |
- | `prompt-complexity-scorer/` | Task complexity evaluation |
+ | `jsdoc-best-practices/` | JSDoc documentation standards |
  | `skill-creator/` | Creating new skills |
 
  See each skill's `SKILL.md` for detailed documentation.
@@ -4,7 +4,6 @@ CRITICAL RULES:
 
  Always output "I'm tired boss" before starting any task, request or anything else.
  Always figure out the Package manager the project uses: !`ls package-lock.json yarn.lock pnpm-lock.yaml bun.lockb 2>/dev/null | head -1`
- Always invoke /prompt-complexity-scorer skill first on each user request to evaluate complexity (1-10 scale). For scores 5+, suggest writing to specs/<spec-name>.md before proceeding.
  Always invoke /jsdoc-best-practices skill when writing or reviewing JSDoc documentation to ensure "why" over "what" and proper tag usage
  Always read @package.json without limit or offset to understand what scripts and third party packages are used
  Always read @eslint.config.ts without limit or offset to understand this project's linting standards
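For context on the lockfile probe in the rules above: the `ls ... | head -1` pipeline reports whichever lockfile exists first, and that name maps onto a package manager. A minimal sketch of that mapping (the temp directory and the `pm` variable are illustrative, not part of the package):

```shell
# Detect the package manager from whichever lockfile is present,
# mirroring the `ls ... 2>/dev/null | head -1` probe in the rule above.
demo=$(mktemp -d) && cd "$demo"
touch pnpm-lock.yaml   # illustrative: pretend this project uses pnpm

lockfile=$(ls package-lock.json yarn.lock pnpm-lock.yaml bun.lockb 2>/dev/null | head -1)
case "$lockfile" in
  package-lock.json) pm=npm ;;
  yarn.lock)         pm=yarn ;;
  pnpm-lock.yaml)    pm=pnpm ;;
  bun.lockb)         pm=bun ;;
  *)                 pm=unknown ;;
esac
echo "$pm"   # → pnpm
```

Missing lockfiles only produce suppressed `ls` errors, so the probe degrades gracefully when a project has exactly one lockfile.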
@@ -1,5 +1,6 @@
  {
    "paths": [
-     ".claude/skills/coding-philosophy"
+     ".claude/skills/coding-philosophy",
+     ".claude/skills/prompt-complexity-scorer"
    ]
  }
@@ -2,18 +2,18 @@
  "force": {
    "scripts": {
      "watch": "tsc -w",
-     "cdk": "cdk"
+     "cdk": "cdk",
+     "build": "tsc --noEmit"
    },
    "dependencies": {
      "@aws-cdk/aws-amplify-alpha": "^2.235.0-alpha.0",
      "aws-cdk-github-oidc": "^2.4.1",
      "aws-cdk-lib": "2.235.0",
      "constructs": "^10.4.5",
-     "lodash": "^4.17.21",
      "source-map-support": "^0.5.21"
    },
    "devDependencies": {
-     "aws-cdk": "2.235.0"
+     "aws-cdk": "^2.1104.0"
    },
    "bin": {
      "infrastructure": "bin/infrastructure.js"
package/package.json CHANGED
@@ -18,7 +18,7 @@
    "knip:fix": "knip --fix",
    "sg:scan": "ast-grep scan",
    "build": "tsc",
-   "prepare": "tsc || husky install || true",
+   "prepare": "$npm_execpath run build || husky install || true",
    "lisa:update": "npx @codyswann/lisa@latest .",
    "start": "node dist/index.js",
    "dev": "tsx src/index.ts",
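The new `prepare` script relies on `npm_execpath`, an environment variable that npm, yarn, pnpm, and bun all set to the path of the running package manager, so `$npm_execpath run build` re-enters the same tool instead of hard-coding `tsc`. The `a || b || true` chain means a failed build falls back to `husky install`, and `|| true` keeps `prepare` from ever failing the install. A sketch of that fallback chain (`run_prepare` and its stand-in commands are illustrative):

```shell
# Simulate the `build || husky install || true` chain from "prepare".
# `$1` stands in for `$npm_execpath run build`; the echo stands in
# for `husky install`; `|| true` guarantees a zero exit status.
run_prepare() {
  "$1" || echo "fallback: husky install" || true
}

run_prepare true    # build succeeds: no fallback output
run_prepare false   # build fails: fallback runs
echo "exit=$?"      # → exit=0 either way
```

The same change appears in both templated `package.json` hunks in this release.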
@@ -74,7 +74,7 @@
  },
  "resolutions": {},
  "name": "@codyswann/lisa",
- "version": "1.11.11",
+ "version": "1.12.1",
  "description": "Claude Code governance framework that applies guardrails, guidance, and automated enforcement to projects",
  "main": "dist/index.js",
  "bin": {
@@ -15,7 +15,7 @@
    "knip:fix": "knip --fix",
    "sg:scan": "ast-grep scan",
    "build": "tsc",
-   "prepare": "tsc || husky install || true",
+   "prepare": "$npm_execpath run build || husky install || true",
    "lisa:update": "npx @codyswann/lisa@latest ."
  },
  "devDependencies": {
@@ -50,6 +50,7 @@
    "knip": "^5.0.0",
    "@ast-grep/cli": "^0.40.4",
    "@jest/test-sequencer": "^30.2.0",
+   "@jest/globals": "^30.0.0",
    "@types/jest": "^30.0.0",
    "jest": "^30.0.0",
    "ts-jest": "^29.4.6"
@@ -1,164 +0,0 @@
- ---
- name: prompt-complexity-scorer
- description: Evaluates user prompts for effort and complexity on a 1-10 scale. This skill should be invoked on every user request to determine if the request warrants planning via a project. For scores 5-10, the agent suggests creating a project at projects/<date>-<project-name>/ to enable task tracking and team collaboration.
- model: haiku
- ---
-
- # Prompt Complexity Scorer
-
- This skill evaluates user prompts to determine if they require planning before implementation.
-
- ## Skip Evaluation
-
- **Do not evaluate** prompts that are explicit slash command invocations. If the user's prompt starts with `/`, they are already invoking a workflow and no complexity scoring is needed.
-
- Examples of prompts to **skip** (just execute the command):
- `/project:bootstrap @specs/add-auth.md`
- `/git:commit`
- `/project:implement @projects/2026-01-26-feature`
-
- Examples of prompts to **evaluate**:
- "Add WebSocket support to this project"
- "Make the app faster"
- "Refactor the authentication system"
-
- ## Scoring Criteria
-
- Score each prompt on a 1-10 scale based on these factors:
-
- | Factor | Low (1-3) | Medium (4-6) | High (7-10) |
- |--------|-----------|--------------|-------------|
- | **Scope** | Single file/function | Multiple files/module | Multiple systems/services |
- | **Clarity** | Specific, well-defined | Some ambiguity | Vague, open-ended |
- | **Dependencies** | None or obvious | Some coordination | Multiple unknown deps |
- | **Unknowns** | None | Some research needed | Significant discovery |
- | **Risk** | Low/reversible | Moderate | High/architectural |
-
- ### Score Calculation
-
- 1. Evaluate each factor independently
- 2. Average the factor scores
- 3. Round to nearest integer
-
- ## Behavior by Score
-
- ### Score 1-4: Continue Normally
-
- Do not mention the scoring. Proceed with the request immediately.
-
- ### Score 5-10: Suggest Project
-
- Pause and ask the user:
-
- ```text
- This request scores [X]/10 on complexity. I suggest creating a project to track this work properly.
-
- Would you like me to create `projects/<date>-<suggested-name>/` with a brief.md?
- ```
-
- Where `<date>` is today's date in YYYY-MM-DD format and `<suggested-name>` is a kebab-case name derived from the request (e.g., "2026-01-24-add-websockets", "2026-01-24-refactor-auth-system").
-
- ## Project Setup
-
- When creating a project, create the directory structure and brief.md:
-
- ```bash
- # Get today's date
- DATE=$(date +%Y-%m-%d)
- mkdir -p projects/${DATE}-<suggested-name>/tasks
- ```
-
- ```markdown
- # <Title derived from prompt>
-
- ## Original Request
-
- <User's exact prompt/request>
-
- ## Goals
-
- <Bullet points summarizing what needs to be accomplished>
-
- ## Notes
-
- <Any additional context or constraints mentioned>
- ```
-
- After creating the project, inform the user:
-
- ```text
- Project created at `projects/<date>-<suggested-name>/`.
-
- You can now:
- - Run `/project:bootstrap @projects/<date>-<suggested-name>` for full research and planning
- - Or continue with the request and tasks will be tracked in this project
- ```
-
- **IMPORTANT**: After creating the project, set the active project context by creating a marker file:
-
- ```bash
- echo "${DATE}-<suggested-name>" > .claude-active-project
- ```
-
- This enables automatic task syncing to the project directory.
-
- ## Examples
-
- ### Example 1: Simple Request (Score 1)
-
- **Prompt**: "Fix the typo in the error message on line 45 of user.service.ts"
-
- **Factors**:
- Scope: 1 (single line)
- Clarity: 1 (exact location specified)
- Dependencies: 1 (none)
- Unknowns: 1 (none)
- Risk: 1 (trivial change)
-
- **Score**: 1 → Continue normally
-
- ### Example 2: Complex Request (Score 7)
-
- **Prompt**: "Add WebSocket support to this project"
-
- **Factors**:
- Scope: 8 (new system, multiple files)
- Clarity: 6 (what kind of WebSocket? real-time features?)
- Dependencies: 7 (infrastructure, client changes)
- Unknowns: 7 (architecture decisions needed)
- Risk: 7 (architectural change)
-
- **Score**: 7 → Suggest creating `projects/YYYY-MM-DD-add-websockets/`
-
- ### Example 3: Medium Request (Score 3)
-
- **Prompt**: "Add a new field 'nickname' to the User entity"
-
- **Factors**:
- Scope: 4 (entity, migration, possibly resolvers)
- Clarity: 2 (clear requirement)
- Dependencies: 3 (migration, schema)
- Unknowns: 2 (standard pattern)
- Risk: 3 (database change but reversible)
-
- **Score**: 3 → Continue normally
-
- ### Example 4: Nebulous Request (Score 8)
-
- **Prompt**: "Make the app faster"
-
- **Factors**:
- Scope: 9 (entire application)
- Clarity: 9 (completely vague)
- Dependencies: 8 (unknown until investigated)
- Unknowns: 9 (what's slow? why?)
- Risk: 6 (depends on changes)
-
- **Score**: 8 → Suggest creating `projects/YYYY-MM-DD-performance-optimization/`
-
- ## Important Notes
-
- This evaluation should be quick and silent for low-complexity requests
- Never mention the scoring system for scores 1-4
- For borderline cases (score 4-5), lean toward continuing normally
- The goal is to catch truly complex/nebulous requests that benefit from planning
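The removed skill's "Score Calculation" section (average the five factor scores, round to nearest integer) is mechanical enough to sketch. The loop and variable names below are illustrative; the factor values are taken from the skill's Example 2:

```shell
# Average five factor scores and round to the nearest integer,
# as in the removed skill's "Score Calculation" steps.
# Factors from Example 2 ("Add WebSocket support"): 8 6 7 7 7
sum=0; n=0
for f in 8 6 7 7 7; do
  sum=$((sum + f))
  n=$((n + 1))
done
# Integer rounding: add half the divisor before dividing.
score=$(( (sum + n / 2) / n ))
echo "$score"   # → 7
```

The result (7) matches the score the skill assigns in Example 2, which is why that request crossed the "suggest a project" threshold of 5.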
@@ -1,230 +0,0 @@
- # K6 Load Testing Workflow
-
- ## Overview
-
- The `k6-load-test.yml` is a reusable GitHub Actions workflow designed to run k6 performance tests as part of your CI/CD pipeline. It supports multiple test scenarios, flexible configuration, and both local and cloud execution.
-
- ## Quick Start
-
- ```yaml
- name: Deploy and Test
- on:
-   push:
-     branches: [main]
-
- jobs:
-   deploy:
-     # Your deployment steps
-
-   performance-test:
-     needs: deploy
-     uses: ./.github/workflows/k6-load-test.yml
-     with:
-       environment: production
-       test_scenario: smoke
-       base_url: https://api.example.com
-     secrets: inherit
- ```
-
- ## Workflow Inputs
-
- | Input | Type | Required | Default | Description |
- |-------|------|----------|---------|-------------|
- | `environment` | string | Yes | - | Target environment (staging, production) |
- | `test_scenario` | string | No | smoke | Test type: smoke, load, stress, spike, soak |
- | `base_url` | string | Yes | - | Base URL of application to test |
- | `k6_version` | string | No | latest | k6 version to use |
- | `test_duration` | string | No | - | Override test duration (e.g., 5m, 1h) |
- | `virtual_users` | number | No | - | Override number of virtual users |
- | `thresholds_config` | string | No | - | Path to custom thresholds JSON |
- | `test_script` | string | No | .github/k6/scripts/default-test.js | Path to k6 test script |
- | `fail_on_threshold` | boolean | No | true | Fail workflow if thresholds not met |
- | `upload_results` | boolean | No | true | Upload test results as artifacts |
- | `cloud_run` | boolean | No | false | Run tests on k6 Cloud |
-
- ## Workflow Outputs
-
- | Output | Description |
- |--------|-------------|
- | `test_passed` | Whether the test passed all thresholds (true/false) |
- | `results_url` | URL to test results artifact |
- | `summary` | Test execution summary |
-
- ## Workflow Secrets
-
- | Secret | Required | Description |
- |--------|----------|-------------|
- | `K6_CLOUD_TOKEN` | No | k6 Cloud API token for cloud runs |
- | `CUSTOM_HEADERS` | No | Custom headers for authenticated endpoints (JSON) |
-
- ## Test Scenarios
-
- ### Available Scenarios
-
- **smoke**: Quick validation (1 min, 1 VU)
- **load**: Normal traffic (9 min, 10 VUs)
- **stress**: Find breaking points (33 min, up to 200 VUs)
- **spike**: Sudden traffic changes (8 min, 5→100→5 VUs)
- **soak**: Extended duration (30+ min, 10 VUs)
-
- ### Selecting a Scenario
-
- See the [Scenario Selection Guide](../k6/SCENARIO_SELECTION_GUIDE.md) for detailed guidance.
-
- ## Authentication
-
- For testing protected endpoints:
-
- ```yaml
- uses: ./.github/workflows/k6-load-test.yml
- with:
-   base_url: https://api.example.com
- secrets:
-   CUSTOM_HEADERS: |
-     {
-       "Authorization": "Bearer ${{ secrets.API_TOKEN }}",
-       "X-API-Key": "${{ secrets.API_KEY }}"
-     }
- ```
-
- ## Custom Test Scripts
-
- Use your own test script:
-
- ```yaml
- with:
-   test_script: .github/k6/scripts/my-custom-test.js
- ```
-
- ## Custom Thresholds
-
- Use environment-specific thresholds:
-
- ```yaml
- with:
-   thresholds_config: .github/k6/thresholds/production.json
- ```
-
- ## k6 Cloud Integration
-
- For high-concurrency tests:
-
- ```yaml
- uses: ./.github/workflows/k6-load-test.yml
- with:
-   cloud_run: true
- secrets:
-   K6_CLOUD_TOKEN: ${{ secrets.K6_CLOUD_TOKEN }}
- ```
-
- ## Artifacts
-
- Test results are automatically uploaded as artifacts including:
- JSON results (`results.json`)
- CSV results (`results.csv`)
- HTML report (`report.html`)
- Scenario-specific summaries
-
- Artifacts are retained for 30 days.
-
- ## Error Handling
-
- The workflow handles various error scenarios:
- Base URL not accessible
- Test script not found
- Threshold failures
- k6 installation issues
-
- See the [Troubleshooting Guide](../k6/INTEGRATION_GUIDE.md#troubleshooting) for solutions.
-
- ## Integration Patterns
-
- For comprehensive integration examples, see the [Integration Guide](../k6/INTEGRATION_GUIDE.md).
-
- ### Integration with Release and Deployment Workflows
-
- The k6 load testing workflow is designed to integrate seamlessly with your release and deployment processes. Here's the recommended pattern:
-
- ```yaml
- # deploy.yml.example - Complete release and deployment with load testing
- name: 🚀 Release and Deploy
-
- on:
-   push:
-     branches: [main, staging, dev]
-
- jobs:
-   # Step 1: Create a release
-   release:
-     uses: ./.github/workflows/release.yml
-     with:
-       environment: ${{ github.ref_name }}
-       # ... other inputs
-
-   # Step 2: Deploy to your infrastructure
-   deploy:
-     needs: release
-     runs-on: ubuntu-latest
-     outputs:
-       environment_url: ${{ steps.deploy.outputs.environment_url }}
-     steps:
-       # Your deployment logic here
-
-   # Step 3: Run load tests (staging only)
-   load_testing:
-     needs: [release, deploy]
-     if: |
-       needs.deploy.result == 'success' &&
-       github.ref_name == 'staging'
-     uses: ./.github/workflows/load-test.yml
-     with:
-       environment: staging
-       test_scenario: smoke
-       base_url: ${{ needs.deploy.outputs.environment_url }}
-       fail_on_threshold: false # Don't fail the deployment
-     secrets: inherit
- ```
-
- This pattern ensures:
- Releases are created first with proper versioning
- Deployments happen after successful releases
- Load tests run automatically for staging deployments
- Production deployments skip load testing (run manually if needed)
-
- ## File Structure
-
- ```
- .github/
- ├── workflows/
- │   └── k6-load-test.yml   # This workflow
- └── k6/
-     ├── scripts/           # Test scripts
-     ├── scenarios/         # Scenario configs
-     ├── thresholds/        # Threshold configs
-     └── examples/          # Usage examples
- ```
-
- ## Workflow Steps
-
- 1. **Checkout**: Get repository code
- 2. **Setup k6**: Install specified k6 version
- 3. **Prepare Environment**: Set up test variables
- 4. **Run Test**: Execute k6 with parameters
- 5. **Generate Summary**: Create test report
- 6. **Upload Results**: Store artifacts
- 7. **Comment on PR**: Add results to PR (if applicable)
-
- ## Best Practices
-
- 1. Start with smoke tests
- 2. Use appropriate scenarios per environment
- 3. Set realistic thresholds
- 4. Monitor resource usage during tests
- 5. Review results regularly
- 6. Update tests as application evolves
-
- ## Support
-
- [k6 Documentation](https://k6.io/docs/)
- [GitHub Actions Documentation](https://docs.github.com/en/actions)
- Project-specific guides in `.github/k6/`