@codyswann/lisa 1.11.11 → 1.12.0

This diff shows the changes between two publicly released versions of the package, as published to a supported registry. It is provided for informational purposes only.
package/README.md CHANGED
@@ -86,26 +86,21 @@ Lisa is designed for a **two-tier organizational model** that separates AI exper
 
 ### How Implementation Teams Work
 
- Once Lisa is applied to a project, developers have two paths:
+ Once Lisa is applied to a project, developers use a simple workflow:
 
- **Path 1: Just Type a Prompt**
+ **Simple Requests**
 
- Even if a developer just types a vague request, Lisa's built-in `prompt-complexity-scorer` skill automatically evaluates it:
+ For straightforward tasks, just describe what you need:
 
 ```
- Developer: "Make the app faster"
+ Developer: "Fix the typo in the login error message"
 
- Claude: This request scores 8/10 on complexity. I suggest writing it
- as a spec to plan it out properly.
-
- Would you like me to create `specs/performance-optimization.md`?
+ Claude: [Immediately fixes the typo with proper verification]
 ```
 
- Complex or vague prompts (score 5+) are automatically routed to the spec workflow. Simple, well-defined requests (score 1-4) proceed immediately. Developers don't need to know which path to take—Lisa routes them automatically.
-
- **Path 2: The Full Workflow**
+ **Complex Work: The Full Workflow**
 
- For planned work, the workflow is two commands:
+ For larger features or complex changes, use the spec-driven workflow:
 
 ```bash
  # 1. Create a spec file describing what you want
@@ -1198,8 +1193,6 @@ Publish Claude Code skills, commands, and agents as installable plugins via the
 - Community registry: [claude-plugins.dev](https://claude-plugins.dev/)
 
 **Lisa Components That Could Become Plugins:**
- - `coding-philosophy` skill - Teachable immutability and function structure patterns
- - `prompt-complexity-scorer` skill - Request complexity evaluation
 - `jsdoc-best-practices` skill - Documentation standards
 - `project:*` commands - Implementation workflows
 - Custom agents (skill-evaluator, codebase-analyzer)
@@ -126,8 +126,7 @@ Skills in `skills/` provide domain-specific knowledge:
 
 | Skill | Purpose |
 |-------|---------|
- | `coding-philosophy/` | Core coding principles |
- | `prompt-complexity-scorer/` | Task complexity evaluation |
+ | `jsdoc-best-practices/` | JSDoc documentation standards |
 | `skill-creator/` | Creating new skills |
 
 See each skill's `SKILL.md` for detailed documentation.
@@ -4,7 +4,6 @@ CRITICAL RULES:
 
 Always output "I'm tired boss" before starting any task, request or anything else.
 Always figure out the Package manager the project uses: !`ls package-lock.json yarn.lock pnpm-lock.yaml bun.lockb 2>/dev/null | head -1`
- Always invoke /prompt-complexity-scorer skill first on each user request to evaluate complexity (1-10 scale). For scores 5+, suggest writing to specs/<spec-name>.md before proceeding.
 Always invoke /jsdoc-best-practices skill when writing or reviewing JSDoc documentation to ensure "why" over "what" and proper tag usage
 Always read @package.json without limit or offset to understand what scripts and third party packages are used
 Always read @eslint.config.ts without limit or offset to understand this project's linting standards
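The package-manager rule above keys off which lockfile is present in the project root. A minimal sketch of that detection, wrapped in a hypothetical `detect_pm` helper (the rule itself only runs the bare `ls` one-liner):

```shell
# Hypothetical helper mirroring the rule's lockfile check.
# Maps the first lockfile found to its package manager name.
detect_pm() {
  lock=$(ls package-lock.json yarn.lock pnpm-lock.yaml bun.lockb 2>/dev/null | head -1)
  case "$lock" in
    package-lock.json) echo "npm" ;;
    yarn.lock)         echo "yarn" ;;
    pnpm-lock.yaml)    echo "pnpm" ;;
    bun.lockb)         echo "bun" ;;
    *)                 echo "unknown" ;;
  esac
}
```

Note that if several lockfiles coexist, `head -1` picks the alphabetically first one `ls` lists, which may not be the manager actually in use.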
@@ -1,5 +1,6 @@
 {
 "paths": [
- ".claude/skills/coding-philosophy"
+ ".claude/skills/coding-philosophy",
+ ".claude/skills/prompt-complexity-scorer"
 ]
 }
package/package.json CHANGED
@@ -74,7 +74,7 @@
 },
 "resolutions": {},
 "name": "@codyswann/lisa",
- "version": "1.11.11",
+ "version": "1.12.0",
 "description": "Claude Code governance framework that applies guardrails, guidance, and automated enforcement to projects",
 "main": "dist/index.js",
 "bin": {
@@ -50,6 +50,7 @@
 "knip": "^5.0.0",
 "@ast-grep/cli": "^0.40.4",
 "@jest/test-sequencer": "^30.2.0",
+ "@jest/globals": "^30.0.0",
 "@types/jest": "^30.0.0",
 "jest": "^30.0.0",
 "ts-jest": "^29.4.6"
@@ -1,164 +0,0 @@
- ---
- name: prompt-complexity-scorer
- description: Evaluates user prompts for effort and complexity on a 1-10 scale. This skill should be invoked on every user request to determine if the request warrants planning via a project. For scores 5-10, the agent suggests creating a project at projects/<date>-<project-name>/ to enable task tracking and team collaboration.
- model: haiku
- ---
-
- # Prompt Complexity Scorer
-
- This skill evaluates user prompts to determine if they require planning before implementation.
-
- ## Skip Evaluation
-
- **Do not evaluate** prompts that are explicit slash command invocations. If the user's prompt starts with `/`, they are already invoking a workflow and no complexity scoring is needed.
-
- Examples of prompts to **skip** (just execute the command):
- - `/project:bootstrap @specs/add-auth.md`
- - `/git:commit`
- - `/project:implement @projects/2026-01-26-feature`
-
- Examples of prompts to **evaluate**:
- - "Add WebSocket support to this project"
- - "Make the app faster"
- - "Refactor the authentication system"
-
- ## Scoring Criteria
-
- Score each prompt on a 1-10 scale based on these factors:
-
- | Factor | Low (1-3) | Medium (4-6) | High (7-10) |
- |--------|-----------|--------------|-------------|
- | **Scope** | Single file/function | Multiple files/module | Multiple systems/services |
- | **Clarity** | Specific, well-defined | Some ambiguity | Vague, open-ended |
- | **Dependencies** | None or obvious | Some coordination | Multiple unknown deps |
- | **Unknowns** | None | Some research needed | Significant discovery |
- | **Risk** | Low/reversible | Moderate | High/architectural |
-
- ### Score Calculation
-
- 1. Evaluate each factor independently
- 2. Average the factor scores
- 3. Round to nearest integer
-
- ## Behavior by Score
-
- ### Score 1-4: Continue Normally
-
- Do not mention the scoring. Proceed with the request immediately.
-
- ### Score 5-10: Suggest Project
-
- Pause and ask the user:
-
- ```text
- This request scores [X]/10 on complexity. I suggest creating a project to track this work properly.
-
- Would you like me to create `projects/<date>-<suggested-name>/` with a brief.md?
- ```
-
- Where `<date>` is today's date in YYYY-MM-DD format and `<suggested-name>` is a kebab-case name derived from the request (e.g., "2026-01-24-add-websockets", "2026-01-24-refactor-auth-system").
-
- ## Project Setup
-
- When creating a project, create the directory structure and brief.md:
-
- ```bash
- # Get today's date
- DATE=$(date +%Y-%m-%d)
- mkdir -p projects/${DATE}-<suggested-name>/tasks
- ```
-
- ```markdown
- # <Title derived from prompt>
-
- ## Original Request
-
- <User's exact prompt/request>
-
- ## Goals
-
- <Bullet points summarizing what needs to be accomplished>
-
- ## Notes
-
- <Any additional context or constraints mentioned>
- ```
-
- After creating the project, inform the user:
-
- ```text
- Project created at `projects/<date>-<suggested-name>/`.
-
- You can now:
- - Run `/project:bootstrap @projects/<date>-<suggested-name>` for full research and planning
- - Or continue with the request and tasks will be tracked in this project
- ```
-
- **IMPORTANT**: After creating the project, set the active project context by creating a marker file:
-
- ```bash
- echo "${DATE}-<suggested-name>" > .claude-active-project
- ```
-
- This enables automatic task syncing to the project directory.
-
- ## Examples
-
- ### Example 1: Simple Request (Score 1)
-
- **Prompt**: "Fix the typo in the error message on line 45 of user.service.ts"
-
- **Factors**:
- - Scope: 1 (single line)
- - Clarity: 1 (exact location specified)
- - Dependencies: 1 (none)
- - Unknowns: 1 (none)
- - Risk: 1 (trivial change)
-
- **Score**: 1 → Continue normally
-
- ### Example 2: Complex Request (Score 7)
-
- **Prompt**: "Add WebSocket support to this project"
-
- **Factors**:
- - Scope: 8 (new system, multiple files)
- - Clarity: 6 (what kind of WebSocket? real-time features?)
- - Dependencies: 7 (infrastructure, client changes)
- - Unknowns: 7 (architecture decisions needed)
- - Risk: 7 (architectural change)
-
- **Score**: 7 → Suggest creating `projects/YYYY-MM-DD-add-websockets/`
-
- ### Example 3: Medium Request (Score 3)
-
- **Prompt**: "Add a new field 'nickname' to the User entity"
-
- **Factors**:
- - Scope: 4 (entity, migration, possibly resolvers)
- - Clarity: 2 (clear requirement)
- - Dependencies: 3 (migration, schema)
- - Unknowns: 2 (standard pattern)
- - Risk: 3 (database change but reversible)
-
- **Score**: 3 → Continue normally
-
- ### Example 4: Nebulous Request (Score 8)
-
- **Prompt**: "Make the app faster"
-
- **Factors**:
- - Scope: 9 (entire application)
- - Clarity: 9 (completely vague)
- - Dependencies: 8 (unknown until investigated)
- - Unknowns: 9 (what's slow? why?)
- - Risk: 6 (depends on changes)
-
- **Score**: 8 → Suggest creating `projects/YYYY-MM-DD-performance-optimization/`
-
- ## Important Notes
-
- - This evaluation should be quick and silent for low-complexity requests
- - Never mention the scoring system for scores 1-4
- - For borderline cases (score 4-5), lean toward continuing normally
- - The goal is to catch truly complex/nebulous requests that benefit from planning
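The removed skill's score calculation (evaluate five factors independently, average, round to the nearest integer, suggest a project at 5+) can be sketched as below. The `FactorScores` type and `scoreComplexity`/`needsProject` names are illustrative, not part of the package:

```typescript
// Sketch of the removed skill's scoring procedure: average the five
// factor scores (each 1-10) and round to the nearest integer.
type FactorScores = {
  scope: number;
  clarity: number;
  dependencies: number;
  unknowns: number;
  risk: number;
};

function scoreComplexity(f: FactorScores): number {
  const values = [f.scope, f.clarity, f.dependencies, f.unknowns, f.risk];
  const average = values.reduce((sum, v) => sum + v, 0) / values.length;
  return Math.round(average);
}

// Scores of 5+ would have triggered the "suggest a project" prompt.
const needsProject = (f: FactorScores): boolean => scoreComplexity(f) >= 5;
```

For the "Make the app faster" example above (9, 9, 8, 9, 6), the average is 8.2, which rounds to 8, matching the skill's worked example; the 'nickname' field example (4, 2, 3, 2, 3) averages 2.8 and rounds to 3.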