@esoteric-logic/praxis-harness 2.1.1 → 2.3.0

@@ -0,0 +1,38 @@
+ #!/bin/bash
+ set -euo pipefail
+
+ echo "=== Praxis: Installing api kit ==="
+ echo ""
+
+ PASS=0
+ TOTAL=0
+
+ check() {
+   TOTAL=$((TOTAL + 1))
+   if command -v "$1" &>/dev/null; then
+     echo " ✓ $1 found ($(command -v "$1"))"
+     PASS=$((PASS + 1))
+   else
+     echo " ✗ $1 not found"
+     echo " Install: $2"
+   fi
+ }
+
+ echo "Checking optional CLI tools..."
+ echo ""
+
+ check "jq" "brew install jq OR apt-get install jq"
+ check "curl" "pre-installed on macOS/Linux"
+
+ echo ""
+ echo " $PASS/$TOTAL tools found"
+ echo ""
+
+ echo "Note: This kit uses Claude's built-in analysis capabilities."
+ echo "No external API linting tools required."
+ echo ""
+ echo "Commands available: /api:spec, /api:review, /api:contract"
+ echo ""
+ echo "=== api kit check complete ==="
+ echo "Activate with: /kit:api"
+ echo ""
@@ -0,0 +1,63 @@
+ # API Design — Rules
+ # Scope: Loads when api kit is active
+ # Paths: **/*.ts, **/*.js, **/*.py, **/*.go (API route files)
+
+ ## Invariants — BLOCK on violation
+
+ ### RESTful naming
+ - Resources are nouns, plural: `/users`, `/orders`, not `/getUser`, `/createOrder`
+ - Nested resources for relationships: `/users/{id}/orders`
+ - No verbs in URLs — HTTP methods convey the action
+ - Query parameters for filtering, sorting, pagination: `?status=active&sort=created_at&page=2`
+
+ ### HTTP status codes
+ - 200: success with body. 201: created. 204: success, no body.
+ - 400: client error (validation). 401: not authenticated. 403: not authorized. 404: not found.
+ - 409: conflict. 422: unprocessable entity.
+ - 500: server error (never leak internals).
+ - Never return 200 with an error body — use the appropriate 4xx/5xx code.
+
+ ### Error response format
+ All error responses use a consistent structure:
+ ```json
+ {
+   "error": {
+     "code": "VALIDATION_ERROR",
+     "message": "Human-readable description",
+     "details": [{"field": "email", "issue": "required"}]
+   }
+ }
+ ```
+ Never return raw stack traces or internal error messages.
+
+ ### Versioning
+ - URL path versioning preferred: `/v1/users`, `/v2/users`
+ - Header versioning acceptable: `Accept: application/vnd.api.v2+json`
+ - Never break existing versions without a migration path
+ - Deprecation: announce in response headers before removal
+
+ ### Authentication
+ - Bearer tokens in the Authorization header, not query parameters
+ - API keys in headers, never in URLs (URLs are logged)
+ - Short-lived tokens with a refresh mechanism for user-facing APIs
+
+ ## Conventions — WARN on violation
+
+ ### Pagination
+ - Cursor-based for large/real-time datasets
+ - Offset-based acceptable for small, static datasets
+ - Always return: `total`, `page`/`cursor`, `per_page`, `next`/`prev` links
+
+ ### Request/Response
+ - Use camelCase for JSON fields (or snake_case — be consistent per project)
+ - Dates in ISO 8601: `2024-01-15T10:30:00Z`
+ - IDs as strings (UUIDs preferred over sequential integers for external APIs)
+ - Envelope responses only when metadata is needed — prefer flat responses
+
+ ### Rate limiting
+ - Return `X-RateLimit-Limit`, `X-RateLimit-Remaining`, `X-RateLimit-Reset` headers
+ - 429 status code with `Retry-After` header when exceeded
+
+ ### Documentation
+ - Every endpoint must have: method, path, description, request body, response body, error codes
+ - OpenAPI 3.x spec as source of truth — code generates from spec or spec generates from code
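The error-envelope rule above can be sketched in a few lines of Python. This is a minimal illustration only; the `error_response` helper is hypothetical, not part of the kit:

```python
import json

def error_response(code, message, details=None):
    """Build the consistent error envelope described in the rules above."""
    body = {"error": {"code": code, "message": message}}
    if details:
        body["error"]["details"] = details
    return body

resp = error_response(
    "VALIDATION_ERROR",
    "Human-readable description",
    [{"field": "email", "issue": "required"}],
)
print(json.dumps(resp))
```

A handler would serialize this body alongside the matching 4xx/5xx status code, never a 200.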
@@ -0,0 +1,13 @@
+ #!/bin/bash
+ set -euo pipefail
+
+ echo "=== Praxis: Removing api kit ==="
+ echo ""
+ echo "This kit has no npm skills or MCP servers to remove."
+ echo ""
+ echo "To fully remove:"
+ echo " 1. Remove /kit:api from any project CLAUDE.md '## Active kit' sections"
+ echo " 2. Delete this directory: rm -rf ~/.claude/kits/api"
+ echo ""
+ echo "=== api kit removed ==="
+ echo ""
@@ -0,0 +1,48 @@
+ ---
+ name: data
+ version: 1.0.0
+ description: "Data engineering — schema design, migration planning, query optimization"
+ activation: /kit:data
+ deactivation: /kit:off
+ skills_chain:
+   - phase: schema
+     skills: []
+     status: planned
+   - phase: migration
+     skills: []
+     status: planned
+   - phase: query
+     skills: []
+     status: planned
+ mcp_servers: []
+ rules:
+   - data.md
+ removal_condition: >
+   Remove when data operations are fully handled by a dedicated DBA tool
+   with no manual Claude-driven operations remaining.
+ ---
+
+ # Data Kit
+
+ ## Purpose
+ Enforce data engineering best practices for schema design, migrations,
+ and query optimization across SQL and NoSQL databases.
+
+ ## Skills Chain
+
+ | # | Phase | Command | What It Provides |
+ |---|-------|---------|-----------------|
+ | 1 | Schema | `/data:schema` | Schema design review with normalization analysis |
+ | 2 | Migration | `/data:migration` | Migration planning with rollback strategy |
+ | 3 | Query | `/data:query` | Query optimization and N+1 detection |
+
+ ## Workflow Integration
+
+ This kit operates WITHIN the Praxis workflow:
+ - **Praxis** structures the work (discuss → plan → execute → verify → simplify → ship)
+ - **This kit** adds data-specific rules and commands
+
+ ## Prerequisites
+
+ Run `install.sh` in this directory to check for required CLI tools.
+ Verify with `/kit:data` after install.
@@ -0,0 +1,23 @@
+ ---
+ description: "Plan a database migration with rollback strategy and safety checks"
+ ---
+
+ # data:migration
+
+ ## Steps
+
+ 1. **Understand the change** — what schema changes are needed and why
+ 2. **Classify risk level:**
+    - **Low**: add nullable column, add index, add table
+    - **Medium**: rename column, add NOT NULL with default, modify index
+    - **High**: drop column/table, change column type, data transformation
+ 3. **Generate migration:**
+    - Up script (apply changes)
+    - Down script (reverse changes)
+    - Data migration script (if needed, separate from schema migration)
+ 4. **Safety checklist:**
+    - [ ] Down migration tested?
+    - [ ] Production data volume considered? (large table ALTERs may lock)
+    - [ ] Backward compatible? (old code works with new schema during deploy)
+    - [ ] Indexes added for new query patterns?
+ 5. **Write plan** to vault `specs/migration-plan-{date}.md`
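A low-risk "add nullable column" migration from the classification above, with its down script exercised as the safety checklist requires, can be sketched against an in-memory SQLite database. The table and column names are hypothetical, and `DROP COLUMN` needs SQLite 3.35+:

```python
import sqlite3

# Hypothetical reversible migration: add a nullable column (low risk).
UP = "ALTER TABLE users ADD COLUMN nickname TEXT"    # apply
DOWN = "ALTER TABLE users DROP COLUMN nickname"      # reverse (SQLite 3.35+)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")

conn.execute(UP)
cols = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
assert "nickname" in cols      # up script applied

conn.execute(DOWN)             # down migration tested, per the checklist
cols = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
assert "nickname" not in cols  # schema restored
```

A high-risk change (type change, drop) would instead deprecate first and remove in a later migration, per the data rules.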
@@ -0,0 +1,21 @@
+ ---
+ description: "Review SQL queries and data access patterns for performance and correctness"
+ ---
+
+ # data:query
+
+ ## Steps
+
+ 1. **Identify queries** — scan for SQL in code, ORM queries, or user-provided queries
+ 2. **Analyze each query:**
+    - Missing indexes (check WHERE, JOIN, ORDER BY columns)
+    - N+1 patterns (loop with individual queries instead of batch)
+    - Unbounded results (missing LIMIT/pagination)
+    - SELECT * usage
+    - Correlated subqueries that could be joins
+ 3. **Check data access patterns:**
+    - Connection pooling configuration
+    - Transaction scope (too broad or too narrow)
+    - Read replica usage for read-heavy workloads
+ 4. **Suggest optimizations** with before/after examples
+ 5. **Write review** to vault `specs/query-review-{date}.md`
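The N+1 pattern named in step 2 can be shown as a before/after pair (step 4) using SQLite; the `orders` table is a hypothetical example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER)")
conn.executemany("INSERT INTO orders (user_id) VALUES (?)", [(1,), (1,), (2,)])

user_ids = [1, 2, 3]

# Before: N+1 — one round trip per user inside a loop.
n_plus_1 = {uid: conn.execute(
    "SELECT COUNT(*) FROM orders WHERE user_id = ?", (uid,)).fetchone()[0]
    for uid in user_ids}

# After: one batched, parameterized query with an IN list.
placeholders = ",".join("?" * len(user_ids))
batched = dict(conn.execute(
    f"SELECT user_id, COUNT(*) FROM orders WHERE user_id IN ({placeholders}) "
    "GROUP BY user_id", user_ids))

# Same answers, one round trip instead of N.
assert all(batched.get(uid, 0) == n_plus_1[uid] for uid in user_ids)
```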
@@ -0,0 +1,22 @@
+ ---
+ description: "Review database schema design for normalization, indexing, and best practices"
+ ---
+
+ # data:schema
+
+ ## Steps
+
+ 1. **Identify schema files** — find migration files, model definitions, or SQL schemas
+ 2. **Analyze structure:**
+    - Normalization level (1NF → 3NF)
+    - Primary key strategy
+    - Foreign key relationships and ON DELETE behavior
+    - Index coverage on foreign keys and query columns
+    - Missing created_at/updated_at timestamps
+ 3. **Check for anti-patterns:**
+    - God tables (>20 columns)
+    - Missing indexes on foreign keys
+    - Polymorphic associations without proper constraints
+    - Over-normalization causing excessive joins
+ 4. **Present findings** by severity
+ 5. **Write review** to vault `specs/schema-review-{date}.md`
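A schema that passes the checks in step 2 (explicit ON DELETE, timestamps, indexed foreign key) can be sketched in SQLite; the tables are hypothetical examples:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        created_at TEXT NOT NULL DEFAULT (datetime('now')),
        updated_at TEXT NOT NULL DEFAULT (datetime('now'))
    )""")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
        created_at TEXT NOT NULL DEFAULT (datetime('now')),
        updated_at TEXT NOT NULL DEFAULT (datetime('now'))
    )""")
# Index the foreign key so joins and CASCADE deletes don't scan the table.
conn.execute("CREATE INDEX idx_orders_user_id ON orders (user_id)")
```

The index name follows the `idx_{table}_{columns}` convention from the data rules file.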
@@ -0,0 +1,40 @@
+ #!/bin/bash
+ set -euo pipefail
+
+ echo "=== Praxis: Installing data kit ==="
+ echo ""
+
+ PASS=0
+ TOTAL=0
+
+ check() {
+   TOTAL=$((TOTAL + 1))
+   if command -v "$1" &>/dev/null; then
+     echo " ✓ $1 found ($(command -v "$1"))"
+     PASS=$((PASS + 1))
+   else
+     echo " ✗ $1 not found (optional)"
+     echo " Install: $2"
+   fi
+ }
+
+ echo "Checking optional CLI tools..."
+ echo ""
+
+ check "psql" "brew install postgresql OR apt-get install postgresql-client"
+ check "mysql" "brew install mysql-client OR apt-get install mysql-client"
+ check "mongosh" "brew install mongosh OR https://www.mongodb.com/try/download/shell"
+ check "jq" "brew install jq OR apt-get install jq"
+
+ echo ""
+ echo " $PASS/$TOTAL tools found"
+ echo ""
+
+ echo "Note: This kit uses Claude's built-in analysis for schema and query review."
+ echo "Database CLI tools are needed only for live query testing."
+ echo ""
+ echo "Commands available: /data:schema, /data:migration, /data:query"
+ echo ""
+ echo "=== data kit check complete ==="
+ echo "Activate with: /kit:data"
+ echo ""
@@ -0,0 +1,43 @@
+ # Data Engineering — Rules
+ # Scope: Loads when data kit is active
+ # Paths: **/*.sql, **/migrations/**, **/models/**, **/schema/**
+
+ ## Invariants — BLOCK on violation
+
+ ### Migration safety
+ - Every migration must be reversible — include both up and down scripts
+ - Never drop columns or tables in a single migration — deprecate first, remove later
+ - Add columns as nullable or with defaults — never add NOT NULL without a default to existing tables
+ - Test migrations against a copy of production data before applying
+ - No data transformations in schema migrations — separate data migrations
+
+ ### Query safety
+ - No `SELECT *` in production code — specify columns explicitly
+ - No unbounded queries — always include LIMIT or pagination
+ - No raw SQL string interpolation — use parameterized queries
+ - Validate that DELETE/UPDATE statements have WHERE clauses
+
+ ### Schema design
+ - Primary keys on every table — prefer UUIDs for distributed systems, auto-increment for single-node
+ - Foreign keys with explicit ON DELETE behavior (CASCADE, SET NULL, RESTRICT)
+ - Created/updated timestamps on every table
+ - Indexes on all foreign keys and frequently queried columns
+
+ ## Conventions — WARN on violation
+
+ ### Naming
+ - Tables: plural, snake_case (`user_accounts`, not `UserAccount`)
+ - Columns: snake_case, descriptive (`created_at`, not `ts`)
+ - Indexes: `idx_{table}_{columns}` pattern
+ - Constraints: `fk_{table}_{ref_table}`, `uq_{table}_{columns}`
+
+ ### Performance
+ - Detect N+1 query patterns — suggest eager loading or joins
+ - Large tables (>1M rows estimated): require index analysis before new queries
+ - Avoid correlated subqueries — prefer joins or CTEs
+ - Connection pooling required for production applications
+
+ ### Normalization
+ - Aim for 3NF unless denormalization is justified by read performance
+ - Document denormalization decisions as ADRs in vault `specs/`
+ - JSON columns acceptable for truly flexible schemas — not for avoiding normalization
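The "no raw SQL string interpolation" invariant above is worth a concrete demonstration. A minimal sketch with SQLite (the table is a hypothetical example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

malicious = "' OR '1'='1"

# Parameterized query: the driver treats the input purely as data.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
assert rows == []  # the injection attempt matches nothing

# The interpolated form, f"... WHERE name = '{malicious}'", would instead
# evaluate to WHERE name = '' OR '1'='1' and return every row — exactly
# the string-interpolation pattern this rule blocks.
```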
@@ -0,0 +1,14 @@
+ #!/bin/bash
+ set -euo pipefail
+
+ echo "=== Praxis: Removing data kit ==="
+ echo ""
+ echo "This kit has no npm skills or MCP servers to remove."
+ echo "Database CLI tools are system-level and not managed by this kit."
+ echo ""
+ echo "To fully remove:"
+ echo " 1. Remove /kit:data from any project CLAUDE.md '## Active kit' sections"
+ echo " 2. Delete this directory: rm -rf ~/.claude/kits/data"
+ echo ""
+ echo "=== data kit removed ==="
+ echo ""
@@ -0,0 +1,49 @@
+ ---
+ name: security
+ version: 1.0.0
+ description: "Security review — threat modeling, IAM audit, OWASP checks, secrets management"
+ activation: /kit:security
+ deactivation: /kit:off
+ skills_chain:
+   - phase: threat-model
+     skills: []
+     status: planned
+   - phase: iam-review
+     skills: []
+     status: planned
+   - phase: audit
+     skills: []
+     status: planned
+ mcp_servers: []
+ rules:
+   - security.md
+ removal_condition: >
+   Remove when security review is fully handled by a dedicated SAST/DAST pipeline
+   with no manual Claude-driven operations remaining.
+ ---
+
+ # Security Kit
+
+ ## Purpose
+ Structured security analysis for applications and infrastructure.
+ Covers threat modeling, IAM policy review, and OWASP Top 10 auditing.
+
+ ## Skills Chain
+
+ | # | Phase | Command | What It Provides |
+ |---|-------|---------|-----------------|
+ | 1 | Threat Model | `/security:threat-model` | STRIDE-based threat modeling for the current system |
+ | 2 | IAM Review | `/security:iam-review` | Review IAM policies, roles, permissions for least privilege |
+ | 3 | Audit | `/security:audit` | OWASP Top 10 check against codebase |
+
+ ## Workflow Integration
+
+ This kit operates WITHIN the Praxis workflow:
+ - **Praxis** structures the work (discuss → plan → execute → verify → simplify → ship)
+ - **This kit** adds security-specific rules and commands
+ - Extends the base `secret-scan` hook with deeper analysis
+
+ ## Prerequisites
+
+ Run `install.sh` in this directory to check for required CLI tools.
+ Verify with `/kit:security` after install.
@@ -0,0 +1,19 @@
+ ---
+ description: "Review IAM policies and permissions for least-privilege compliance"
+ ---
+
+ # security:iam-review
+
+ ## Steps
+
+ 1. **Identify IAM scope** — cloud provider (AWS/Azure/GCP), service accounts, user roles
+ 2. **Collect policies** — read IAM policy files, role definitions, permission sets
+ 3. **Check for violations:**
+    - Wildcard permissions (`*` actions or resources)
+    - Overly broad roles (admin/owner where reader/contributor suffices)
+    - Service accounts with user-level permissions
+    - Cross-account access without justification
+    - Unused roles or permissions (if access logs available)
+ 4. **Present findings** by severity (Critical/Major/Minor)
+ 5. **Recommend** least-privilege alternatives for each violation
+ 6. **Write review** to vault `specs/iam-review-{date}.md`
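The wildcard check in step 3 can be sketched for an AWS-style JSON policy; the policy document and `wildcard_violations` helper are hypothetical illustrations, not kit code:

```python
import json

policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::logs/*"},
    {"Effect": "Allow", "Action": "*", "Resource": "*"}
  ]
}
""")

def wildcard_violations(policy):
    """Flag statements whose Action or Resource is a bare '*'."""
    findings = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        for key in ("Action", "Resource"):
            values = stmt.get(key)
            values = values if isinstance(values, list) else [values]
            if "*" in values:
                findings.append((i, key))
    return findings

print(wildcard_violations(policy))  # → [(1, 'Action'), (1, 'Resource')]
```

Note that a scoped wildcard like `arn:aws:s3:::logs/*` is not flagged here; only bare `*` grants are, matching the "wildcard permissions" violation above.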
@@ -0,0 +1,23 @@
+ ---
+ description: "OWASP Top 10 check against the current codebase"
+ ---
+
+ # security:audit
+
+ ## Steps
+
+ 1. **Scan codebase** for OWASP Top 10 (2021) vulnerabilities:
+    - A01: Broken Access Control
+    - A02: Cryptographic Failures
+    - A03: Injection (SQL, NoSQL, OS command, LDAP)
+    - A04: Insecure Design
+    - A05: Security Misconfiguration
+    - A06: Vulnerable and Outdated Components
+    - A07: Identification and Authentication Failures
+    - A08: Software and Data Integrity Failures
+    - A09: Security Logging and Monitoring Failures
+    - A10: Server-Side Request Forgery (SSRF)
+ 2. **Launch subagent** (follow `/subagent` protocol) with role: "Security auditor"
+ 3. **Run external tools** if available (trivy, semgrep) for additional coverage
+ 4. **Present findings** grouped by OWASP category with severity
+ 5. **Write audit report** to vault `specs/security-audit-{date}.md`
@@ -0,0 +1,20 @@
+ ---
+ description: "STRIDE-based threat modeling for the current system"
+ ---
+
+ # security:threat-model
+
+ ## Steps
+
+ 1. **Identify system scope** — ask user for the component or feature to model
+ 2. **Map data flows** — identify trust boundaries, data stores, external entities, processes
+ 3. **Apply STRIDE** for each component crossing a trust boundary:
+    - **S**poofing — can identity be faked?
+    - **T**ampering — can data be modified in transit/at rest?
+    - **R**epudiation — can actions be denied without audit trail?
+    - **I**nformation Disclosure — can data leak to unauthorized parties?
+    - **D**enial of Service — can the component be overwhelmed?
+    - **E**levation of Privilege — can a user gain unauthorized access?
+ 4. **Rate each threat**: likelihood (1-3) × impact (1-3) = risk score
+ 5. **Recommend mitigations** for high-risk threats
+ 6. **Write threat model** to vault `specs/threat-model-{date}-{component}.md`
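The scoring in step 4 is simple arithmetic, sketched below with hypothetical threat entries; the "HIGH at score ≥ 6" cutoff is an assumption, since the steps above don't define the high-risk threshold:

```python
# likelihood (1-3) × impact (1-3) = risk score, per step 4 above.
threats = [
    ("Spoofing: forged session cookie", 2, 3),
    ("Tampering: unsigned webhook payload", 3, 2),
    ("DoS: unthrottled login endpoint", 1, 2),
]

scored = sorted(
    ((name, likelihood * impact) for name, likelihood, impact in threats),
    key=lambda t: t[1],
    reverse=True,
)
for name, score in scored:
    flag = "HIGH" if score >= 6 else "low/med"  # threshold is an assumption
    print(f"{score}  {flag:7s} {name}")
```

Sorting by score surfaces the threats that step 5 should propose mitigations for first.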
@@ -0,0 +1,39 @@
+ #!/bin/bash
+ set -euo pipefail
+
+ echo "=== Praxis: Installing security kit ==="
+ echo ""
+
+ PASS=0
+ TOTAL=0
+
+ check() {
+   TOTAL=$((TOTAL + 1))
+   if command -v "$1" &>/dev/null; then
+     echo " ✓ $1 found ($(command -v "$1"))"
+     PASS=$((PASS + 1))
+   else
+     echo " ✗ $1 not found (optional)"
+     echo " Install: $2"
+   fi
+ }
+
+ echo "Checking optional CLI tools..."
+ echo ""
+
+ check "trivy" "brew install trivy OR https://aquasecurity.github.io/trivy"
+ check "semgrep" "pip install semgrep OR brew install semgrep"
+ check "rg" "brew install ripgrep OR apt-get install ripgrep"
+
+ echo ""
+ echo " $PASS/$TOTAL tools found"
+ echo ""
+
+ echo "Note: This kit uses Claude's built-in analysis for most checks."
+ echo "External tools enhance scanning but are not required."
+ echo ""
+ echo "Commands available: /security:threat-model, /security:iam-review, /security:audit"
+ echo ""
+ echo "=== security kit check complete ==="
+ echo "Activate with: /kit:security"
+ echo ""
@@ -0,0 +1,57 @@
+ # Security — Rules
+ # Scope: Loads when security kit is active
+ # Extends base secret-scan with deeper analysis
+
+ ## Invariants — BLOCK on violation
+
+ ### Input validation
+ - Validate ALL user input at the boundary — never trust client data
+ - Use allowlists over denylists for input validation
+ - Parameterize all database queries — no string interpolation in SQL
+ - Sanitize HTML output — prevent XSS via output encoding
+ - Validate file uploads: type, size, content (not just extension)
+
+ ### Authentication
+ - Hash passwords with bcrypt/argon2 — never MD5/SHA1
+ - Enforce minimum password complexity at the application level
+ - Implement account lockout after N failed attempts
+ - Session tokens must be cryptographically random, of sufficient length (128+ bits)
+ - Invalidate sessions on logout and password change
+
+ ### Authorization
+ - Enforce least privilege — default deny, explicit allow
+ - Check authorization on every request — never rely on client-side checks
+ - Use role-based (RBAC) or attribute-based (ABAC) access control
+ - Validate object-level access — prevent IDOR (Insecure Direct Object Reference)
+ - Log all authorization failures
+
+ ### Secrets
+ - No secrets in code, config files, or environment variable defaults
+ - Use secret managers (Vault, AWS Secrets Manager, Azure Key Vault)
+ - Rotate secrets on a schedule and on compromise
+ - Audit secret access logs
+
+ ### Transport
+ - TLS 1.2+ for all external communication
+ - HSTS headers on all HTTPS responses
+ - Certificate pinning for mobile/critical APIs
+
+ ## Conventions — WARN on violation
+
+ ### CORS
+ - Restrict `Access-Control-Allow-Origin` to specific domains — never `*` in production
+ - Limit allowed methods and headers to what's needed
+
+ ### CSP
+ - Content Security Policy headers on all HTML responses
+ - No `unsafe-inline` or `unsafe-eval` in production CSP
+
+ ### Dependencies
+ - Audit dependencies for known vulnerabilities before adding
+ - Pin dependency versions — no floating ranges in production
+ - Review transitive dependencies for supply chain risk
+
+ ### Logging
+ - Log security events: auth failures, privilege escalations, input validation failures
+ - Never log secrets, tokens, passwords, or PII
+ - Structured logging with correlation IDs for incident investigation
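The session-token invariant above (cryptographically random, 128+ bits) maps directly onto Python's stdlib `secrets` module; a minimal sketch:

```python
import secrets

# Cryptographically random session token: 32 bytes = 256 bits of entropy,
# comfortably above the 128-bit floor in the rule above.
token = secrets.token_urlsafe(32)
assert len(token) == 43  # base64url of 32 bytes, padding stripped

# Compare tokens in constant time to avoid timing side channels.
assert secrets.compare_digest(token, token)
```

`random.random()` or timestamps would violate the rule: they are predictable and unsuitable for session identifiers.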
@@ -0,0 +1,14 @@
+ #!/bin/bash
+ set -euo pipefail
+
+ echo "=== Praxis: Removing security kit ==="
+ echo ""
+ echo "This kit has no npm skills or MCP servers to remove."
+ echo "CLI tools (trivy, semgrep) are system-level and not managed by this kit."
+ echo ""
+ echo "To fully remove:"
+ echo " 1. Remove /kit:security from any project CLAUDE.md '## Active kit' sections"
+ echo " 2. Delete this directory: rm -rf ~/.claude/kits/security"
+ echo ""
+ echo "=== security kit removed ==="
+ echo ""
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@esoteric-logic/praxis-harness",
-   "version": "2.1.1",
+   "version": "2.3.0",
    "description": "Layered Claude Code harness — workflow discipline, AI-Kits, persistent vault integration",
    "bin": {
      "praxis-harness": "./bin/praxis.js"
@@ -13,7 +13,7 @@
      "scripts/"
    ],
    "scripts": {
-     "test": "node --check bin/praxis.js"
+     "test": "node --check bin/praxis.js && bash scripts/test-harness.sh"
    },
    "repository": {
      "type": "git",