prjct-cli 2.2.16 → 2.3.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1 +1 @@
- {"agentic/checklist-routing.md":"---\nallowed-tools: [Read, Glob]\ndescription: 'Determine which quality checklists to apply - Claude decides'\n---\n\n# Checklist Routing Instructions\n\n## Objective\n\nDetermine which quality checklists are relevant for a task by analyzing the ACTUAL task and its scope.\n\n## Step 1: Understand the Task\n\nRead the task description and identify:\n\n- What type of work is being done? (new feature, bug fix, refactor, infra, docs)\n- What domains are affected? (code, UI, API, database, deployment)\n- What is the scope? (small fix, major feature, architectural change)\n\n## Step 2: Consider Task Domains\n\nEach task can touch multiple domains. Consider:\n\n| Domain | Signals |\n|--------|---------|\n| Code Quality | Writing/modifying any code |\n| Architecture | New components, services, or major refactors |\n| UX/UI | User-facing changes, CLI output, visual elements |\n| Infrastructure | Deployment, containers, CI/CD, cloud resources |\n| Security | Auth, user data, external inputs, secrets |\n| Testing | New functionality, bug fixes, critical paths |\n| Documentation | Public APIs, complex features, breaking changes |\n| Performance | Data processing, loops, network calls, rendering |\n| Accessibility | User interfaces (web, mobile, CLI) |\n| Data | Database operations, caching, data transformations |\n\n## Step 3: Match Task to Checklists\n\nBased on your analysis, select relevant checklists:\n\n**DO NOT assume:**\n- Every task needs all checklists\n- \"Frontend\" = only UX checklist\n- \"Backend\" = only Code Quality checklist\n\n**DO analyze:**\n- What the task actually touches\n- What quality dimensions matter for this specific work\n- What could go wrong if not checked\n\n## Available Checklists\n\nLocated in `templates/checklists/`:\n\n| Checklist | When to Apply |\n|-----------|---------------|\n| `code-quality.md` | Any code changes (any language) |\n| `architecture.md` | New modules, services, significant structural 
changes |\n| `ux-ui.md` | User-facing interfaces (web, mobile, CLI, API DX) |\n| `infrastructure.md` | Deployment, containers, CI/CD, cloud resources |\n| `security.md` | ALWAYS for: auth, user input, external APIs, secrets |\n| `testing.md` | New features, bug fixes, refactors |\n| `documentation.md` | Public APIs, complex features, configuration changes |\n| `performance.md` | Data-intensive operations, critical paths |\n| `accessibility.md` | Any user interface work |\n| `data.md` | Database, caching, data transformations |\n\n## Decision Process\n\n1. Read task description\n2. Identify primary work domain\n3. List secondary domains affected\n4. Select 2-4 most relevant checklists\n5. Consider Security (almost always relevant)\n\n## Output\n\nReturn selected checklists with reasoning:\n\n```json\n{\n \"checklists\": [\"code-quality\", \"security\", \"testing\"],\n \"reasoning\": \"Task involves new API endpoint (code), handles user input (security), and adds business logic (testing)\",\n \"priority_items\": [\"Input validation\", \"Error handling\", \"Happy path tests\"],\n \"skipped\": {\n \"accessibility\": \"No user interface changes\",\n \"infrastructure\": \"No deployment changes\"\n }\n}\n```\n\n## Rules\n\n- **Task-driven** - Focus on what the specific task needs\n- **Less is more** - 2-4 focused checklists beat 10 unfocused\n- **Security is special** - Default to including unless clearly irrelevant\n- **Explain your reasoning** - Don't just pick, justify selections AND skips\n- **Context matters** - Small typo fix ≠ major refactor in checklist needs\n","agentic/orchestrator.md":"# Orchestrator\n\nLoad project context for task execution.\n\n## Flow\n\n```\np. 
{command} → Load Config → Load State → Load Agents → Execute\n```\n\n## Step 1: Load Config\n\n```\nREAD: .prjct/prjct.config.json → {projectId}\nSET: {globalPath} = ~/.prjct-cli/projects/{projectId}\n```\n\n## Step 2: Load State\n\n```bash\nprjct dash compact\n# Parse output to determine: {hasActiveTask}\n```\n\n## Step 3: Load Agents\n\n```\nGLOB: {globalPath}/agents/*.md\nFOR EACH agent: READ and store content\n```\n\n## Step 4: Detect Domains\n\nAnalyze task → identify domains:\n- frontend: UI, forms, components\n- backend: API, server logic\n- database: Schema, queries\n- testing: Tests, mocks\n- devops: CI/CD, deployment\n\nIF task spans 3+ domains → fragment into subtasks\n\n## Step 5: Build Context\n\nCombine: state + agents + detected domains → execute\n\n## Output Format\n\n```\n🎯 Task: {description}\n📦 Context: Agent: {name} | State: {status} | Domains: {list}\n```\n\n## Error Handling\n\n| Situation | Action |\n|-----------|--------|\n| No config | \"Run `p. init` first\" |\n| No state | Create default |\n| No agents | Warn, continue |\n\n## Disable\n\n```yaml\n---\norchestrator: false\n---\n```\n","agentic/task-fragmentation.md":"# Task Fragmentation\n\nBreak complex multi-domain tasks into subtasks.\n\n## When to Fragment\n\n- Spans 3+ domains (frontend + backend + database)\n- Has natural dependency order\n- Too large for single execution\n\n## When NOT to Fragment\n\n- Single domain only\n- Small, focused change\n- Already atomic\n\n## Dependency Order\n\n1. **Database** (models first)\n2. **Backend** (API using models)\n3. **Frontend** (UI using API)\n4. **Testing** (tests for all)\n5. **DevOps** (deploy)\n\n## Subtask Format\n\n```json\n{\n \"subtasks\": [{\n \"id\": \"subtask-1\",\n \"description\": \"Create users table\",\n \"domain\": \"database\",\n \"agent\": \"database.md\",\n \"dependsOn\": []\n }]\n}\n```\n\n## Output\n\n```\n🎯 Task: {task}\n\n📋 Subtasks:\n├─ 1. [database] Create schema\n├─ 2. [backend] Create API\n└─ 3. 
[frontend] Create form\n```\n\n## Delegation\n\n```\nTask(\n subagent_type: 'general-purpose',\n prompt: '\n Read: {agentsPath}/{domain}.md\n Subtask: {description}\n Previous: {previousSummary}\n Focus ONLY on this subtask.\n '\n)\n```\n\n## Progress\n\n```\n📊 Progress: 2/4 (50%)\n✅ 1. [database] Done\n✅ 2. [backend] Done\n▶️ 3. [frontend] ← CURRENT\n⏳ 4. [testing]\n```\n\n## Error Handling\n\n```\n❌ Subtask 2/4 failed\n\nOptions:\n1. Retry\n2. Skip and continue\n3. Abort\n```\n\n## Anti-Patterns\n\n- Over-fragmentation: 10 subtasks for \"add button\"\n- Under-fragmentation: 1 subtask for \"add auth system\"\n- Wrong order: Frontend before backend\n","agents/AGENTS.md":"# AGENTS.md\n\nAI assistant guidance for **prjct-cli** - context layer for AI coding agents. Works with Claude Code, Gemini CLI, and more.\n\n## What This Is\n\n**NOT** project management. NO sprints, story points, ceremonies, or meetings.\n\n**IS** a context layer that gives AI agents the project knowledge they need to work effectively.\n\n---\n\n## Dynamic Agent Generation\n\nGenerate agents during `p. sync` based on analysis:\n\n```javascript\nawait generator.generateDynamicAgent('agent-name', {\n role: 'Role Description',\n expertise: 'Technologies, versions, tools',\n responsibilities: 'What they handle'\n})\n```\n\n### Guidelines\n1. Read `analysis/repo-summary.md` first\n2. Create specialists for each major technology\n3. Name descriptively: `go-backend` not `be`\n4. Include versions and frameworks found\n5. Follow project-specific patterns\n\n## Architecture\n\n**Global**: `~/.prjct-cli/projects/{id}/`\n```\nprjct.db # SQLite database (all state)\ncontext/ # now.md, next.md\nagents/ # domain specialists\n```\n\n**Local**: `.prjct/prjct.config.json` (read-only)\n\n## Commands\n\n| Command | Action |\n|---------|--------|\n| `p. init` | Initialize |\n| `p. sync` | Analyze + generate agents |\n| `p. task X` | Start task |\n| `p. done` | Complete subtask |\n| `p. ship` | Ship feature |\n| `p. 
next` | Show queue |\n\n## Intent Detection\n\n| Intent | Command |\n|--------|---------|\n| Start task | `p. task` |\n| Finish | `p. done` |\n| Ship | `p. ship` |\n| What's next | `p. next` |\n\n## Implementation\n\n- Atomic operations via `prjct` CLI\n- CLI handles all state persistence (SQLite)\n- Handle missing config gracefully\n","antigravity/SKILL.md":"---\nname: prjct\ndescription: Use when user mentions p., prjct, task tracking, or workflow commands.\n---\n\n# prjct — Context layer for AI agents\n\nGrammar: `p. <command> [args]` or `prjct <command> --md`\n\nCore commands: sync, task, done, ship, pause, resume, next, bug, workflow, tokens\nIntegrations: linear, jira, enrich\nOther: run `prjct <command> --md` and follow CLI output\n\nFlow: idea → roadmap → next → task → done → ship → next (cycle until plan complete)\n\nRules:\n- prjct runs → LLM generates relevant data → prjct stores it → LLM requests it from prjct → LLM uses it\n- All commits include footer: `Generated with [p/](https://www.prjct.app/)`\n- All storage through `prjct` CLI (SQLite internally)\n- Start code tasks with `p. 
task` and follow Context Contract from CLI output\n","checklists/architecture.md":"# Architecture Checklist\n\n> Applies to ANY system architecture\n\n## Design Principles\n- [ ] Clear separation of concerns\n- [ ] Loose coupling between components\n- [ ] High cohesion within modules\n- [ ] Single source of truth for data\n- [ ] Explicit dependencies (no hidden coupling)\n\n## Scalability\n- [ ] Stateless where possible\n- [ ] Horizontal scaling considered\n- [ ] Bottlenecks identified\n- [ ] Caching strategy defined\n\n## Resilience\n- [ ] Failure modes documented\n- [ ] Graceful degradation planned\n- [ ] Recovery procedures defined\n- [ ] Circuit breakers where needed\n\n## Maintainability\n- [ ] Clear boundaries between layers\n- [ ] Easy to test in isolation\n- [ ] Configuration externalized\n- [ ] Logging and observability built-in\n","checklists/code-quality.md":"# Code Quality Checklist\n\n> Universal principles for ANY programming language\n\n## Universal Principles\n- [ ] Single Responsibility: Each unit does ONE thing well\n- [ ] DRY: No duplicated logic (extract shared code)\n- [ ] KISS: Simplest solution that works\n- [ ] Clear naming: Self-documenting identifiers\n- [ ] Consistent patterns: Match existing codebase style\n\n## Error Handling\n- [ ] All error paths handled gracefully\n- [ ] Meaningful error messages\n- [ ] No silent failures\n- [ ] Proper resource cleanup (files, connections, memory)\n\n## Edge Cases\n- [ ] Null/nil/None handling\n- [ ] Empty collections handled\n- [ ] Boundary conditions tested\n- [ ] Invalid input rejected early\n\n## Code Organization\n- [ ] Functions/methods are small and focused\n- [ ] Related code grouped together\n- [ ] Clear module/package boundaries\n- [ ] No circular dependencies\n","checklists/data.md":"# Data Checklist\n\n> Applies to: SQL, NoSQL, GraphQL, File storage, Caching\n\n## Data Integrity\n- [ ] Schema/structure defined\n- [ ] Constraints enforced\n- [ ] Transactions used appropriately\n- [ ] 
Referential integrity maintained\n\n## Query Performance\n- [ ] Indexes on frequent queries\n- [ ] N+1 queries eliminated\n- [ ] Query complexity analyzed\n- [ ] Pagination for large datasets\n\n## Data Operations\n- [ ] Migrations versioned and reversible\n- [ ] Backup and restore tested\n- [ ] Data validation at boundary\n- [ ] Soft deletes considered (if applicable)\n\n## Caching\n- [ ] Cache invalidation strategy defined\n- [ ] TTL values appropriate\n- [ ] Cache warming considered\n- [ ] Cache hit/miss monitored\n\n## Data Privacy\n- [ ] PII identified and protected\n- [ ] Data anonymization where needed\n- [ ] Audit trail for sensitive data\n- [ ] Data deletion procedures defined\n","checklists/documentation.md":"# Documentation Checklist\n\n> Applies to ALL projects\n\n## Essential Docs\n- [ ] README with quick start\n- [ ] Installation instructions\n- [ ] Configuration options documented\n- [ ] Common use cases shown\n\n## Code Documentation\n- [ ] Public APIs documented\n- [ ] Complex logic explained\n- [ ] Architecture decisions recorded (ADRs)\n- [ ] Diagrams for complex flows\n\n## Operational Docs\n- [ ] Deployment process documented\n- [ ] Troubleshooting guide\n- [ ] Runbooks for common issues\n- [ ] Changelog maintained\n\n## API Documentation\n- [ ] All endpoints documented\n- [ ] Request/response examples\n- [ ] Error codes explained\n- [ ] Authentication documented\n\n## Maintenance\n- [ ] Docs updated with code changes\n- [ ] Version-specific documentation\n- [ ] Broken links checked\n- [ ] Examples tested and working\n","checklists/infrastructure.md":"# Infrastructure Checklist\n\n> Applies to: Cloud, On-prem, Hybrid, Edge\n\n## Deployment\n- [ ] Infrastructure as Code (Terraform, Pulumi, CloudFormation, etc.)\n- [ ] Reproducible environments\n- [ ] Rollback strategy defined\n- [ ] Blue-green or canary deployment option\n\n## Observability\n- [ ] Logging strategy defined\n- [ ] Metrics collection configured\n- [ ] Alerting thresholds set\n- [ ] 
Distributed tracing (if applicable)\n\n## Security\n- [ ] Secrets management (not in code)\n- [ ] Network segmentation\n- [ ] Least privilege access\n- [ ] Encryption at rest and in transit\n\n## Reliability\n- [ ] Backup strategy defined\n- [ ] Disaster recovery plan\n- [ ] Health checks configured\n- [ ] Auto-scaling rules (if applicable)\n\n## Cost Management\n- [ ] Resource sizing appropriate\n- [ ] Unused resources identified\n- [ ] Cost monitoring in place\n- [ ] Budget alerts configured\n","checklists/performance.md":"# Performance Checklist\n\n> Applies to: Backend, Frontend, Mobile, Database\n\n## Analysis\n- [ ] Bottlenecks identified with profiling\n- [ ] Baseline metrics established\n- [ ] Performance budgets defined\n- [ ] Benchmarks before/after changes\n\n## Optimization Strategies\n- [ ] Algorithmic complexity reviewed (O(n) vs O(n²))\n- [ ] Appropriate data structures used\n- [ ] Caching implemented where beneficial\n- [ ] Lazy loading for expensive operations\n\n## Resource Management\n- [ ] Memory usage optimized\n- [ ] Connection pooling used\n- [ ] Batch operations where applicable\n- [ ] Async/parallel processing considered\n\n## Frontend Specific\n- [ ] Bundle size optimized\n- [ ] Images optimized\n- [ ] Critical rendering path optimized\n- [ ] Network requests minimized\n\n## Backend Specific\n- [ ] Database queries optimized\n- [ ] Response compression enabled\n- [ ] Proper indexing in place\n- [ ] Connection limits configured\n","checklists/security.md":"# Security Checklist\n\n> ALWAYS ON - Applies to ALL applications\n\n## Input/Output\n- [ ] All user input validated and sanitized\n- [ ] Output encoded appropriately (prevent injection)\n- [ ] File uploads restricted and scanned\n- [ ] No sensitive data in logs or error messages\n\n## Authentication & Authorization\n- [ ] Strong authentication mechanism\n- [ ] Proper session management\n- [ ] Authorization checked at every access point\n- [ ] Principle of least privilege applied\n\n## 
Data Protection\n- [ ] Sensitive data encrypted at rest\n- [ ] Secure transmission (TLS/HTTPS)\n- [ ] PII handled according to regulations\n- [ ] Data retention policies followed\n\n## Dependencies\n- [ ] Dependencies from trusted sources\n- [ ] Known vulnerabilities checked\n- [ ] Minimal dependency surface\n- [ ] Regular security updates planned\n\n## API Security\n- [ ] Rate limiting implemented\n- [ ] Authentication required for sensitive endpoints\n- [ ] CORS properly configured\n- [ ] API keys/tokens secured\n","checklists/testing.md":"# Testing Checklist\n\n> Applies to: Unit, Integration, E2E, Performance testing\n\n## Coverage Strategy\n- [ ] Critical paths have high coverage\n- [ ] Happy path tested\n- [ ] Error paths tested\n- [ ] Edge cases covered\n\n## Test Quality\n- [ ] Tests are deterministic (no flaky tests)\n- [ ] Tests are independent (no order dependency)\n- [ ] Tests are fast (optimize slow tests)\n- [ ] Tests are readable (clear intent)\n\n## Test Types\n- [ ] Unit tests for business logic\n- [ ] Integration tests for boundaries\n- [ ] E2E tests for critical flows\n- [ ] Performance tests for bottlenecks\n\n## Mocking Strategy\n- [ ] External services mocked\n- [ ] Database isolated or mocked\n- [ ] Time-dependent code controlled\n- [ ] Random values seeded\n\n## Test Maintenance\n- [ ] Tests updated with code changes\n- [ ] Dead tests removed\n- [ ] Test data managed properly\n- [ ] CI/CD integration working\n","checklists/ux-ui.md":"# UX/UI Checklist\n\n> Applies to: Web, Mobile, CLI, Desktop, API DX\n\n## User Experience\n- [ ] Clear user journey/flow\n- [ ] Feedback for every action\n- [ ] Loading states shown\n- [ ] Error states handled gracefully\n- [ ] Success confirmation provided\n\n## Interface Design\n- [ ] Consistent visual language\n- [ ] Intuitive navigation\n- [ ] Responsive/adaptive layout (if applicable)\n- [ ] Touch targets adequate (mobile)\n- [ ] Keyboard navigation (web/desktop)\n\n## CLI Specific\n- [ ] Help text for all 
commands\n- [ ] Clear error messages with suggestions\n- [ ] Progress indicators for long operations\n- [ ] Consistent flag naming conventions\n- [ ] Exit codes meaningful\n\n## API DX (Developer Experience)\n- [ ] Intuitive endpoint/function naming\n- [ ] Consistent response format\n- [ ] Helpful error messages with codes\n- [ ] Good documentation with examples\n- [ ] Predictable behavior\n\n## Information Architecture\n- [ ] Content hierarchy clear\n- [ ] Important actions prominent\n- [ ] Related items grouped\n- [ ] Search/filter for large datasets\n","codex/SKILL.md":"---\nname: prjct\ndescription: Use when user mentions p., prjct, task tracking, or workflow commands.\n---\n\n# prjct — Context layer for AI agents\n\nGrammar: `p. <command> [args]` or `prjct <command> --md`\n\nCore commands: sync, task, done, ship, pause, resume, next, bug, workflow, tokens\nIntegrations: linear, jira, enrich\nOther: run `prjct <command> --md` and follow CLI output\n\nFlow: idea → roadmap → next → task → done → ship → next (cycle until plan complete)\n\nRules:\n- prjct runs → LLM generates relevant data → prjct stores it → LLM requests it from prjct → LLM uses it\n- All commits include footer: `Generated with [p/](https://www.prjct.app/)`\n- All storage through `prjct` CLI (SQLite internally)\n- Start code tasks with `p. 
task` and follow Context Contract from CLI output\n","config/skill-mappings.json":"{\n \"version\": \"3.0.0\",\n \"description\": \"Skill packages from skills.sh for auto-installation during sync\",\n \"sources\": {\n \"primary\": {\n \"name\": \"skills.sh\",\n \"url\": \"https://skills.sh\",\n \"installCmd\": \"npx skills add {package}\"\n },\n \"fallback\": {\n \"name\": \"GitHub direct\",\n \"installFormat\": \"owner/repo\"\n }\n },\n \"skillsDirectory\": \"~/.claude/skills/\",\n \"skillFormat\": {\n \"required\": [\"name\", \"description\"],\n \"optional\": [\"license\", \"compatibility\", \"metadata\", \"allowed-tools\"],\n \"fileStructure\": {\n \"required\": \"SKILL.md\",\n \"optional\": [\"scripts/\", \"references/\", \"assets/\"]\n }\n },\n \"agentToSkillMap\": {\n \"frontend\": {\n \"packages\": [\n \"anthropics/skills/frontend-design\",\n \"vercel-labs/agent-skills/vercel-react-best-practices\"\n ]\n },\n \"uxui\": {\n \"packages\": [\"anthropics/skills/frontend-design\"]\n },\n \"backend\": {\n \"packages\": [\"obra/superpowers/systematic-debugging\"]\n },\n \"database\": {\n \"packages\": []\n },\n \"testing\": {\n \"packages\": [\"obra/superpowers/test-driven-development\", \"anthropics/skills/webapp-testing\"]\n },\n \"devops\": {\n \"packages\": [\"anthropics/skills/mcp-builder\"]\n },\n \"prjct-planner\": {\n \"packages\": [\"obra/superpowers/brainstorming\"]\n },\n \"prjct-shipper\": {\n \"packages\": []\n },\n \"prjct-workflow\": {\n \"packages\": []\n }\n },\n \"documentSkills\": {\n \"note\": \"Official Anthropic document creation skills\",\n \"source\": \"anthropics/skills\",\n \"skills\": {\n \"pdf\": {\n \"name\": \"pdf\",\n \"description\": \"Create and edit PDF documents\",\n \"path\": \"skills/pdf\"\n },\n \"docx\": {\n \"name\": \"docx\",\n \"description\": \"Create and edit Word documents\",\n \"path\": \"skills/docx\"\n },\n \"pptx\": {\n \"name\": \"pptx\",\n \"description\": \"Create PowerPoint presentations\",\n \"path\": 
\"skills/pptx\"\n },\n \"xlsx\": {\n \"name\": \"xlsx\",\n \"description\": \"Create Excel spreadsheets\",\n \"path\": \"skills/xlsx\"\n }\n }\n }\n}\n","context/dashboard.md":"---\ndescription: 'Template for generated dashboard context'\ngenerated-by: 'p. dashboard'\ndata-source: 'prjct.db (SQLite)'\n---\n\n# Dashboard Context Template\n\nThis template defines the format for `{globalPath}/context/dashboard.md` generated by `p. dashboard`.\n\n---\n\n## Template\n\n```markdown\n# Dashboard\n\n**Project:** {projectName}\n**Generated:** {timestamp}\n\n---\n\n## Health Score\n\n**Overall:** {healthScore}/100\n\n| Component | Score | Weight | Contribution |\n|-----------|-------|--------|--------------|\n| Roadmap Progress | {roadmapScore}/100 | 25% | {roadmapContribution} |\n| Estimation Accuracy | {estimationScore}/100 | 25% | {estimationContribution} |\n| Success Rate | {successScore}/100 | 25% | {successContribution} |\n| Velocity Trend | {velocityScore}/100 | 25% | {velocityContribution} |\n\n---\n\n## Quick Stats\n\n| Metric | Value | Trend |\n|--------|-------|-------|\n| Features Shipped | {shippedCount} | {shippedTrend} |\n| PRDs Created | {prdCount} | {prdTrend} |\n| Avg Cycle Time | {avgCycleTime}d | {cycleTrend} |\n| Estimation Accuracy | {estimationAccuracy}% | {accuracyTrend} |\n| Success Rate | {successRate}% | {successTrend} |\n| ROI Score | {avgROI} | {roiTrend} |\n\n---\n\n## Active Quarter: {activeQuarter.id}\n\n**Theme:** {activeQuarter.theme}\n**Status:** {activeQuarter.status}\n\n### Progress\n\n```\nFeatures: {featureBar} {quarterFeatureProgress}%\nCapacity: {capacityBar} {capacityUtilization}%\nTimeline: {timelineBar} {timelineProgress}%\n```\n\n### Features\n\n| Feature | Status | Progress | Owner |\n|---------|--------|----------|-------|\n{FOR EACH feature in quarterFeatures:}\n| {feature.name} | {statusEmoji(feature.status)} | {feature.progress}% | {feature.agent || '-'} |\n{END FOR}\n\n---\n\n## Current Work\n\n### Active Task\n{IF 
currentTask:}\n**{currentTask.description}**\n\n- Type: {currentTask.type}\n- Started: {currentTask.startedAt}\n- Elapsed: {elapsed}\n- Branch: {currentTask.branch?.name || 'N/A'}\n\nSubtasks: {completedSubtasks}/{totalSubtasks}\n{ELSE:}\n*No active task*\n{END IF}\n\n### In Progress Features\n\n{FOR EACH feature in activeFeatures:}\n#### {feature.name}\n\n- Progress: {progressBar(feature.progress)} {feature.progress}%\n- Quarter: {feature.quarter || 'Unassigned'}\n- PRD: {feature.prdId || 'None'}\n- Dependencies: {feature.dependencies?.join(', ') || 'None'}\n{END FOR}\n\n---\n\n## Pipeline\n\n```\nPRDs Features Active Shipped\n┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐\n│ Draft │──▶│ Planned │──▶│ Active │──▶│ Shipped │\n│ ({draft}) │ │ ({planned}) │ │ ({active}) │ │ ({shipped}) │\n└─────────┘ └─────────┘ └─────────┘ └─────────┘\n │\n ▼\n┌─────────┐\n│Approved │\n│ ({approved}) │\n└─────────┘\n```\n\n---\n\n## Metrics Trends (Last 4 Weeks)\n\n### Velocity\n```\nW-3: {velocityW3Bar} {velocityW3}\nW-2: {velocityW2Bar} {velocityW2}\nW-1: {velocityW1Bar} {velocityW1}\nW-0: {velocityW0Bar} {velocityW0}\n```\n\n### Estimation Accuracy\n```\nW-3: {accuracyW3Bar} {accuracyW3}%\nW-2: {accuracyW2Bar} {accuracyW2}%\nW-1: {accuracyW1Bar} {accuracyW1}%\nW-0: {accuracyW0Bar} {accuracyW0}%\n```\n\n---\n\n## Alerts & Actions\n\n### Warnings\n{FOR EACH alert in alerts:}\n- {alert.icon} {alert.message}\n{END FOR}\n\n### Suggested Actions\n{FOR EACH action in suggestedActions:}\n1. 
{action.description}\n - Command: `{action.command}`\n{END FOR}\n\n---\n\n## Recent Activity\n\n| Date | Action | Details |\n|------|--------|---------|\n{FOR EACH event in recentEvents.slice(0, 10):}\n| {event.date} | {event.action} | {event.details} |\n{END FOR}\n\n---\n\n## Learnings Summary\n\n### Top Patterns\n{FOR EACH pattern in topPatterns.slice(0, 5):}\n- {pattern.insight} ({pattern.frequency}x)\n{END FOR}\n\n### Improvement Areas\n{FOR EACH area in improvementAreas:}\n- **{area.name}**: {area.suggestion}\n{END FOR}\n\n---\n\n*Generated by prjct-cli | https://prjct.app*\n```\n\n---\n\n## Health Score Calculation\n\n```javascript\nconst healthScore = Math.round(\n (roadmapProgress * 0.25) +\n (estimationAccuracy * 0.25) +\n (successRate * 0.25) +\n (normalizedVelocity * 0.25)\n)\n```\n\n| Score Range | Health Level | Color |\n|-------------|--------------|-------|\n| 80-100 | Excellent | Green |\n| 60-79 | Good | Blue |\n| 40-59 | Needs Attention | Yellow |\n| 0-39 | Critical | Red |\n\n---\n\n## Alert Definitions\n\n| Condition | Alert | Severity |\n|-----------|-------|----------|\n| `capacityUtilization > 90` | Quarter capacity nearly full | Warning |\n| `estimationAccuracy < 60` | Estimation accuracy below target | Warning |\n| `activeFeatures.length > 3` | Too many features in progress | Info |\n| `draftPRDs.length > 3` | PRDs awaiting review | Info |\n| `successRate < 70` | Success rate declining | Warning |\n| `velocityTrend < -20` | Velocity dropping | Warning |\n| `currentTask && elapsed > 4h` | Task running long | Info |\n\n---\n\n## Suggested Actions Matrix\n\n| Condition | Suggested Action | Command |\n|-----------|------------------|---------|\n| No active task | Start a task | `p. task` |\n| PRDs in draft | Review PRDs | `p. prd list` |\n| Features pending review | Record impact | `p. impact` |\n| Quarter ending soon | Plan next quarter | `p. plan quarter` |\n| Low estimation accuracy | Analyze estimates | `p. 
dashboard estimates` |\n\n---\n\n## Integration with Linear/Jira/Monday\n\nThe dashboard context maps to PM tool dashboards:\n\n| Dashboard Section | Linear | Jira | Monday |\n|-------------------|--------|------|--------|\n| Health Score | Project Health | Dashboard Gadget | Board Overview |\n| Active Quarter | Cycle | Sprint | Timeline |\n| Pipeline | Workflow Board | Kanban | Board |\n| Velocity | Velocity Chart | Velocity Report | Chart Widget |\n| Alerts | Notifications | Issues | Notifications |\n\n---\n\n## Refresh Frequency\n\n| Data Type | Refresh Trigger |\n|-----------|-----------------|\n| Current Task | Real-time (on state change) |\n| Features | On feature status change |\n| Metrics | On `p. dashboard` execution |\n| Aggregates | On `p. impact` completion |\n| Alerts | Calculated on view |\n","context/roadmap.md":"---\ndescription: 'Template for generated roadmap context'\ngenerated-by: 'p. plan, p. sync'\ndata-source: 'prjct.db (SQLite)'\n---\n\n# Roadmap Context Template\n\nThis template defines the format for `{globalPath}/context/roadmap.md` generated by:\n- `p. plan` - After quarter planning\n- `p. 
sync` - After roadmap generation from git\n\n---\n\n## Template\n\n```markdown\n# Roadmap\n\n**Last Updated:** {lastUpdated}\n\n---\n\n## Strategy\n\n**Goal:** {strategy.goal}\n\n### Phases\n{FOR EACH phase in strategy.phases:}\n- **{phase.id}**: {phase.name} ({phase.status})\n{END FOR}\n\n### Success Metrics\n{FOR EACH metric in strategy.successMetrics:}\n- {metric}\n{END FOR}\n\n---\n\n## Quarters\n\n{FOR EACH quarter in quarters:}\n### {quarter.id}: {quarter.name}\n\n**Status:** {quarter.status}\n**Theme:** {quarter.theme}\n**Capacity:** {capacity.allocatedHours}/{capacity.totalHours}h ({utilization}%)\n\n#### Goals\n{FOR EACH goal in quarter.goals:}\n- {goal}\n{END FOR}\n\n#### Features\n{FOR EACH featureId in quarter.features:}\n- [{status icon}] **{feature.name}** ({feature.status}, {feature.progress}%)\n - PRD: {feature.prdId || 'None (legacy)'}\n - Estimated: {feature.effortTracking?.estimated?.hours || '?'}h\n - Value Score: {feature.valueScore || 'N/A'}\n - Dependencies: {feature.dependencies?.join(', ') || 'None'}\n{END FOR}\n\n{END FOR}\n\n---\n\n## Active Work\n\n{FOR EACH feature WHERE status == 'active':}\n### {feature.name}\n\n| Attribute | Value |\n|-----------|-------|\n| Progress | {feature.progress}% |\n| Branch | {feature.branch || 'N/A'} |\n| Quarter | {feature.quarter || 'Unassigned'} |\n| PRD | {feature.prdId || 'Legacy (no PRD)'} |\n| Started | {feature.createdAt} |\n\n#### Tasks\n{FOR EACH task in feature.tasks:}\n- [{task.completed ? 
'x' : ' '}] {task.description}\n{END FOR}\n\n{END FOR}\n\n---\n\n## Completed Features\n\n{FOR EACH feature WHERE status == 'completed' OR status == 'shipped':}\n- **{feature.name}** (v{feature.version || 'N/A'})\n - Shipped: {feature.shippedAt || feature.completedDate}\n - Actual: {feature.effortTracking?.actual?.hours || '?'}h vs Est: {feature.effortTracking?.estimated?.hours || '?'}h\n{END FOR}\n\n---\n\n## Backlog\n\nPriority-ordered list of unscheduled items:\n\n| Priority | Item | Value | Effort | Score |\n|----------|------|-------|--------|-------|\n{FOR EACH item in backlog:}\n| {rank} | {item.title} | {item.valueScore} | {item.effortEstimate}h | {priorityScore} |\n{END FOR}\n\n---\n\n## Legacy Features\n\nFeatures detected from git history (no PRD required):\n\n{FOR EACH feature WHERE legacy == true:}\n- **{feature.name}**\n - Inferred From: {feature.inferredFrom}\n - Status: {feature.status}\n - Commits: {feature.commits?.length || 0}\n{END FOR}\n\n---\n\n## Dependencies\n\n```\n{FOR EACH feature WHERE dependencies?.length > 0:}\n{feature.name}\n{FOR EACH depId in feature.dependencies:}\n └── {dependency.name}\n{END FOR}\n{END FOR}\n```\n\n---\n\n## Metrics Summary\n\n| Metric | Value |\n|--------|-------|\n| Total Features | {features.length} |\n| Planned | {planned.length} |\n| Active | {active.length} |\n| Completed | {completed.length} |\n| Shipped | {shipped.length} |\n| Legacy | {legacy.length} |\n| PRD-Backed | {prdBacked.length} |\n| Backlog | {backlog.length} |\n\n### Capacity by Quarter\n\n| Quarter | Allocated | Total | Utilization |\n|---------|-----------|-------|-------------|\n{FOR EACH quarter in quarters:}\n| {quarter.id} | {capacity.allocatedHours}h | {capacity.totalHours}h | {utilization}% |\n{END FOR}\n\n### Effort Accuracy (Shipped Features)\n\n| Feature | Estimated | Actual | Variance |\n|---------|-----------|--------|----------|\n{FOR EACH feature WHERE status == 'shipped' AND effortTracking:}\n| {feature.name} | 
{estimated.hours}h | {actual.hours}h | {variance}% |\n{END FOR}\n\n**Average Variance:** {averageVariance}%\n\n---\n\n*Generated by prjct-cli | https://prjct.app*\n```\n\n---\n\n## Status Icons\n\n| Status | Icon |\n|--------|------|\n| planned | [ ] |\n| active | [~] |\n| completed | [x] |\n| shipped | [+] |\n\n---\n\n## Variable Reference\n\n| Variable | Source | Description |\n|----------|--------|-------------|\n| `lastUpdated` | roadmap.lastUpdated | ISO timestamp |\n| `strategy` | roadmap.strategy | Strategy object |\n| `quarters` | roadmap.quarters | Array of quarters |\n| `features` | roadmap.features | Array of features |\n| `backlog` | roadmap.backlog | Array of backlog items |\n| `utilization` | Calculated | (allocated/total) * 100 |\n| `priorityScore` | Calculated | valueScore / (effort/10) |\n\n---\n\n## Generation Rules\n\n1. **Quarters** - Show only `planned` and `active` quarters by default\n2. **Features** - Group by status (active first, then planned)\n3. **Backlog** - Sort by priority score (descending)\n4. **Legacy** - Always show separately to distinguish from PRD-backed\n5. **Dependencies** - Only show features with dependencies\n6. **Metrics** - Always include for dashboard views\n\n---\n\n## Integration with Linear/Jira/Monday\n\nThe context file maps to PM tool exports:\n\n| Context Section | Linear | Jira | Monday |\n|-----------------|--------|------|--------|\n| Quarters | Cycles | Sprints | Timelines |\n| Features | Issues | Stories | Items |\n| Backlog | Backlog | Backlog | Inbox |\n| Status | State | Status | Status |\n| Capacity | Estimates | Story Points | Time |\n","cursor/commands/ship.md":"# /ship - Ship feature\n\n**ARGUMENTS**: {{args}}\n\nRun: `prjct ship {{args}} --md`\nFollow CLI output.\n","cursor/commands/task.md":"# /task - Start a task\n\n**ARGUMENTS**: {{args}}\n\nRun: `prjct task {{args}} --md`\nFollow CLI output.\n","cursor/p.md":"# p. 
Command Router for Cursor IDE\n\n**ARGUMENTS**: {{args}}\n\nRun: `prjct {{first_word_of_args}} {{rest_of_args}} --md`\nFollow CLI output.\n","cursor/router.mdc":"---\ndescription: \"prjct - Context layer for AI coding agents\"\nalwaysApply: true\n---\n\n# prjct\n\nCore: /sync, /task, /done, /ship, /pause, /resume, /next, /bug, /workflow\nOther: run `prjct <command> --md` and follow CLI output\n","global/ANTIGRAVITY.md":"<!-- prjct:start - DO NOT REMOVE THIS MARKER -->\n# p/ — Context layer for AI agents\n\nSkills auto-activate for: task, done, pause, resume, ship, next, sync, bug, workflow, enrich, linear, jira, plan, velocity, tokens\nOther commands: run `prjct <command> --md` and follow CLI output\n\nFlow: idea → roadmap → next → task → done → ship → next (cycle until plan complete)\n\nData:\n- prjct runs → LLM generates relevant data → prjct stores it → LLM requests it from prjct → LLM uses it\n- Commit footer: `Generated with [p/](https://www.prjct.app/)`\n- Path resolution: `.prjct/prjct.config.json` → `~/.prjct-cli/projects/{projectId}`\n- Storage: `prjct` CLI (SQLite internally)\n\n**Auto-managed by prjct-cli** | https://prjct.app\n<!-- prjct:end - DO NOT REMOVE THIS MARKER -->\n","global/CURSOR.mdc":"---\ndescription: \"prjct - Context layer for AI coding agents\"\nalwaysApply: true\n---\n\n<!-- prjct:start - DO NOT REMOVE THIS MARKER -->\n# p/ — Context layer for AI agents\n\nSkills auto-activate for: task, done, pause, resume, ship, next, sync, bug, workflow, enrich, linear, jira, plan, velocity, tokens\nOther commands: run `prjct <command> --md` and follow CLI output\n\nFlow: idea → roadmap → next → task → done → ship → next (cycle until plan complete)\n\nData:\n- prjct runs → LLM generates relevant data → prjct stores it → LLM requests it from prjct → LLM uses it\n- Commit footer: `Generated with [p/](https://www.prjct.app/)`\n- Path resolution: `.prjct/prjct.config.json` → `~/.prjct-cli/projects/{projectId}`\n- Storage: `prjct` CLI (SQLite 
internally)\n\n**Auto-managed by prjct-cli** | https://prjct.app\n<!-- prjct:end - DO NOT REMOVE THIS MARKER -->\n","global/GEMINI.md":"<!-- prjct:start - DO NOT REMOVE THIS MARKER -->\n# p/ — Context layer for AI agents\n\nSkills auto-activate for: task, done, pause, resume, ship, next, sync, bug, workflow, enrich, linear, jira, plan, velocity, tokens\nOther commands: run `prjct <command> --md` and follow CLI output\n\nFlow: idea → roadmap → next → task → done → ship → next (cycle until plan complete)\n\nData:\n- prjct runs → LLM generates relevant data → prjct stores it → LLM requests it from prjct → LLM uses it\n- Commit footer: `Generated with [p/](https://www.prjct.app/)`\n- Path resolution: `.prjct/prjct.config.json` → `~/.prjct-cli/projects/{projectId}`\n- Storage: `prjct` CLI (SQLite internally)\n\n**Auto-managed by prjct-cli** | https://prjct.app\n<!-- prjct:end - DO NOT REMOVE THIS MARKER -->\n","global/STORAGE-SPEC.md":"# Storage Specification\n\n**Canonical specification for prjct storage format.**\n\nAll storage is managed by the `prjct` CLI which uses SQLite (`prjct.db`) internally. **NEVER read or write JSON storage files directly. 
Use `prjct` CLI commands for all storage operations.**\n\n---\n\n## Current Storage: SQLite (prjct.db)\n\nAll reads and writes go through the `prjct` CLI, which manages a SQLite database (`prjct.db`) with WAL mode for safe concurrent access.\n\n```\n~/.prjct-cli/projects/{projectId}/\n├── prjct.db # SQLite database (SOURCE OF TRUTH for all storage)\n├── context/\n│ ├── now.md # Current task (generated from prjct.db)\n│ └── next.md # Queue (generated from prjct.db)\n├── config/\n│ └── skills.json # Agent-to-skill mappings\n├── agents/ # Domain specialists (auto-generated)\n└── sync/\n └── pending.json # Events for backend sync\n```\n\n### How to interact with storage\n\n- **Read state**: Use `prjct status`, `prjct dash`, `prjct next` CLI commands\n- **Write state**: Use `prjct` CLI commands (task, done, pause, resume, etc.)\n- **Issue tracker setup**: Use `prjct linear setup` or `prjct jira setup` (MCP/OAuth)\n- **Never** read/write JSON files in `storage/` or `memory/` directories\n\n---\n\n## LEGACY JSON Schemas (for reference only)\n\n> **WARNING**: These JSON schemas are LEGACY documentation only. The `storage/` and `memory/` directories are no longer used. All data lives in `prjct.db` (SQLite). 
Do NOT read or write these files.\n\n### state.json (LEGACY)\n\n```json\n{\n \"task\": {\n \"id\": \"uuid-v4\",\n \"title\": \"string\",\n \"type\": \"feature|bug|improvement|refactor|chore\",\n \"status\": \"active|paused|done\",\n \"branch\": \"string|null\",\n \"subtasks\": [\n {\n \"id\": \"uuid-v4\",\n \"title\": \"string\",\n \"status\": \"pending|done\"\n }\n ],\n \"currentSubtask\": 0,\n \"createdAt\": \"2024-01-15T10:30:00.000Z\",\n \"updatedAt\": \"2024-01-15T10:30:00.000Z\"\n }\n}\n```\n\n**Empty state (no active task):**\n```json\n{\n \"task\": null\n}\n```\n\n### queue.json (LEGACY)\n\n```json\n{\n \"tasks\": [\n {\n \"id\": \"uuid-v4\",\n \"title\": \"string\",\n \"type\": \"feature|bug|improvement|refactor|chore\",\n \"priority\": 1,\n \"createdAt\": \"2024-01-15T10:30:00.000Z\"\n }\n ],\n \"updatedAt\": \"2024-01-15T10:30:00.000Z\"\n}\n```\n\n### shipped.json (LEGACY)\n\n```json\n{\n \"features\": [\n {\n \"id\": \"uuid-v4\",\n \"name\": \"string\",\n \"version\": \"1.0.0\",\n \"type\": \"feature|bug|improvement|refactor|chore\",\n \"shippedAt\": \"2024-01-15T10:30:00.000Z\"\n }\n ],\n \"updatedAt\": \"2024-01-15T10:30:00.000Z\"\n}\n```\n\n### events.jsonl (LEGACY - now stored in SQLite `events` table)\n\nPreviously append-only JSONL. 
Now stored in SQLite.\n\n```jsonl\n{\"type\":\"task.created\",\"timestamp\":\"2024-01-15T10:30:00.000Z\",\"data\":{\"taskId\":\"uuid\",\"title\":\"string\"}}\n{\"type\":\"task.started\",\"timestamp\":\"2024-01-15T10:30:00.000Z\",\"data\":{\"taskId\":\"uuid\"}}\n{\"type\":\"subtask.completed\",\"timestamp\":\"2024-01-15T10:35:00.000Z\",\"data\":{\"taskId\":\"uuid\",\"subtaskIndex\":0}}\n{\"type\":\"task.completed\",\"timestamp\":\"2024-01-15T10:40:00.000Z\",\"data\":{\"taskId\":\"uuid\"}}\n{\"type\":\"feature.shipped\",\"timestamp\":\"2024-01-15T10:45:00.000Z\",\"data\":{\"featureId\":\"uuid\",\"name\":\"string\",\"version\":\"1.0.0\"}}\n```\n\n**Event Types:**\n- `task.created` - New task created\n- `task.started` - Task activated\n- `task.paused` - Task paused\n- `task.resumed` - Task resumed\n- `task.completed` - Task completed\n- `subtask.completed` - Subtask completed\n- `feature.shipped` - Feature shipped\n\n### learnings.jsonl (LEGACY - now stored in SQLite)\n\nPreviously used for LLM-to-LLM knowledge transfer. 
Now stored in SQLite.\n\n```jsonl\n{\"taskId\":\"uuid\",\"linearId\":\"PRJ-123\",\"timestamp\":\"2024-01-15T10:40:00.000Z\",\"learnings\":{\"patterns\":[\"Use NestedContextResolver for hierarchical discovery\"],\"approaches\":[\"Mirror existing method structure when extending\"],\"decisions\":[\"Extended class rather than wrapper for consistency\"],\"gotchas\":[\"Must handle null parent case\"]},\"value\":{\"type\":\"feature\",\"impact\":\"high\",\"description\":\"Hierarchical AGENTS.md support for monorepos\"},\"filesChanged\":[\"core/resolver.ts\",\"core/types.ts\"],\"tags\":[\"agents\",\"hierarchy\",\"monorepo\"]}\n```\n\n**Schema:**\n```json\n{\n \"taskId\": \"uuid-v4\",\n \"linearId\": \"string|null\",\n \"timestamp\": \"2024-01-15T10:40:00.000Z\",\n \"learnings\": {\n \"patterns\": [\"string\"],\n \"approaches\": [\"string\"],\n \"decisions\": [\"string\"],\n \"gotchas\": [\"string\"]\n },\n \"value\": {\n \"type\": \"feature|bugfix|performance|dx|refactor|infrastructure\",\n \"impact\": \"high|medium|low\",\n \"description\": \"string\"\n },\n \"filesChanged\": [\"string\"],\n \"tags\": [\"string\"]\n}\n```\n\n**Why Local Cache**: Enables future semantic retrieval without API latency. 
Will feed into vector DB for cross-session knowledge transfer.\n\n### skills.json\n\n```json\n{\n \"mappings\": {\n \"frontend.md\": [\"frontend-design\"],\n \"backend.md\": [\"javascript-typescript\"],\n \"testing.md\": [\"developer-kit\"]\n },\n \"updatedAt\": \"2024-01-15T10:30:00.000Z\"\n}\n```\n\n### pending.json (sync queue)\n\n```json\n{\n \"events\": [\n {\n \"id\": \"uuid-v4\",\n \"type\": \"task.created\",\n \"timestamp\": \"2024-01-15T10:30:00.000Z\",\n \"data\": {},\n \"synced\": false\n }\n ],\n \"lastSync\": \"2024-01-15T10:30:00.000Z\"\n}\n```\n\n---\n\n## Formatting Rules (MANDATORY)\n\nAll agents MUST follow these rules for cross-agent compatibility:\n\n| Rule | Value |\n|------|-------|\n| JSON indentation | 2 spaces |\n| Trailing commas | NEVER |\n| Key ordering | Logical (as shown in schemas above) |\n| Timestamps | ISO-8601 with milliseconds (`.000Z`) |\n| UUIDs | v4 format (lowercase) |\n| Line endings | LF (not CRLF) |\n| File encoding | UTF-8 without BOM |\n| Empty objects | `{}` |\n| Empty arrays | `[]` |\n| Null values | `null` (lowercase) |\n\n### Timestamp Generation\n\n```bash\n# ALWAYS use dynamic timestamps, NEVER hardcode\nbun -e \"console.log(new Date().toISOString())\" 2>/dev/null || node -e \"console.log(new Date().toISOString())\"\n```\n\n### UUID Generation\n\n```bash\n# ALWAYS generate fresh UUIDs\nbun -e \"console.log(crypto.randomUUID())\" 2>/dev/null || node -e \"console.log(require('crypto').randomUUID())\"\n```\n\n---\n\n## Write Rules (CRITICAL)\n\n### Direct Writes Only\n\n**NEVER use temporary files** - Write directly to final destination:\n\n```\nWRONG: Create `.tmp/file.json`, then `mv` to final path\nCORRECT: Use prjctDb.setDoc() or StorageManager.write() to write to SQLite\n```\n\n### Atomic Updates\n\nAll writes go through SQLite which handles atomicity via WAL mode:\n```typescript\n// StorageManager pattern (preferred):\nawait stateStorage.update(projectId, (state) => {\n state.field = newValue\n return 
state\n})\n\n// Direct kv_store pattern:\nprjctDb.setDoc(projectId, 'key', data)\n```\n\n### NEVER Do These\n\n- Read or write JSON files in `storage/` or `memory/` directories\n- Use `.tmp/` directories\n- Use `mv` or `rename` operations for storage files\n- Create backup files like `*.bak` or `*.old`\n- Bypass `prjct` CLI to write directly to `prjct.db`\n\n---\n\n## Cross-Agent Compatibility\n\n### Why This Matters\n\n1. **User freedom**: Switch between Claude and Gemini freely\n2. **Remote sync**: Storage will sync to prjct.app backend\n3. **Single truth**: Both agents produce identical output\n\n### Verification Test\n\n```bash\n# Start task with Claude\np. task \"add feature X\"\n\n# Switch to Gemini, continue\np. done # Should work seamlessly\n\n# Switch back to Claude\np. ship # Should read Gemini's changes correctly\n\n# All agents read from the same prjct.db via CLI commands\nprjct status # Works from any agent\n```\n\n### Remote Sync Flow\n\n```\nLocal Storage: prjct.db (Claude/Gemini)\n ↓\n sync/pending.json (events queue)\n ↓\n prjct.app API\n ↓\n Global Remote Storage\n ↓\n Any device, any agent\n```\n\n---\n\n## MCP Issue Tracker Strategy\n\nIssue tracker integrations are MCP-only.\n\n### Rules\n\n- `prjct` CLI does not call Linear/Jira SDKs or REST APIs directly.\n- Issue operations (`sync`, `list`, `get`, `start`, `done`, `update`, etc.) are delegated to MCP tools in the AI client.\n- `p. 
sync` refreshes project context and agent artifacts, not issue tracker payloads.\n- Local storage keeps task linkage metadata (for example `linearId`) and project workflow state in SQLite.\n\n### Setup\n\n- `prjct linear setup`\n- `prjct jira setup`\n\n### Operational Model\n\n```\nAI client MCP tools <-> Linear/Jira\n |\n v\n prjct workflow state (prjct.db)\n```\n\nThe CLI remains the source of truth for local project/task state.\nIssue-system mutations happen through MCP operations in the active AI session.\n\n---\n\n**Version**: 2.0.0\n**Last Updated**: 2026-02-10\n","global/WINDSURF.md":"---\ntrigger: always_on\ndescription: \"prjct - Context layer for AI coding agents\"\n---\n\n<!-- prjct:start - DO NOT REMOVE THIS MARKER -->\n# p/ — Context layer for AI agents\n\nSkills auto-activate for: task, done, pause, resume, ship, next, sync, bug, workflow, enrich, linear, jira, plan, velocity, tokens\nOther commands: run `prjct <command> --md` and follow CLI output\n\nFlow: idea → roadmap → next → task → done → ship → next (cycle until plan complete)\n\nData:\n- prjct runs → LLM generates relevant data → prjct stores it → LLM requests it from prjct → LLM uses it\n- Commit footer: `Generated with [p/](https://www.prjct.app/)`\n- Path resolution: `.prjct/prjct.config.json` → `~/.prjct-cli/projects/{projectId}`\n- Storage: `prjct` CLI (SQLite internally)\n\n**Auto-managed by prjct-cli** | https://prjct.app\n<!-- prjct:end - DO NOT REMOVE THIS MARKER -->\n","mcp-config.json":"{\n \"mcpServers\": {\n \"context7\": {\n \"command\": \"npx\",\n \"args\": [\"-y\", \"@upstash/context7-mcp@latest\"],\n \"description\": \"Library documentation lookup\"\n },\n \"linear\": {\n \"command\": \"npx\",\n \"args\": [\"-y\", \"mcp-remote\", \"https://mcp.linear.app/mcp\"],\n \"description\": \"Linear MCP server (OAuth)\"\n },\n \"jira\": {\n \"command\": \"npx\",\n \"args\": [\"-y\", \"mcp-remote\", \"https://mcp.atlassian.com/v1/mcp\"],\n \"description\": \"Atlassian MCP server for Jira 
(OAuth)\"\n }\n },\n \"usage\": {\n \"context7\": {\n \"when\": [\"Looking up library/framework documentation\", \"Need current API docs\"],\n \"tools\": [\"resolve-library-id\", \"get-library-docs\"]\n }\n },\n \"integrations\": {\n \"linear\": \"MCP - Run `prjct linear setup`\",\n \"jira\": \"MCP - Run `prjct jira setup`\"\n }\n}\n","permissions/default.jsonc":"{\n // Default permissions preset for prjct-cli\n // Safe defaults with protection against destructive operations\n\n \"bash\": {\n // Safe read-only commands - always allowed\n \"git status*\": \"allow\",\n \"git log*\": \"allow\",\n \"git diff*\": \"allow\",\n \"git branch*\": \"allow\",\n \"ls*\": \"allow\",\n \"pwd\": \"allow\",\n \"cat*\": \"allow\",\n \"head*\": \"allow\",\n \"tail*\": \"allow\",\n \"grep*\": \"allow\",\n \"find*\": \"allow\",\n \"which*\": \"allow\",\n \"node -e*\": \"allow\",\n \"bun -e*\": \"allow\",\n \"npm list*\": \"allow\",\n \"npx tsc --noEmit*\": \"allow\",\n\n // Potentially destructive - ask first\n \"rm -rf*\": \"ask\",\n \"rm -r*\": \"ask\",\n \"git push*\": \"ask\",\n \"git reset --hard*\": \"ask\",\n \"npm publish*\": \"ask\",\n \"chmod*\": \"ask\",\n\n // Always denied - too dangerous\n \"rm -rf /*\": \"deny\",\n \"rm -rf ~/*\": \"deny\",\n \"sudo*\": \"deny\"\n },\n\n \"files\": {\n \"read\": {\n \"**/*\": \"allow\"\n },\n \"write\": {\n \"**/*\": \"allow\"\n },\n \"delete\": {\n \"**/*\": \"ask\"\n }\n },\n\n \"web\": {\n \"enabled\": true\n },\n\n \"doomLoop\": {\n \"enabled\": true,\n \"maxRetries\": 3\n },\n\n \"externalDirectories\": \"ask\"\n}\n","permissions/permissive.jsonc":"{\n // Permissive preset for prjct-cli\n // For trusted environments - minimal restrictions\n\n \"bash\": {\n // Most commands allowed\n \"git*\": \"allow\",\n \"npm*\": \"allow\",\n \"bun*\": \"allow\",\n \"node*\": \"allow\",\n \"ls*\": \"allow\",\n \"cat*\": \"allow\",\n \"mkdir*\": \"allow\",\n \"cp*\": \"allow\",\n \"mv*\": \"allow\",\n \"rm*\": \"allow\",\n \"chmod*\": 
\"allow\",\n\n // Still protect against catastrophic mistakes\n \"rm -rf /*\": \"deny\",\n \"rm -rf ~/*\": \"deny\",\n \"sudo rm -rf*\": \"deny\",\n \":(){ :|:& };:*\": \"deny\"\n },\n\n \"files\": {\n \"read\": {\n \"**/*\": \"allow\"\n },\n \"write\": {\n \"**/*\": \"allow\"\n },\n \"delete\": {\n \"**/*\": \"allow\",\n \"**/node_modules/**\": \"deny\" // Protect dependencies\n }\n },\n\n \"web\": {\n \"enabled\": true\n },\n\n \"doomLoop\": {\n \"enabled\": true,\n \"maxRetries\": 5\n },\n\n \"externalDirectories\": \"allow\"\n}\n","permissions/strict.jsonc":"{\n // Strict permissions preset for prjct-cli\n // Maximum safety - requires approval for most operations\n\n \"bash\": {\n // Only read-only commands allowed\n \"git status\": \"allow\",\n \"git log*\": \"allow\",\n \"git diff*\": \"allow\",\n \"ls*\": \"allow\",\n \"pwd\": \"allow\",\n \"cat*\": \"allow\",\n \"head*\": \"allow\",\n \"tail*\": \"allow\",\n \"which*\": \"allow\",\n\n // Everything else requires approval\n \"git*\": \"ask\",\n \"npm*\": \"ask\",\n \"bun*\": \"ask\",\n \"node*\": \"ask\",\n \"rm*\": \"ask\",\n \"mv*\": \"ask\",\n \"cp*\": \"ask\",\n \"mkdir*\": \"ask\",\n\n // Always denied\n \"rm -rf*\": \"deny\",\n \"sudo*\": \"deny\",\n \"chmod 777*\": \"deny\"\n },\n\n \"files\": {\n \"read\": {\n \"**/*\": \"allow\",\n \"**/.*\": \"ask\", // Hidden files need approval\n \"**/.env*\": \"deny\" // Never read env files\n },\n \"write\": {\n \"**/*\": \"ask\" // All writes need approval\n },\n \"delete\": {\n \"**/*\": \"deny\" // No deletions without explicit override\n }\n },\n\n \"web\": {\n \"enabled\": true,\n \"blockedDomains\": [\"localhost\", \"127.0.0.1\", \"internal\"]\n },\n\n \"doomLoop\": {\n \"enabled\": true,\n \"maxRetries\": 2\n },\n\n \"externalDirectories\": \"deny\"\n}\n","planning-methodology.md":"# Software Planning Methodology for prjct\n\nThis methodology guides the AI through developing ideas into complete technical specifications.\n\n## Phase 1: Discovery & Problem 
Definition\n\n### Questions to Ask\n- What specific problem does this solve?\n- Who is the target user?\n- What's the budget and timeline?\n- What happens if this problem isn't solved?\n\n### Output\n- Problem statement\n- User personas\n- Business constraints\n- Success metrics\n\n## Phase 2: User Flows & Journeys\n\n### Process\n1. Map primary user journey\n2. Identify entry points\n3. Define success states\n4. Document error states\n5. Note edge cases\n\n### Jobs-to-be-Done\nWhen [situation], I want to [motivation], so I can [expected outcome]\n\n## Phase 3: Domain Modeling\n\n### Entity Definition\nFor each entity, define:\n- Description\n- Attributes (name, type, constraints)\n- Relationships\n- Business rules\n- Lifecycle states\n\n### Bounded Contexts\nGroup entities into logical boundaries with:\n- Owned entities\n- External dependencies\n- Events published/consumed\n\n## Phase 4: API Contract Design\n\n### Style Selection\n| Style | Best For |\n|----------|----------|\n| REST | Simple CRUD, broad compatibility |\n| GraphQL | Complex data requirements |\n| tRPC | Full-stack TypeScript |\n| gRPC | Microservices |\n\n### Endpoint Specification\n- Method/Type\n- Path/Name\n- Authentication\n- Input/Output schemas\n- Error responses\n\n## Phase 5: System Architecture\n\n### Pattern Selection\n| Pattern | Best For |\n|---------|----------|\n| Modular Monolith | Small team, fast iteration |\n| Serverless-First | Variable load, event-driven |\n| Microservices | Large team, complex domain |\n\n### C4 Model\n1. Context - System and external actors\n2. Container - Major components\n3. 
Component - Internal structure\n\n## Phase 6: Data Architecture\n\n### Database Selection\n| Type | Options | Best For |\n|------|---------|----------|\n| Relational | PostgreSQL | ACID, structured data |\n| Document | MongoDB | Flexible schema |\n| Key-Value | Redis | Caching, sessions |\n\n### Schema Design\n- Tables and columns\n- Indexes\n- Constraints\n- Relationships\n\n## Phase 7: Tech Stack Decision\n\n### Frontend Stack\n- Framework (Next.js, Remix, SvelteKit)\n- Styling (Tailwind, CSS Modules)\n- State management (Zustand, Jotai)\n- Data fetching (TanStack Query, SWR)\n\n### Backend Stack\n- Runtime (Node.js, Bun)\n- Framework (Next.js API, Hono)\n- ORM (Drizzle, Prisma)\n- Validation (Zod, Valibot)\n\n### Infrastructure\n- Hosting (Vercel, Railway, Fly.io)\n- Database (Neon, PlanetScale)\n- Cache (Upstash, Redis)\n- Monitoring (Sentry, Axiom)\n\n## Phase 8: Implementation Roadmap\n\n### MVP Scope Definition\n- Must-have features (P0)\n- Should-have features (P1)\n- Nice-to-have features (P2)\n- Future considerations (P3)\n\n### Development Phases\n1. Foundation - Setup, core infrastructure\n2. Core Features - Primary functionality\n3. Polish & Launch - Optimization, deployment\n\n### Risk Assessment\n- Technical risks and mitigation\n- Business risks and mitigation\n- Dependencies and assumptions\n\n## Output Structure\n\nWhen complete, generate:\n\n1. **Executive Summary** - Problem, solution, key decisions\n2. **Architecture Documents** - All phases detailed\n3. **Implementation Plan** - Prioritized tasks with estimates\n4. **Decision Log** - Key choices and reasoning\n\n## Interactive Development Process\n\n1. **Classification**: Determine if idea needs full architecture\n2. **Discovery**: Ask clarifying questions\n3. **Generation**: Create architecture phase by phase\n4. **Validation**: Review with user at key points\n5. **Refinement**: Iterate based on feedback\n6. 
**Output**: Save complete specification\n\n## Success Criteria\n\nA complete architecture includes:\n- Clear problem definition\n- User flows mapped\n- Domain model defined\n- API contracts specified\n- Tech stack chosen\n- Database schema designed\n- Implementation roadmap created\n- Risk assessment completed\n\n## Templates\n\n### Entity Template\n```\nEntity: [Name]\n├── Description: [What it represents]\n├── Attributes:\n│ ├── id: uuid (primary key)\n│ └── [field]: [type] ([constraints])\n├── Relationships: [connections]\n├── Rules: [invariants]\n└── States: [lifecycle]\n```\n\n### API Endpoint Template\n```\nOperation: [Name]\n├── Method: [GET/POST/PUT/DELETE]\n├── Path: [/api/resource]\n├── Auth: [Required/Optional]\n├── Input: {schema}\n├── Output: {schema}\n└── Errors: [codes and descriptions]\n```\n\n### Phase Template\n```\nPhase: [Name]\n├── Duration: [timeframe]\n├── Tasks:\n│ ├── [Task 1]\n│ └── [Task 2]\n├── Deliverable: [outcome]\n└── Dependencies: [prerequisites]\n```","skills/code-review.md":"---\nname: Code Review\ndescription: Review code changes for quality, security, and best practices\nagent: general\ntags: [review, quality, security]\nversion: 1.0.0\n---\n\n# Code Review Skill\n\nReview the provided code changes with focus on:\n\n## Quality Checks\n- Code readability and clarity\n- Naming conventions\n- Function/method length\n- Code duplication\n- Error handling\n\n## Security Checks\n- Input validation\n- SQL injection risks\n- XSS vulnerabilities\n- Sensitive data exposure\n- Authentication/authorization issues\n\n## Best Practices\n- SOLID principles\n- DRY (Don't Repeat Yourself)\n- Single responsibility\n- Proper typing (TypeScript)\n- Documentation where needed\n\n## Output Format\n\nProvide feedback in this structure:\n\n### Summary\nBrief overview of the changes\n\n### Issues Found\n- 🔴 **Critical**: Must fix before merge\n- 🟡 **Warning**: Should fix, but not blocking\n- 🔵 **Suggestion**: Nice to have improvements\n\n### 
Recommendations\nSpecific actionable items to improve the code\n","skills/debug.md":"---\nname: Debug\ndescription: Systematic debugging to find and fix issues\nagent: general\ntags: [debug, fix, troubleshoot]\nversion: 1.0.0\n---\n\n# Debug Skill\n\nSystematically debug the reported issue.\n\n## Process\n\n### Step 1: Understand the Problem\n- What is the expected behavior?\n- What is the actual behavior?\n- When did it start happening?\n- Can it be reproduced consistently?\n\n### Step 2: Gather Information\n- Read relevant error messages\n- Check logs\n- Review recent changes\n- Identify affected code paths\n\n### Step 3: Form Hypothesis\n- What could cause this behavior?\n- List possible causes in order of likelihood\n- Identify the most likely root cause\n\n### Step 4: Test Hypothesis\n- Add logging if needed\n- Isolate the problematic code\n- Verify the root cause\n\n### Step 5: Fix\n- Implement the minimal fix\n- Ensure no side effects\n- Add tests if applicable\n\n### Step 6: Verify\n- Confirm the issue is resolved\n- Check for regressions\n- Document the fix\n\n## Output Format\n\n```\n## Issue\n[Description of the problem]\n\n## Root Cause\n[What was causing the issue]\n\n## Fix\n[What was changed to fix it]\n\n## Prevention\n[How to prevent similar issues]\n```\n","skills/refactor.md":"---\nname: Refactor\ndescription: Refactor code for better structure, readability, and maintainability\nagent: general\ntags: [refactor, cleanup, improvement]\nversion: 1.0.0\n---\n\n# Refactor Skill\n\nRefactor the specified code with these goals:\n\n## Objectives\n1. **Improve Readability** - Clear naming, logical structure\n2. **Reduce Complexity** - Simplify nested logic, extract functions\n3. **Enhance Maintainability** - Make future changes easier\n4. 
**Preserve Behavior** - No functional changes unless requested\n\n## Approach\n\n### Step 1: Analyze Current Code\n- Identify pain points\n- Note code smells\n- Understand dependencies\n\n### Step 2: Plan Changes\n- List specific refactoring operations\n- Prioritize by impact\n- Consider breaking changes\n\n### Step 3: Execute\n- Make incremental changes\n- Test after each change\n- Document decisions\n\n## Common Refactorings\n- Extract function/method\n- Rename for clarity\n- Remove duplication\n- Simplify conditionals\n- Replace magic numbers with constants\n- Add type annotations\n\n## Output\n- Modified code\n- Brief explanation of changes\n- Any trade-offs made\n","tools/bash.txt":"Execute shell commands in a persistent bash session.\n\nUse this tool for terminal operations like git, npm, docker, build commands, and system utilities. NOT for file operations (use Read, Write, Edit instead).\n\nCapabilities:\n- Run any shell command\n- Persistent session (environment persists between calls)\n- Support for background execution\n- Configurable timeout (up to 10 minutes)\n\nBest practices:\n- Quote paths with spaces using double quotes\n- Use absolute paths to avoid cd\n- Chain dependent commands with &&\n- Run independent commands in parallel (multiple tool calls)\n- Never use for file reading (use Read tool)\n- Never use echo/printf to communicate (output text directly)\n\nGit operations:\n- Never update git config\n- Never use destructive commands without explicit request\n- Always use HEREDOC for commit messages\n","tools/edit.txt":"Edit files using exact string replacement.\n\nUse this tool to make precise changes to existing files. 
Requires reading the file first to ensure accurate matching.\n\nCapabilities:\n- Replace exact string matches in files\n- Support for replace_all to change all occurrences\n- Preserves file formatting and indentation\n\nRequirements:\n- Must read the file first (tool will error otherwise)\n- old_string must be unique in the file (or use replace_all)\n- Preserve exact indentation from the original\n\nBest practices:\n- Include enough context to make old_string unique\n- Use replace_all for renaming variables/functions\n- Never include line numbers in old_string or new_string\n","tools/glob.txt":"Find files by pattern matching.\n\nUse this tool to locate files using glob patterns. Fast and efficient for any codebase size.\n\nCapabilities:\n- Match files using glob patterns (e.g., \"**/*.ts\", \"src/**/*.tsx\")\n- Returns paths sorted by modification time\n- Works with any codebase size\n\nPattern examples:\n- \"**/*.ts\" - all TypeScript files\n- \"src/**/*.tsx\" - React components in src\n- \"**/test*.ts\" - test files anywhere\n- \"core/**/*\" - all files in core directory\n\nBest practices:\n- Use specific patterns to narrow results\n- Prefer glob over bash find command\n- Run multiple patterns in parallel if needed\n","tools/grep.txt":"Search file contents using regex patterns.\n\nUse this tool to search for code patterns, function definitions, imports, and text across the codebase. 
Built on ripgrep for speed.\n\nCapabilities:\n- Full regex syntax support\n- Filter by file type or glob pattern\n- Multiple output modes: files_with_matches, content, count\n- Context lines before/after matches (-A, -B, -C)\n- Multiline matching support\n\nOutput modes:\n- files_with_matches (default): just file paths\n- content: matching lines with context\n- count: match counts per file\n\nBest practices:\n- Use specific patterns to reduce noise\n- Filter by file type when possible (type: \"ts\")\n- Use content mode with context for understanding matches\n- Never use bash grep/rg directly (use this tool)\n","tools/read.txt":"Read files from the filesystem.\n\nUse this tool to read file contents before making edits. Always read a file before attempting to modify it to understand the current state and structure.\n\nCapabilities:\n- Read any text file by absolute path\n- Supports line offset and limit for large files\n- Returns content with line numbers for easy reference\n- Can read images, PDFs, and Jupyter notebooks\n\nBest practices:\n- Always read before editing\n- Use offset/limit for files > 2000 lines\n- Read multiple related files in parallel when exploring\n","tools/task.txt":"Launch specialized agents for complex tasks.\n\nUse this tool to delegate multi-step tasks to autonomous agents. Each agent type has specific capabilities and tools.\n\nAgent types:\n- Explore: Fast codebase exploration, file search, pattern finding\n- Plan: Software architecture, implementation planning\n- general-purpose: Research, code search, multi-step tasks\n\nWhen to use:\n- Complex multi-step tasks\n- Open-ended exploration\n- When multiple search rounds may be needed\n- Tasks matching agent descriptions\n\nBest practices:\n- Provide clear, detailed prompts\n- Launch multiple agents in parallel when independent\n- Prefer direct Glob/Grep for simple exploration. 
Subagents inherit all MCP tool schemas from the parent session and can start heavy — only delegate to Explore when the search is genuinely open-ended and benefits from parallel rounds\n- Use Plan for implementation design\n","tools/webfetch.txt":"Fetch and analyze web content.\n\nUse this tool to retrieve content from URLs and process it with AI. Useful for documentation, API references, and external resources.\n\nCapabilities:\n- Fetch any URL content\n- Automatic HTML to markdown conversion\n- AI-powered content extraction based on prompt\n- 15-minute cache for repeated requests\n- Automatic HTTP to HTTPS upgrade\n\nBest practices:\n- Provide specific prompts for extraction\n- Handle redirects by following the provided URL\n- Use for documentation and reference lookup\n- Results may be summarized for large content\n","tools/websearch.txt":"Search the web for current information.\n\nUse this tool to find up-to-date information beyond the knowledge cutoff. Returns search results with links.\n\nCapabilities:\n- Real-time web search\n- Domain filtering (allow/block specific sites)\n- Returns formatted results with URLs\n\nRequirements:\n- MUST include Sources section with URLs after answering\n- Use current year in queries for recent info\n\nBest practices:\n- Be specific in search queries\n- Include year for time-sensitive searches\n- Always cite sources in response\n- Filter domains when targeting specific sites\n","tools/write.txt":"Write or create files on the filesystem.\n\nUse this tool to create new files or completely overwrite existing ones. 
For modifications to existing files, prefer the Edit tool instead.\n\nCapabilities:\n- Create new files with specified content\n- Overwrite existing files completely\n- Create parent directories automatically\n\nRequirements:\n- Must read existing file first before overwriting\n- Use absolute paths only\n\nBest practices:\n- Prefer Edit for modifications to existing files\n- Only create new files when truly necessary\n- Never create documentation files unless explicitly requested\n","windsurf/router.md":"---\ntrigger: always_on\ndescription: \"prjct - Context layer for AI coding agents\"\n---\n\n# prjct\n\nCore: /sync, /task, /done, /ship, /pause, /resume, /next, /bug, /workflow\nOther: run `prjct <command> --md` and follow CLI output\n","windsurf/workflows/ship.md":"# /ship - Ship feature\n\n**ARGUMENTS**: {{args}}\n\nRun: `prjct ship {{args}} --md`\nFollow CLI output.\n","windsurf/workflows/task.md":"# /task - Start a task\n\n**ARGUMENTS**: {{args}}\n\nRun: `prjct task {{args}} --md`\nFollow CLI output.\n"}
+ {"agentic/checklist-routing.md":"---\nallowed-tools: [Read, Glob]\ndescription: 'Determine which quality checklists to apply - Claude decides'\n---\n\n# Checklist Routing Instructions\n\n## Objective\n\nDetermine which quality checklists are relevant for a task by analyzing the ACTUAL task and its scope.\n\n## Step 1: Understand the Task\n\nRead the task description and identify:\n\n- What type of work is being done? (new feature, bug fix, refactor, infra, docs)\n- What domains are affected? (code, UI, API, database, deployment)\n- What is the scope? (small fix, major feature, architectural change)\n\n## Step 2: Consider Task Domains\n\nEach task can touch multiple domains. Consider:\n\n| Domain | Signals |\n|--------|---------|\n| Code Quality | Writing/modifying any code |\n| Architecture | New components, services, or major refactors |\n| UX/UI | User-facing changes, CLI output, visual elements |\n| Infrastructure | Deployment, containers, CI/CD, cloud resources |\n| Security | Auth, user data, external inputs, secrets |\n| Testing | New functionality, bug fixes, critical paths |\n| Documentation | Public APIs, complex features, breaking changes |\n| Performance | Data processing, loops, network calls, rendering |\n| Accessibility | User interfaces (web, mobile, CLI) |\n| Data | Database operations, caching, data transformations |\n\n## Step 3: Match Task to Checklists\n\nBased on your analysis, select relevant checklists:\n\n**DO NOT assume:**\n- Every task needs all checklists\n- \"Frontend\" = only UX checklist\n- \"Backend\" = only Code Quality checklist\n\n**DO analyze:**\n- What the task actually touches\n- What quality dimensions matter for this specific work\n- What could go wrong if not checked\n\n## Available Checklists\n\nLocated in `templates/checklists/`:\n\n| Checklist | When to Apply |\n|-----------|---------------|\n| `code-quality.md` | Any code changes (any language) |\n| `architecture.md` | New modules, services, significant structural 
changes |\n| `ux-ui.md` | User-facing interfaces (web, mobile, CLI, API DX) |\n| `infrastructure.md` | Deployment, containers, CI/CD, cloud resources |\n| `security.md` | ALWAYS for: auth, user input, external APIs, secrets |\n| `testing.md` | New features, bug fixes, refactors |\n| `documentation.md` | Public APIs, complex features, configuration changes |\n| `performance.md` | Data-intensive operations, critical paths |\n| `accessibility.md` | Any user interface work |\n| `data.md` | Database, caching, data transformations |\n\n## Decision Process\n\n1. Read task description\n2. Identify primary work domain\n3. List secondary domains affected\n4. Select 2-4 most relevant checklists\n5. Consider Security (almost always relevant)\n\n## Output\n\nReturn selected checklists with reasoning:\n\n```json\n{\n \"checklists\": [\"code-quality\", \"security\", \"testing\"],\n \"reasoning\": \"Task involves new API endpoint (code), handles user input (security), and adds business logic (testing)\",\n \"priority_items\": [\"Input validation\", \"Error handling\", \"Happy path tests\"],\n \"skipped\": {\n \"accessibility\": \"No user interface changes\",\n \"infrastructure\": \"No deployment changes\"\n }\n}\n```\n\n## Rules\n\n- **Task-driven** - Focus on what the specific task needs\n- **Less is more** - 2-4 focused checklists beat 10 unfocused\n- **Security is special** - Default to including unless clearly irrelevant\n- **Explain your reasoning** - Don't just pick, justify selections AND skips\n- **Context matters** - Small typo fix ≠ major refactor in checklist needs\n","agentic/orchestrator.md":"# Orchestrator\n\nLoad project context for task execution.\n\n## Flow\n\n```\np. 
{command} → Load Config → Load State → Load Agents → Execute\n```\n\n## Step 1: Load Config\n\n```\nREAD: .prjct/prjct.config.json → {projectId}\nSET: {globalPath} = ~/.prjct-cli/projects/{projectId}\n```\n\n## Step 2: Load State\n\n```bash\nprjct dash compact\n# Parse output to determine: {hasActiveTask}\n```\n\n## Step 3: Load Agents\n\n```\nGLOB: {globalPath}/agents/*.md\nFOR EACH agent: READ and store content\n```\n\n## Step 4: Detect Domains\n\nAnalyze task → identify domains:\n- frontend: UI, forms, components\n- backend: API, server logic\n- database: Schema, queries\n- testing: Tests, mocks\n- devops: CI/CD, deployment\n\nIF task spans 3+ domains → fragment into subtasks\n\n## Step 5: Build Context\n\nCombine: state + agents + detected domains → execute\n\n## Output Format\n\n```\n🎯 Task: {description}\n📦 Context: Agent: {name} | State: {status} | Domains: {list}\n```\n\n## Error Handling\n\n| Situation | Action |\n|-----------|--------|\n| No config | \"Run `p. init` first\" |\n| No state | Create default |\n| No agents | Warn, continue |\n\n## Disable\n\n```yaml\n---\norchestrator: false\n---\n```\n","agentic/task-fragmentation.md":"# Task Fragmentation\n\nBreak complex multi-domain tasks into subtasks.\n\n## When to Fragment\n\n- Spans 3+ domains (frontend + backend + database)\n- Has natural dependency order\n- Too large for single execution\n\n## When NOT to Fragment\n\n- Single domain only\n- Small, focused change\n- Already atomic\n\n## Dependency Order\n\n1. **Database** (models first)\n2. **Backend** (API using models)\n3. **Frontend** (UI using API)\n4. **Testing** (tests for all)\n5. **DevOps** (deploy)\n\n## Subtask Format\n\n```json\n{\n \"subtasks\": [{\n \"id\": \"subtask-1\",\n \"description\": \"Create users table\",\n \"domain\": \"database\",\n \"agent\": \"database.md\",\n \"dependsOn\": []\n }]\n}\n```\n\n## Output\n\n```\n🎯 Task: {task}\n\n📋 Subtasks:\n├─ 1. [database] Create schema\n├─ 2. [backend] Create API\n└─ 3. 
[frontend] Create form\n```\n\n## Delegation\n\n```\nTask(\n subagent_type: 'general-purpose',\n prompt: '\n Read: {agentsPath}/{domain}.md\n Subtask: {description}\n Previous: {previousSummary}\n Focus ONLY on this subtask.\n '\n)\n```\n\n## Progress\n\n```\n📊 Progress: 2/4 (50%)\n✅ 1. [database] Done\n✅ 2. [backend] Done\n▶️ 3. [frontend] ← CURRENT\n⏳ 4. [testing]\n```\n\n## Error Handling\n\n```\n❌ Subtask 2/4 failed\n\nOptions:\n1. Retry\n2. Skip and continue\n3. Abort\n```\n\n## Anti-Patterns\n\n- Over-fragmentation: 10 subtasks for \"add button\"\n- Under-fragmentation: 1 subtask for \"add auth system\"\n- Wrong order: Frontend before backend\n","agents/AGENTS.md":"# AGENTS.md\n\nAI assistant guidance for **prjct-cli** - context layer for AI coding agents. Works with Claude Code, Gemini CLI, and more.\n\n## What This Is\n\n**NOT** project management. NO sprints, story points, ceremonies, or meetings.\n\n**IS** a context layer that gives AI agents the project knowledge they need to work effectively.\n\n---\n\n## Dynamic Agent Generation\n\nGenerate agents during `p. sync` based on analysis:\n\n```javascript\nawait generator.generateDynamicAgent('agent-name', {\n role: 'Role Description',\n expertise: 'Technologies, versions, tools',\n responsibilities: 'What they handle'\n})\n```\n\n### Guidelines\n1. Read `analysis/repo-summary.md` first\n2. Create specialists for each major technology\n3. Name descriptively: `go-backend` not `be`\n4. Include versions and frameworks found\n5. Follow project-specific patterns\n\n## Architecture\n\n**Global**: `~/.prjct-cli/projects/{id}/`\n```\nprjct.db # SQLite database (all state)\ncontext/ # now.md, next.md\nagents/ # domain specialists\n```\n\n**Local**: `.prjct/prjct.config.json` (read-only)\n\n## Commands\n\n| Command | Action |\n|---------|--------|\n| `p. init` | Initialize |\n| `p. sync` | Analyze + generate agents |\n| `p. task X` | Start task |\n| `p. done` | Complete subtask |\n| `p. ship` | Ship feature |\n| `p. 
next` | Show queue |\n\n## Intent Detection\n\n| Intent | Command |\n|--------|---------|\n| Start task | `p. task` |\n| Finish | `p. done` |\n| Ship | `p. ship` |\n| What's next | `p. next` |\n\n## Implementation\n\n- Atomic operations via `prjct` CLI\n- CLI handles all state persistence (SQLite)\n- Handle missing config gracefully\n\n## Harness mode (opt-in)\n\nProjects that want a multi-agent workflow can run `prjct harness install` to drop a leader/implementer/reviewer trio into `.claude/agents/`, a project `CHECKPOINTS.md`, and a CLAUDE.md snippet that locks the main session into orchestrator role. Templates live in `templates/harness/`. Uninstall with `prjct harness uninstall`. Strictly opt-in — not invoked by `init`/`sync`.\n","antigravity/SKILL.md":"---\nname: prjct\ndescription: Use when user mentions p., prjct, task tracking, or workflow commands.\n---\n\n# prjct — Context layer for AI agents\n\nGrammar: `p. <command> [args]` or `prjct <command> --md`\n\nCore commands: sync, task, done, ship, pause, resume, next, bug, workflow, tokens\nIntegrations: linear, jira, enrich\nOther: run `prjct <command> --md` and follow CLI output\n\nFlow: idea → roadmap → next → task → done → ship → next (cycle until plan complete)\n\nRules:\n- prjct runs → LLM generates relevant data → prjct stores it → LLM requests it from prjct → LLM uses it\n- All commits include footer: `Generated with [p/](https://www.prjct.app/)`\n- All storage through `prjct` CLI (SQLite internally)\n- Start code tasks with `p. 
task` and follow Context Contract from CLI output\n","checklists/architecture.md":"# Architecture Checklist\n\n> Applies to ANY system architecture\n\n## Design Principles\n- [ ] Clear separation of concerns\n- [ ] Loose coupling between components\n- [ ] High cohesion within modules\n- [ ] Single source of truth for data\n- [ ] Explicit dependencies (no hidden coupling)\n\n## Scalability\n- [ ] Stateless where possible\n- [ ] Horizontal scaling considered\n- [ ] Bottlenecks identified\n- [ ] Caching strategy defined\n\n## Resilience\n- [ ] Failure modes documented\n- [ ] Graceful degradation planned\n- [ ] Recovery procedures defined\n- [ ] Circuit breakers where needed\n\n## Maintainability\n- [ ] Clear boundaries between layers\n- [ ] Easy to test in isolation\n- [ ] Configuration externalized\n- [ ] Logging and observability built-in\n","checklists/code-quality.md":"# Code Quality Checklist\n\n> Universal principles for ANY programming language\n\n## Universal Principles\n- [ ] Single Responsibility: Each unit does ONE thing well\n- [ ] DRY: No duplicated logic (extract shared code)\n- [ ] KISS: Simplest solution that works\n- [ ] Clear naming: Self-documenting identifiers\n- [ ] Consistent patterns: Match existing codebase style\n\n## Error Handling\n- [ ] All error paths handled gracefully\n- [ ] Meaningful error messages\n- [ ] No silent failures\n- [ ] Proper resource cleanup (files, connections, memory)\n\n## Edge Cases\n- [ ] Null/nil/None handling\n- [ ] Empty collections handled\n- [ ] Boundary conditions tested\n- [ ] Invalid input rejected early\n\n## Code Organization\n- [ ] Functions/methods are small and focused\n- [ ] Related code grouped together\n- [ ] Clear module/package boundaries\n- [ ] No circular dependencies\n","checklists/data.md":"# Data Checklist\n\n> Applies to: SQL, NoSQL, GraphQL, File storage, Caching\n\n## Data Integrity\n- [ ] Schema/structure defined\n- [ ] Constraints enforced\n- [ ] Transactions used appropriately\n- [ ] 
Referential integrity maintained\n\n## Query Performance\n- [ ] Indexes on frequent queries\n- [ ] N+1 queries eliminated\n- [ ] Query complexity analyzed\n- [ ] Pagination for large datasets\n\n## Data Operations\n- [ ] Migrations versioned and reversible\n- [ ] Backup and restore tested\n- [ ] Data validation at boundary\n- [ ] Soft deletes considered (if applicable)\n\n## Caching\n- [ ] Cache invalidation strategy defined\n- [ ] TTL values appropriate\n- [ ] Cache warming considered\n- [ ] Cache hit/miss monitored\n\n## Data Privacy\n- [ ] PII identified and protected\n- [ ] Data anonymization where needed\n- [ ] Audit trail for sensitive data\n- [ ] Data deletion procedures defined\n","checklists/documentation.md":"# Documentation Checklist\n\n> Applies to ALL projects\n\n## Essential Docs\n- [ ] README with quick start\n- [ ] Installation instructions\n- [ ] Configuration options documented\n- [ ] Common use cases shown\n\n## Code Documentation\n- [ ] Public APIs documented\n- [ ] Complex logic explained\n- [ ] Architecture decisions recorded (ADRs)\n- [ ] Diagrams for complex flows\n\n## Operational Docs\n- [ ] Deployment process documented\n- [ ] Troubleshooting guide\n- [ ] Runbooks for common issues\n- [ ] Changelog maintained\n\n## API Documentation\n- [ ] All endpoints documented\n- [ ] Request/response examples\n- [ ] Error codes explained\n- [ ] Authentication documented\n\n## Maintenance\n- [ ] Docs updated with code changes\n- [ ] Version-specific documentation\n- [ ] Broken links checked\n- [ ] Examples tested and working\n","checklists/infrastructure.md":"# Infrastructure Checklist\n\n> Applies to: Cloud, On-prem, Hybrid, Edge\n\n## Deployment\n- [ ] Infrastructure as Code (Terraform, Pulumi, CloudFormation, etc.)\n- [ ] Reproducible environments\n- [ ] Rollback strategy defined\n- [ ] Blue-green or canary deployment option\n\n## Observability\n- [ ] Logging strategy defined\n- [ ] Metrics collection configured\n- [ ] Alerting thresholds set\n- [ ] 
Distributed tracing (if applicable)\n\n## Security\n- [ ] Secrets management (not in code)\n- [ ] Network segmentation\n- [ ] Least privilege access\n- [ ] Encryption at rest and in transit\n\n## Reliability\n- [ ] Backup strategy defined\n- [ ] Disaster recovery plan\n- [ ] Health checks configured\n- [ ] Auto-scaling rules (if applicable)\n\n## Cost Management\n- [ ] Resource sizing appropriate\n- [ ] Unused resources identified\n- [ ] Cost monitoring in place\n- [ ] Budget alerts configured\n","checklists/performance.md":"# Performance Checklist\n\n> Applies to: Backend, Frontend, Mobile, Database\n\n## Analysis\n- [ ] Bottlenecks identified with profiling\n- [ ] Baseline metrics established\n- [ ] Performance budgets defined\n- [ ] Benchmarks before/after changes\n\n## Optimization Strategies\n- [ ] Algorithmic complexity reviewed (O(n) vs O(n²))\n- [ ] Appropriate data structures used\n- [ ] Caching implemented where beneficial\n- [ ] Lazy loading for expensive operations\n\n## Resource Management\n- [ ] Memory usage optimized\n- [ ] Connection pooling used\n- [ ] Batch operations where applicable\n- [ ] Async/parallel processing considered\n\n## Frontend Specific\n- [ ] Bundle size optimized\n- [ ] Images optimized\n- [ ] Critical rendering path optimized\n- [ ] Network requests minimized\n\n## Backend Specific\n- [ ] Database queries optimized\n- [ ] Response compression enabled\n- [ ] Proper indexing in place\n- [ ] Connection limits configured\n","checklists/security.md":"# Security Checklist\n\n> ALWAYS ON - Applies to ALL applications\n\n## Input/Output\n- [ ] All user input validated and sanitized\n- [ ] Output encoded appropriately (prevent injection)\n- [ ] File uploads restricted and scanned\n- [ ] No sensitive data in logs or error messages\n\n## Authentication & Authorization\n- [ ] Strong authentication mechanism\n- [ ] Proper session management\n- [ ] Authorization checked at every access point\n- [ ] Principle of least privilege applied\n\n## 
Data Protection\n- [ ] Sensitive data encrypted at rest\n- [ ] Secure transmission (TLS/HTTPS)\n- [ ] PII handled according to regulations\n- [ ] Data retention policies followed\n\n## Dependencies\n- [ ] Dependencies from trusted sources\n- [ ] Known vulnerabilities checked\n- [ ] Minimal dependency surface\n- [ ] Regular security updates planned\n\n## API Security\n- [ ] Rate limiting implemented\n- [ ] Authentication required for sensitive endpoints\n- [ ] CORS properly configured\n- [ ] API keys/tokens secured\n","checklists/testing.md":"# Testing Checklist\n\n> Applies to: Unit, Integration, E2E, Performance testing\n\n## Coverage Strategy\n- [ ] Critical paths have high coverage\n- [ ] Happy path tested\n- [ ] Error paths tested\n- [ ] Edge cases covered\n\n## Test Quality\n- [ ] Tests are deterministic (no flaky tests)\n- [ ] Tests are independent (no order dependency)\n- [ ] Tests are fast (optimize slow tests)\n- [ ] Tests are readable (clear intent)\n\n## Test Types\n- [ ] Unit tests for business logic\n- [ ] Integration tests for boundaries\n- [ ] E2E tests for critical flows\n- [ ] Performance tests for bottlenecks\n\n## Mocking Strategy\n- [ ] External services mocked\n- [ ] Database isolated or mocked\n- [ ] Time-dependent code controlled\n- [ ] Random values seeded\n\n## Test Maintenance\n- [ ] Tests updated with code changes\n- [ ] Dead tests removed\n- [ ] Test data managed properly\n- [ ] CI/CD integration working\n","checklists/ux-ui.md":"# UX/UI Checklist\n\n> Applies to: Web, Mobile, CLI, Desktop, API DX\n\n## User Experience\n- [ ] Clear user journey/flow\n- [ ] Feedback for every action\n- [ ] Loading states shown\n- [ ] Error states handled gracefully\n- [ ] Success confirmation provided\n\n## Interface Design\n- [ ] Consistent visual language\n- [ ] Intuitive navigation\n- [ ] Responsive/adaptive layout (if applicable)\n- [ ] Touch targets adequate (mobile)\n- [ ] Keyboard navigation (web/desktop)\n\n## CLI Specific\n- [ ] Help text for all 
commands\n- [ ] Clear error messages with suggestions\n- [ ] Progress indicators for long operations\n- [ ] Consistent flag naming conventions\n- [ ] Exit codes meaningful\n\n## API DX (Developer Experience)\n- [ ] Intuitive endpoint/function naming\n- [ ] Consistent response format\n- [ ] Helpful error messages with codes\n- [ ] Good documentation with examples\n- [ ] Predictable behavior\n\n## Information Architecture\n- [ ] Content hierarchy clear\n- [ ] Important actions prominent\n- [ ] Related items grouped\n- [ ] Search/filter for large datasets\n","codex/SKILL.md":"---\nname: prjct\ndescription: Use when user mentions p., prjct, task tracking, or workflow commands.\n---\n\n# prjct — Context layer for AI agents\n\nGrammar: `p. <command> [args]` or `prjct <command> --md`\n\nCore commands: sync, task, done, ship, pause, resume, next, bug, workflow, tokens\nIntegrations: linear, jira, enrich\nOther: run `prjct <command> --md` and follow CLI output\n\nFlow: idea → roadmap → next → task → done → ship → next (cycle until plan complete)\n\nRules:\n- prjct runs → LLM generates relevant data → prjct stores it → LLM requests it from prjct → LLM uses it\n- All commits include footer: `Generated with [p/](https://www.prjct.app/)`\n- All storage through `prjct` CLI (SQLite internally)\n- Start code tasks with `p. 
task` and follow Context Contract from CLI output\n","config/skill-mappings.json":"{\n \"version\": \"3.0.0\",\n \"description\": \"Skill packages from skills.sh for auto-installation during sync\",\n \"sources\": {\n \"primary\": {\n \"name\": \"skills.sh\",\n \"url\": \"https://skills.sh\",\n \"installCmd\": \"npx skills add {package}\"\n },\n \"fallback\": {\n \"name\": \"GitHub direct\",\n \"installFormat\": \"owner/repo\"\n }\n },\n \"skillsDirectory\": \"~/.claude/skills/\",\n \"skillFormat\": {\n \"required\": [\"name\", \"description\"],\n \"optional\": [\"license\", \"compatibility\", \"metadata\", \"allowed-tools\"],\n \"fileStructure\": {\n \"required\": \"SKILL.md\",\n \"optional\": [\"scripts/\", \"references/\", \"assets/\"]\n }\n },\n \"agentToSkillMap\": {\n \"frontend\": {\n \"packages\": [\n \"anthropics/skills/frontend-design\",\n \"vercel-labs/agent-skills/vercel-react-best-practices\"\n ]\n },\n \"uxui\": {\n \"packages\": [\"anthropics/skills/frontend-design\"]\n },\n \"backend\": {\n \"packages\": [\"obra/superpowers/systematic-debugging\"]\n },\n \"database\": {\n \"packages\": []\n },\n \"testing\": {\n \"packages\": [\"obra/superpowers/test-driven-development\", \"anthropics/skills/webapp-testing\"]\n },\n \"devops\": {\n \"packages\": [\"anthropics/skills/mcp-builder\"]\n },\n \"prjct-planner\": {\n \"packages\": [\"obra/superpowers/brainstorming\"]\n },\n \"prjct-shipper\": {\n \"packages\": []\n },\n \"prjct-workflow\": {\n \"packages\": []\n }\n },\n \"documentSkills\": {\n \"note\": \"Official Anthropic document creation skills\",\n \"source\": \"anthropics/skills\",\n \"skills\": {\n \"pdf\": {\n \"name\": \"pdf\",\n \"description\": \"Create and edit PDF documents\",\n \"path\": \"skills/pdf\"\n },\n \"docx\": {\n \"name\": \"docx\",\n \"description\": \"Create and edit Word documents\",\n \"path\": \"skills/docx\"\n },\n \"pptx\": {\n \"name\": \"pptx\",\n \"description\": \"Create PowerPoint presentations\",\n \"path\": 
\"skills/pptx\"\n },\n \"xlsx\": {\n \"name\": \"xlsx\",\n \"description\": \"Create Excel spreadsheets\",\n \"path\": \"skills/xlsx\"\n }\n }\n }\n}\n","context/dashboard.md":"---\ndescription: 'Template for generated dashboard context'\ngenerated-by: 'p. dashboard'\ndata-source: 'prjct.db (SQLite)'\n---\n\n# Dashboard Context Template\n\nThis template defines the format for `{globalPath}/context/dashboard.md` generated by `p. dashboard`.\n\n---\n\n## Template\n\n```markdown\n# Dashboard\n\n**Project:** {projectName}\n**Generated:** {timestamp}\n\n---\n\n## Health Score\n\n**Overall:** {healthScore}/100\n\n| Component | Score | Weight | Contribution |\n|-----------|-------|--------|--------------|\n| Roadmap Progress | {roadmapScore}/100 | 25% | {roadmapContribution} |\n| Estimation Accuracy | {estimationScore}/100 | 25% | {estimationContribution} |\n| Success Rate | {successScore}/100 | 25% | {successContribution} |\n| Velocity Trend | {velocityScore}/100 | 25% | {velocityContribution} |\n\n---\n\n## Quick Stats\n\n| Metric | Value | Trend |\n|--------|-------|-------|\n| Features Shipped | {shippedCount} | {shippedTrend} |\n| PRDs Created | {prdCount} | {prdTrend} |\n| Avg Cycle Time | {avgCycleTime}d | {cycleTrend} |\n| Estimation Accuracy | {estimationAccuracy}% | {accuracyTrend} |\n| Success Rate | {successRate}% | {successTrend} |\n| ROI Score | {avgROI} | {roiTrend} |\n\n---\n\n## Active Quarter: {activeQuarter.id}\n\n**Theme:** {activeQuarter.theme}\n**Status:** {activeQuarter.status}\n\n### Progress\n\n```\nFeatures: {featureBar} {quarterFeatureProgress}%\nCapacity: {capacityBar} {capacityUtilization}%\nTimeline: {timelineBar} {timelineProgress}%\n```\n\n### Features\n\n| Feature | Status | Progress | Owner |\n|---------|--------|----------|-------|\n{FOR EACH feature in quarterFeatures:}\n| {feature.name} | {statusEmoji(feature.status)} | {feature.progress}% | {feature.agent || '-'} |\n{END FOR}\n\n---\n\n## Current Work\n\n### Active Task\n{IF 
currentTask:}\n**{currentTask.description}**\n\n- Type: {currentTask.type}\n- Started: {currentTask.startedAt}\n- Elapsed: {elapsed}\n- Branch: {currentTask.branch?.name || 'N/A'}\n\nSubtasks: {completedSubtasks}/{totalSubtasks}\n{ELSE:}\n*No active task*\n{END IF}\n\n### In Progress Features\n\n{FOR EACH feature in activeFeatures:}\n#### {feature.name}\n\n- Progress: {progressBar(feature.progress)} {feature.progress}%\n- Quarter: {feature.quarter || 'Unassigned'}\n- PRD: {feature.prdId || 'None'}\n- Dependencies: {feature.dependencies?.join(', ') || 'None'}\n{END FOR}\n\n---\n\n## Pipeline\n\n```\nPRDs Features Active Shipped\n┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐\n│ Draft │──▶│ Planned │──▶│ Active │──▶│ Shipped │\n│ ({draft}) │ │ ({planned}) │ │ ({active}) │ │ ({shipped}) │\n└─────────┘ └─────────┘ └─────────┘ └─────────┘\n │\n ▼\n┌─────────┐\n│Approved │\n│ ({approved}) │\n└─────────┘\n```\n\n---\n\n## Metrics Trends (Last 4 Weeks)\n\n### Velocity\n```\nW-3: {velocityW3Bar} {velocityW3}\nW-2: {velocityW2Bar} {velocityW2}\nW-1: {velocityW1Bar} {velocityW1}\nW-0: {velocityW0Bar} {velocityW0}\n```\n\n### Estimation Accuracy\n```\nW-3: {accuracyW3Bar} {accuracyW3}%\nW-2: {accuracyW2Bar} {accuracyW2}%\nW-1: {accuracyW1Bar} {accuracyW1}%\nW-0: {accuracyW0Bar} {accuracyW0}%\n```\n\n---\n\n## Alerts & Actions\n\n### Warnings\n{FOR EACH alert in alerts:}\n- {alert.icon} {alert.message}\n{END FOR}\n\n### Suggested Actions\n{FOR EACH action in suggestedActions:}\n1. 
{action.description}\n - Command: `{action.command}`\n{END FOR}\n\n---\n\n## Recent Activity\n\n| Date | Action | Details |\n|------|--------|---------|\n{FOR EACH event in recentEvents.slice(0, 10):}\n| {event.date} | {event.action} | {event.details} |\n{END FOR}\n\n---\n\n## Learnings Summary\n\n### Top Patterns\n{FOR EACH pattern in topPatterns.slice(0, 5):}\n- {pattern.insight} ({pattern.frequency}x)\n{END FOR}\n\n### Improvement Areas\n{FOR EACH area in improvementAreas:}\n- **{area.name}**: {area.suggestion}\n{END FOR}\n\n---\n\n*Generated by prjct-cli | https://prjct.app*\n```\n\n---\n\n## Health Score Calculation\n\n```javascript\nconst healthScore = Math.round(\n (roadmapProgress * 0.25) +\n (estimationAccuracy * 0.25) +\n (successRate * 0.25) +\n (normalizedVelocity * 0.25)\n)\n```\n\n| Score Range | Health Level | Color |\n|-------------|--------------|-------|\n| 80-100 | Excellent | Green |\n| 60-79 | Good | Blue |\n| 40-59 | Needs Attention | Yellow |\n| 0-39 | Critical | Red |\n\n---\n\n## Alert Definitions\n\n| Condition | Alert | Severity |\n|-----------|-------|----------|\n| `capacityUtilization > 90` | Quarter capacity nearly full | Warning |\n| `estimationAccuracy < 60` | Estimation accuracy below target | Warning |\n| `activeFeatures.length > 3` | Too many features in progress | Info |\n| `draftPRDs.length > 3` | PRDs awaiting review | Info |\n| `successRate < 70` | Success rate declining | Warning |\n| `velocityTrend < -20` | Velocity dropping | Warning |\n| `currentTask && elapsed > 4h` | Task running long | Info |\n\n---\n\n## Suggested Actions Matrix\n\n| Condition | Suggested Action | Command |\n|-----------|------------------|---------|\n| No active task | Start a task | `p. task` |\n| PRDs in draft | Review PRDs | `p. prd list` |\n| Features pending review | Record impact | `p. impact` |\n| Quarter ending soon | Plan next quarter | `p. plan quarter` |\n| Low estimation accuracy | Analyze estimates | `p. 
dashboard estimates` |\n\n---\n\n## Integration with Linear/Jira/Monday\n\nThe dashboard context maps to PM tool dashboards:\n\n| Dashboard Section | Linear | Jira | Monday |\n|-------------------|--------|------|--------|\n| Health Score | Project Health | Dashboard Gadget | Board Overview |\n| Active Quarter | Cycle | Sprint | Timeline |\n| Pipeline | Workflow Board | Kanban | Board |\n| Velocity | Velocity Chart | Velocity Report | Chart Widget |\n| Alerts | Notifications | Issues | Notifications |\n\n---\n\n## Refresh Frequency\n\n| Data Type | Refresh Trigger |\n|-----------|-----------------|\n| Current Task | Real-time (on state change) |\n| Features | On feature status change |\n| Metrics | On `p. dashboard` execution |\n| Aggregates | On `p. impact` completion |\n| Alerts | Calculated on view |\n","context/roadmap.md":"---\ndescription: 'Template for generated roadmap context'\ngenerated-by: 'p. plan, p. sync'\ndata-source: 'prjct.db (SQLite)'\n---\n\n# Roadmap Context Template\n\nThis template defines the format for `{globalPath}/context/roadmap.md` generated by:\n- `p. plan` - After quarter planning\n- `p. 
sync` - After roadmap generation from git\n\n---\n\n## Template\n\n```markdown\n# Roadmap\n\n**Last Updated:** {lastUpdated}\n\n---\n\n## Strategy\n\n**Goal:** {strategy.goal}\n\n### Phases\n{FOR EACH phase in strategy.phases:}\n- **{phase.id}**: {phase.name} ({phase.status})\n{END FOR}\n\n### Success Metrics\n{FOR EACH metric in strategy.successMetrics:}\n- {metric}\n{END FOR}\n\n---\n\n## Quarters\n\n{FOR EACH quarter in quarters:}\n### {quarter.id}: {quarter.name}\n\n**Status:** {quarter.status}\n**Theme:** {quarter.theme}\n**Capacity:** {capacity.allocatedHours}/{capacity.totalHours}h ({utilization}%)\n\n#### Goals\n{FOR EACH goal in quarter.goals:}\n- {goal}\n{END FOR}\n\n#### Features\n{FOR EACH featureId in quarter.features:}\n- [{status icon}] **{feature.name}** ({feature.status}, {feature.progress}%)\n - PRD: {feature.prdId || 'None (legacy)'}\n - Estimated: {feature.effortTracking?.estimated?.hours || '?'}h\n - Value Score: {feature.valueScore || 'N/A'}\n - Dependencies: {feature.dependencies?.join(', ') || 'None'}\n{END FOR}\n\n{END FOR}\n\n---\n\n## Active Work\n\n{FOR EACH feature WHERE status == 'active':}\n### {feature.name}\n\n| Attribute | Value |\n|-----------|-------|\n| Progress | {feature.progress}% |\n| Branch | {feature.branch || 'N/A'} |\n| Quarter | {feature.quarter || 'Unassigned'} |\n| PRD | {feature.prdId || 'Legacy (no PRD)'} |\n| Started | {feature.createdAt} |\n\n#### Tasks\n{FOR EACH task in feature.tasks:}\n- [{task.completed ? 
'x' : ' '}] {task.description}\n{END FOR}\n\n{END FOR}\n\n---\n\n## Completed Features\n\n{FOR EACH feature WHERE status == 'completed' OR status == 'shipped':}\n- **{feature.name}** (v{feature.version || 'N/A'})\n - Shipped: {feature.shippedAt || feature.completedDate}\n - Actual: {feature.effortTracking?.actual?.hours || '?'}h vs Est: {feature.effortTracking?.estimated?.hours || '?'}h\n{END FOR}\n\n---\n\n## Backlog\n\nPriority-ordered list of unscheduled items:\n\n| Priority | Item | Value | Effort | Score |\n|----------|------|-------|--------|-------|\n{FOR EACH item in backlog:}\n| {rank} | {item.title} | {item.valueScore} | {item.effortEstimate}h | {priorityScore} |\n{END FOR}\n\n---\n\n## Legacy Features\n\nFeatures detected from git history (no PRD required):\n\n{FOR EACH feature WHERE legacy == true:}\n- **{feature.name}**\n - Inferred From: {feature.inferredFrom}\n - Status: {feature.status}\n - Commits: {feature.commits?.length || 0}\n{END FOR}\n\n---\n\n## Dependencies\n\n```\n{FOR EACH feature WHERE dependencies?.length > 0:}\n{feature.name}\n{FOR EACH depId in feature.dependencies:}\n └── {dependency.name}\n{END FOR}\n{END FOR}\n```\n\n---\n\n## Metrics Summary\n\n| Metric | Value |\n|--------|-------|\n| Total Features | {features.length} |\n| Planned | {planned.length} |\n| Active | {active.length} |\n| Completed | {completed.length} |\n| Shipped | {shipped.length} |\n| Legacy | {legacy.length} |\n| PRD-Backed | {prdBacked.length} |\n| Backlog | {backlog.length} |\n\n### Capacity by Quarter\n\n| Quarter | Allocated | Total | Utilization |\n|---------|-----------|-------|-------------|\n{FOR EACH quarter in quarters:}\n| {quarter.id} | {capacity.allocatedHours}h | {capacity.totalHours}h | {utilization}% |\n{END FOR}\n\n### Effort Accuracy (Shipped Features)\n\n| Feature | Estimated | Actual | Variance |\n|---------|-----------|--------|----------|\n{FOR EACH feature WHERE status == 'shipped' AND effortTracking:}\n| {feature.name} | 
{estimated.hours}h | {actual.hours}h | {variance}% |\n{END FOR}\n\n**Average Variance:** {averageVariance}%\n\n---\n\n*Generated by prjct-cli | https://prjct.app*\n```\n\n---\n\n## Status Icons\n\n| Status | Icon |\n|--------|------|\n| planned | [ ] |\n| active | [~] |\n| completed | [x] |\n| shipped | [+] |\n\n---\n\n## Variable Reference\n\n| Variable | Source | Description |\n|----------|--------|-------------|\n| `lastUpdated` | roadmap.lastUpdated | ISO timestamp |\n| `strategy` | roadmap.strategy | Strategy object |\n| `quarters` | roadmap.quarters | Array of quarters |\n| `features` | roadmap.features | Array of features |\n| `backlog` | roadmap.backlog | Array of backlog items |\n| `utilization` | Calculated | (allocated/total) * 100 |\n| `priorityScore` | Calculated | valueScore / (effort/10) |\n\n---\n\n## Generation Rules\n\n1. **Quarters** - Show only `planned` and `active` quarters by default\n2. **Features** - Group by status (active first, then planned)\n3. **Backlog** - Sort by priority score (descending)\n4. **Legacy** - Always show separately to distinguish from PRD-backed\n5. **Dependencies** - Only show features with dependencies\n6. **Metrics** - Always include for dashboard views\n\n---\n\n## Integration with Linear/Jira/Monday\n\nThe context file maps to PM tool exports:\n\n| Context Section | Linear | Jira | Monday |\n|-----------------|--------|------|--------|\n| Quarters | Cycles | Sprints | Timelines |\n| Features | Issues | Stories | Items |\n| Backlog | Backlog | Backlog | Inbox |\n| Status | State | Status | Status |\n| Capacity | Estimates | Story Points | Time |\n","cursor/commands/ship.md":"# /ship - Ship feature\n\n**ARGUMENTS**: {{args}}\n\nRun: `prjct ship {{args}} --md`\nFollow CLI output.\n","cursor/commands/task.md":"# /task - Start a task\n\n**ARGUMENTS**: {{args}}\n\nRun: `prjct task {{args}} --md`\nFollow CLI output.\n","cursor/p.md":"# p. 
Command Router for Cursor IDE\n\n**ARGUMENTS**: {{args}}\n\nRun: `prjct {{first_word_of_args}} {{rest_of_args}} --md`\nFollow CLI output.\n","cursor/router.mdc":"---\ndescription: \"prjct - Context layer for AI coding agents\"\nalwaysApply: true\n---\n\n# prjct\n\nCore: /sync, /task, /done, /ship, /pause, /resume, /next, /bug, /workflow\nOther: run `prjct <command> --md` and follow CLI output\n","global/ANTIGRAVITY.md":"<!-- prjct:start - DO NOT REMOVE THIS MARKER -->\n# p/ — Context layer for AI agents\n\nSkills auto-activate for: task, done, pause, resume, ship, next, sync, bug, workflow, enrich, linear, jira, plan, velocity, tokens\nOther commands: run `prjct <command> --md` and follow CLI output\n\nFlow: idea → roadmap → next → task → done → ship → next (cycle until plan complete)\n\nData:\n- prjct runs → LLM generates relevant data → prjct stores it → LLM requests it from prjct → LLM uses it\n- Commit footer: `Generated with [p/](https://www.prjct.app/)`\n- Path resolution: `.prjct/prjct.config.json` → `~/.prjct-cli/projects/{projectId}`\n- Storage: `prjct` CLI (SQLite internally)\n\n**Auto-managed by prjct-cli** | https://prjct.app\n<!-- prjct:end - DO NOT REMOVE THIS MARKER -->\n","global/CURSOR.mdc":"---\ndescription: \"prjct - Context layer for AI coding agents\"\nalwaysApply: true\n---\n\n<!-- prjct:start - DO NOT REMOVE THIS MARKER -->\n# p/ — Context layer for AI agents\n\nSkills auto-activate for: task, done, pause, resume, ship, next, sync, bug, workflow, enrich, linear, jira, plan, velocity, tokens\nOther commands: run `prjct <command> --md` and follow CLI output\n\nFlow: idea → roadmap → next → task → done → ship → next (cycle until plan complete)\n\nData:\n- prjct runs → LLM generates relevant data → prjct stores it → LLM requests it from prjct → LLM uses it\n- Commit footer: `Generated with [p/](https://www.prjct.app/)`\n- Path resolution: `.prjct/prjct.config.json` → `~/.prjct-cli/projects/{projectId}`\n- Storage: `prjct` CLI (SQLite 
internally)\n\n**Auto-managed by prjct-cli** | https://prjct.app\n<!-- prjct:end - DO NOT REMOVE THIS MARKER -->\n","global/GEMINI.md":"<!-- prjct:start - DO NOT REMOVE THIS MARKER -->\n# p/ — Context layer for AI agents\n\nSkills auto-activate for: task, done, pause, resume, ship, next, sync, bug, workflow, enrich, linear, jira, plan, velocity, tokens\nOther commands: run `prjct <command> --md` and follow CLI output\n\nFlow: idea → roadmap → next → task → done → ship → next (cycle until plan complete)\n\nData:\n- prjct runs → LLM generates relevant data → prjct stores it → LLM requests it from prjct → LLM uses it\n- Commit footer: `Generated with [p/](https://www.prjct.app/)`\n- Path resolution: `.prjct/prjct.config.json` → `~/.prjct-cli/projects/{projectId}`\n- Storage: `prjct` CLI (SQLite internally)\n\n**Auto-managed by prjct-cli** | https://prjct.app\n<!-- prjct:end - DO NOT REMOVE THIS MARKER -->\n","global/STORAGE-SPEC.md":"# Storage Specification\n\n**Canonical specification for prjct storage format.**\n\nAll storage is managed by the `prjct` CLI which uses SQLite (`prjct.db`) internally. **NEVER read or write JSON storage files directly. 
Use `prjct` CLI commands for all storage operations.**\n\n---\n\n## Current Storage: SQLite (prjct.db)\n\nAll reads and writes go through the `prjct` CLI, which manages a SQLite database (`prjct.db`) with WAL mode for safe concurrent access.\n\n```\n~/.prjct-cli/projects/{projectId}/\n├── prjct.db # SQLite database (SOURCE OF TRUTH for all storage)\n├── context/\n│ ├── now.md # Current task (generated from prjct.db)\n│ └── next.md # Queue (generated from prjct.db)\n├── config/\n│ └── skills.json # Agent-to-skill mappings\n├── agents/ # Domain specialists (auto-generated)\n└── sync/\n └── pending.json # Events for backend sync\n```\n\n### How to interact with storage\n\n- **Read state**: Use `prjct status`, `prjct dash`, `prjct next` CLI commands\n- **Write state**: Use `prjct` CLI commands (task, done, pause, resume, etc.)\n- **Issue tracker setup**: Use `prjct linear setup` or `prjct jira setup` (MCP/OAuth)\n- **Never** read/write JSON files in `storage/` or `memory/` directories\n\n---\n\n## LEGACY JSON Schemas (for reference only)\n\n> **WARNING**: These JSON schemas are LEGACY documentation only. The `storage/` and `memory/` directories are no longer used. All data lives in `prjct.db` (SQLite). 
Do NOT read or write these files.\n\n### state.json (LEGACY)\n\n```json\n{\n \"task\": {\n \"id\": \"uuid-v4\",\n \"title\": \"string\",\n \"type\": \"feature|bug|improvement|refactor|chore\",\n \"status\": \"active|paused|done\",\n \"branch\": \"string|null\",\n \"subtasks\": [\n {\n \"id\": \"uuid-v4\",\n \"title\": \"string\",\n \"status\": \"pending|done\"\n }\n ],\n \"currentSubtask\": 0,\n \"createdAt\": \"2024-01-15T10:30:00.000Z\",\n \"updatedAt\": \"2024-01-15T10:30:00.000Z\"\n }\n}\n```\n\n**Empty state (no active task):**\n```json\n{\n \"task\": null\n}\n```\n\n### queue.json (LEGACY)\n\n```json\n{\n \"tasks\": [\n {\n \"id\": \"uuid-v4\",\n \"title\": \"string\",\n \"type\": \"feature|bug|improvement|refactor|chore\",\n \"priority\": 1,\n \"createdAt\": \"2024-01-15T10:30:00.000Z\"\n }\n ],\n \"updatedAt\": \"2024-01-15T10:30:00.000Z\"\n}\n```\n\n### shipped.json (LEGACY)\n\n```json\n{\n \"features\": [\n {\n \"id\": \"uuid-v4\",\n \"name\": \"string\",\n \"version\": \"1.0.0\",\n \"type\": \"feature|bug|improvement|refactor|chore\",\n \"shippedAt\": \"2024-01-15T10:30:00.000Z\"\n }\n ],\n \"updatedAt\": \"2024-01-15T10:30:00.000Z\"\n}\n```\n\n### events.jsonl (LEGACY - now stored in SQLite `events` table)\n\nPreviously append-only JSONL. 
Now stored in SQLite.\n\n```jsonl\n{\"type\":\"task.created\",\"timestamp\":\"2024-01-15T10:30:00.000Z\",\"data\":{\"taskId\":\"uuid\",\"title\":\"string\"}}\n{\"type\":\"task.started\",\"timestamp\":\"2024-01-15T10:30:00.000Z\",\"data\":{\"taskId\":\"uuid\"}}\n{\"type\":\"subtask.completed\",\"timestamp\":\"2024-01-15T10:35:00.000Z\",\"data\":{\"taskId\":\"uuid\",\"subtaskIndex\":0}}\n{\"type\":\"task.completed\",\"timestamp\":\"2024-01-15T10:40:00.000Z\",\"data\":{\"taskId\":\"uuid\"}}\n{\"type\":\"feature.shipped\",\"timestamp\":\"2024-01-15T10:45:00.000Z\",\"data\":{\"featureId\":\"uuid\",\"name\":\"string\",\"version\":\"1.0.0\"}}\n```\n\n**Event Types:**\n- `task.created` - New task created\n- `task.started` - Task activated\n- `task.paused` - Task paused\n- `task.resumed` - Task resumed\n- `task.completed` - Task completed\n- `subtask.completed` - Subtask completed\n- `feature.shipped` - Feature shipped\n\n### learnings.jsonl (LEGACY - now stored in SQLite)\n\nPreviously used for LLM-to-LLM knowledge transfer. 
Now stored in SQLite.\n\n```jsonl\n{\"taskId\":\"uuid\",\"linearId\":\"PRJ-123\",\"timestamp\":\"2024-01-15T10:40:00.000Z\",\"learnings\":{\"patterns\":[\"Use NestedContextResolver for hierarchical discovery\"],\"approaches\":[\"Mirror existing method structure when extending\"],\"decisions\":[\"Extended class rather than wrapper for consistency\"],\"gotchas\":[\"Must handle null parent case\"]},\"value\":{\"type\":\"feature\",\"impact\":\"high\",\"description\":\"Hierarchical AGENTS.md support for monorepos\"},\"filesChanged\":[\"core/resolver.ts\",\"core/types.ts\"],\"tags\":[\"agents\",\"hierarchy\",\"monorepo\"]}\n```\n\n**Schema:**\n```json\n{\n \"taskId\": \"uuid-v4\",\n \"linearId\": \"string|null\",\n \"timestamp\": \"2024-01-15T10:40:00.000Z\",\n \"learnings\": {\n \"patterns\": [\"string\"],\n \"approaches\": [\"string\"],\n \"decisions\": [\"string\"],\n \"gotchas\": [\"string\"]\n },\n \"value\": {\n \"type\": \"feature|bugfix|performance|dx|refactor|infrastructure\",\n \"impact\": \"high|medium|low\",\n \"description\": \"string\"\n },\n \"filesChanged\": [\"string\"],\n \"tags\": [\"string\"]\n}\n```\n\n**Why Local Cache**: Enables future semantic retrieval without API latency. 
Will feed into vector DB for cross-session knowledge transfer.\n\n### skills.json\n\n```json\n{\n \"mappings\": {\n \"frontend.md\": [\"frontend-design\"],\n \"backend.md\": [\"javascript-typescript\"],\n \"testing.md\": [\"developer-kit\"]\n },\n \"updatedAt\": \"2024-01-15T10:30:00.000Z\"\n}\n```\n\n### pending.json (sync queue)\n\n```json\n{\n \"events\": [\n {\n \"id\": \"uuid-v4\",\n \"type\": \"task.created\",\n \"timestamp\": \"2024-01-15T10:30:00.000Z\",\n \"data\": {},\n \"synced\": false\n }\n ],\n \"lastSync\": \"2024-01-15T10:30:00.000Z\"\n}\n```\n\n---\n\n## Formatting Rules (MANDATORY)\n\nAll agents MUST follow these rules for cross-agent compatibility:\n\n| Rule | Value |\n|------|-------|\n| JSON indentation | 2 spaces |\n| Trailing commas | NEVER |\n| Key ordering | Logical (as shown in schemas above) |\n| Timestamps | ISO-8601 with milliseconds (`.000Z`) |\n| UUIDs | v4 format (lowercase) |\n| Line endings | LF (not CRLF) |\n| File encoding | UTF-8 without BOM |\n| Empty objects | `{}` |\n| Empty arrays | `[]` |\n| Null values | `null` (lowercase) |\n\n### Timestamp Generation\n\n```bash\n# ALWAYS use dynamic timestamps, NEVER hardcode\nbun -e \"console.log(new Date().toISOString())\" 2>/dev/null || node -e \"console.log(new Date().toISOString())\"\n```\n\n### UUID Generation\n\n```bash\n# ALWAYS generate fresh UUIDs\nbun -e \"console.log(crypto.randomUUID())\" 2>/dev/null || node -e \"console.log(require('crypto').randomUUID())\"\n```\n\n---\n\n## Write Rules (CRITICAL)\n\n### Direct Writes Only\n\n**NEVER use temporary files** - Write directly to final destination:\n\n```\nWRONG: Create `.tmp/file.json`, then `mv` to final path\nCORRECT: Use prjctDb.setDoc() or StorageManager.write() to write to SQLite\n```\n\n### Atomic Updates\n\nAll writes go through SQLite which handles atomicity via WAL mode:\n```typescript\n// StorageManager pattern (preferred):\nawait stateStorage.update(projectId, (state) => {\n state.field = newValue\n return 
state\n})\n\n// Direct kv_store pattern:\nprjctDb.setDoc(projectId, 'key', data)\n```\n\n### NEVER Do These\n\n- Read or write JSON files in `storage/` or `memory/` directories\n- Use `.tmp/` directories\n- Use `mv` or `rename` operations for storage files\n- Create backup files like `*.bak` or `*.old`\n- Bypass `prjct` CLI to write directly to `prjct.db`\n\n---\n\n## Cross-Agent Compatibility\n\n### Why This Matters\n\n1. **User freedom**: Switch between Claude and Gemini freely\n2. **Remote sync**: Storage will sync to prjct.app backend\n3. **Single truth**: Both agents produce identical output\n\n### Verification Test\n\n```bash\n# Start task with Claude\np. task \"add feature X\"\n\n# Switch to Gemini, continue\np. done # Should work seamlessly\n\n# Switch back to Claude\np. ship # Should read Gemini's changes correctly\n\n# All agents read from the same prjct.db via CLI commands\nprjct status # Works from any agent\n```\n\n### Remote Sync Flow\n\n```\nLocal Storage: prjct.db (Claude/Gemini)\n ↓\n sync/pending.json (events queue)\n ↓\n prjct.app API\n ↓\n Global Remote Storage\n ↓\n Any device, any agent\n```\n\n---\n\n## MCP Issue Tracker Strategy\n\nIssue tracker integrations are MCP-only.\n\n### Rules\n\n- `prjct` CLI does not call Linear/Jira SDKs or REST APIs directly.\n- Issue operations (`sync`, `list`, `get`, `start`, `done`, `update`, etc.) are delegated to MCP tools in the AI client.\n- `p. 
sync` refreshes project context and agent artifacts, not issue tracker payloads.\n- Local storage keeps task linkage metadata (for example `linearId`) and project workflow state in SQLite.\n\n### Setup\n\n- `prjct linear setup`\n- `prjct jira setup`\n\n### Operational Model\n\n```\nAI client MCP tools <-> Linear/Jira\n |\n v\n prjct workflow state (prjct.db)\n```\n\nThe CLI remains the source of truth for local project/task state.\nIssue-system mutations happen through MCP operations in the active AI session.\n\n---\n\n**Version**: 2.0.0\n**Last Updated**: 2026-02-10\n","global/WINDSURF.md":"---\ntrigger: always_on\ndescription: \"prjct - Context layer for AI coding agents\"\n---\n\n<!-- prjct:start - DO NOT REMOVE THIS MARKER -->\n# p/ — Context layer for AI agents\n\nSkills auto-activate for: task, done, pause, resume, ship, next, sync, bug, workflow, enrich, linear, jira, plan, velocity, tokens\nOther commands: run `prjct <command> --md` and follow CLI output\n\nFlow: idea → roadmap → next → task → done → ship → next (cycle until plan complete)\n\nData:\n- prjct runs → LLM generates relevant data → prjct stores it → LLM requests it from prjct → LLM uses it\n- Commit footer: `Generated with [p/](https://www.prjct.app/)`\n- Path resolution: `.prjct/prjct.config.json` → `~/.prjct-cli/projects/{projectId}`\n- Storage: `prjct` CLI (SQLite internally)\n\n**Auto-managed by prjct-cli** | https://prjct.app\n<!-- prjct:end - DO NOT REMOVE THIS MARKER -->\n","harness/CHECKPOINTS.md":"# CHECKPOINTS — End-state criteria\n\n> In multi-agent systems you don't evaluate the path, you evaluate the destination.\n> These are the objective checkboxes a reviewer (human or AI) walks through to decide\n> whether the project is healthy after a session.\n\n> **Customise this file for your project.** The defaults below cover the generic\n> harness invariants. 
Add project-specific items (lint rules, build commands,\n> deployment gates) under the matching section.\n\n## C1 — The harness is wired\n\n- [ ] `.prjct/prjct.config.json` exists and points to a valid project ID.\n- [ ] `.prjct/CHECKPOINTS.md` exists (this file).\n- [ ] `.claude/agents/leader.md`, `implementer.md`, `reviewer.md` are present.\n- [ ] Project `CLAUDE.md` (or equivalent) contains the harness leader-mode block.\n\n## C2 — State is coherent\n\n- [ ] At most **one** task is in `active` status (`prjct status --md`).\n- [ ] No task in `active` is older than the current session without a captured blocker note.\n- [ ] Every task marked `done` has at least one paired test file (or a justified exception in the implementer report).\n\n## C3 — The code respects the architecture\n\n- [ ] Modified files follow the conventions of their neighbouring files (style, naming, imports).\n- [ ] No new runtime dependencies were added without a `prjct capture --tags dep-add` note.\n- [ ] No debug noise: no `console.log` / `print()` / `dbg!` left in source.\n- [ ] No `TODO` without a captured follow-up in `prjct capture`.\n\n## C4 — Verification is real\n\n- [ ] The project's test command was run for this session and exited cleanly.\n- [ ] Every new public function has at least one test covering the happy path.\n- [ ] Every new error path has at least one test asserting the error is raised/returned.\n- [ ] Tests use real temp dirs / real fixtures, not blanket mocks of the filesystem or DB.\n\n## C5 — The session closed cleanly\n\n- [ ] No untracked junk in the worktree (`*.tmp`, scratch logs, accidental binaries).\n- [ ] The implementer's report exists at `.prjct/sessions/<task-slug>/impl.md`.\n- [ ] The reviewer's verdict exists at `.prjct/sessions/<task-slug>/review.md` and is `APPROVED`.\n- [ ] The task's status was advanced (`prjct status done` for completed work, `prjct status paused` if intentionally paused).\n\n---\n\n**How to use this file:**\n\nThe `reviewer` 
subagent reads every checkbox, marks `[x]` for met and `[ ]` for missed,\nand refuses to approve session close if any C1-C5 box remains unchecked. Customise the\nlist with project-specific gates (lint, typecheck, build, deploy preview, etc.).\n","harness/CLAUDE-leader-mode.md":"<!-- prjct:harness:start - DO NOT REMOVE THIS MARKER -->\n## Harness leader mode\n\nThis project is in **harness mode**. The main session always acts as the `leader` subagent (see `.claude/agents/leader.md`). The leader **decomposes and coordinates** — it does not implement.\n\n### Hard rules for the main session\n\n- ❌ Do not edit application source or test files directly (no Edit, no Write, no Bash that writes to those paths).\n- ❌ Do not run `prjct status done` yourself — the implementer does that, but only after the reviewer approves.\n- ✅ For any code task, launch the appropriate subagent via the `Agent` tool:\n - `subagent_type: \"implementer\"` → writes code and tests for one prjct task.\n - `subagent_type: \"reviewer\"` → validates the implementer's work against `.prjct/CHECKPOINTS.md` before close.\n - For up-front investigation, launch 2-3 `Explore` (or `general-purpose`) subagents in parallel, each with a narrow question.\n\n### Anti-broken-telephone\n\nWhen you launch any subagent, instruct it to **write its results to a file** (e.g. `.prjct/sessions/<task-slug>/<role>.md`) and reply to you with **only the path reference**. You read the file from disk if you need detail. Never accept full diffs or long outputs in chat.\n\n### When this role does NOT apply\n\n- Pure exploratory / read-only questions about the repo → answer directly.\n- Edits to `.prjct/`, docs, configuration, or this file → you may edit directly.\n<!-- prjct:harness:end - DO NOT REMOVE THIS MARKER -->\n","harness/agents/implementer.md":"---\nname: implementer\ndescription: Worker. Implements exactly ONE prjct task end-to-end. Writes code, writes tests, self-verifies. 
Never approves its own work.\ntools: Read, Write, Edit, Glob, Grep, Bash\n---\n\n# Implementer\n\nYou are an implementer. Your job is to take **one** prjct task from active to ready-for-review.\n\n## Protocol\n\n1. **Orient.** Read `.prjct/CHECKPOINTS.md` and run `prjct context --md` to understand task and recent decisions.\n2. **Confirm scope.** Run `prjct status --md` — there must be exactly one active task. If not, stop and report `blocked -> no single active task`.\n3. **Plan.** Write a 3-5 bullet plan to `.prjct/sessions/<task-slug>/impl.md` (create the directory). Include: files you will touch, tests you will add, verification command.\n4. **Implement.** Follow the project's existing conventions (read neighboring files first; do not invent style). Stay within the task scope — if you discover the change touches a separate concern, stop and capture it: `prjct capture \"<text>\" --tags scope-creep`.\n5. **Test.** Every code change is paired with a test before moving on. Use the project's existing test runner.\n6. **Self-verify.** Run the project's tests. If they fail, return to step 4. If they pass, run `prjct check --md` if available; otherwise note the verification command you ran in `.prjct/sessions/<task-slug>/impl.md`.\n7. **Do not mark `done`.** Append a final summary block to `.prjct/sessions/<task-slug>/impl.md` listing every file touched and the verification command output.\n8. **Hand off.** Reply to the leader with **one line**:\n\n ```\n done -> .prjct/sessions/<task-slug>/impl.md\n ```\n\n The leader will launch a `reviewer` next. Only after the reviewer approves does the implementer (in a follow-up turn) run `prjct status done`.\n\n## Hard rules\n\n- One task per session. 
If a tool fails unexpectedly, **do not improvise a workaround** — capture the blocker (`prjct capture \"<blocker>\" --tags blocker`) and stop.\n- Every code edit must be accompanied by its test before the next edit.\n- Never declare a task `done` without the reviewer's explicit `APPROVED`.\n- Never write debug `console.log` / `print` / scratch files into source. Clean up before handing off.\n\n## Anti-broken-telephone\n\nYour reply to the leader is **one line** with a file reference. Never paste diffs or large outputs into chat — write them to `.prjct/sessions/<task-slug>/impl.md`.\n","harness/agents/leader.md":"---\nname: leader\ndescription: Orchestrator. Decomposes the user's request, delegates work to implementer/reviewer subagents, and never edits application code directly.\ntools: Read, Glob, Grep, Bash, Agent\n---\n\n# Leader (Orchestrator)\n\nYou are the leader of this repository. Your only job is to **decompose and coordinate**, never to implement.\n\n## Boot protocol (run on first request of the session)\n\n1. Read `.prjct/CHECKPOINTS.md` to know what \"done\" looks like in this project.\n2. Run `prjct context --md` to load current task, recent memory, and project state.\n3. Run `prjct status --md` to confirm whether there is an active task.\n4. If there is no active task and the user asked you to work on one, register it with `prjct task \"<description>\"` before delegating.\n\n## How to break work down\n\nFor each request:\n\n1. Identify whether the work fits in **one** task or needs to be split.\n - If split, register subtasks with `prjct task` and tackle one at a time.\n2. Trivial change (1 file, no design surface) → 1 `implementer` subagent.\n3. Standard change (2-3 files) → 1 `implementer` then 1 `reviewer`.\n4. Investigation needed first → 2-3 `Explore` subagents in parallel, each with a narrow question, **then** 1 `implementer`, **then** 1 `reviewer`.\n5. 
Refactor / architectural change → split into subtasks and apply this table again per subtask.\n\n## Anti-broken-telephone rule\n\nWhen you launch subagents, instruct them to **write their results to a file** under `.prjct/sessions/<task-slug>/<role>.md` and return only that path. Never accept their full output in chat — read the file from disk if you need details.\n\nExample correct prompt to a subagent:\n\n> \"Investigate how `notes.py` serializes IDs. Write findings to `.prjct/sessions/cli-edit/explore_ids.md`. Reply to me with only `done -> .prjct/sessions/cli-edit/explore_ids.md` or a blocker message.\"\n\n## Effort scaling\n\n| Complexity | Subagents |\n|----------------------------|---------------------------------------------------|\n| Trivial (1 file) | 1 implementer |\n| Standard (2-3 files) | 1 implementer + 1 reviewer |\n| Refactor / cross-cutting | 2-3 explorers → 1 implementer → 1 reviewer |\n| Very complex | Split into prjct subtasks; recurse per subtask |\n\n## What you do NOT do\n\n- Do not edit files in the application's source/test directories directly.\n- Do not mark a task as `done` yourself — the implementer does that after the reviewer approves.\n- Do not accept subagent results delivered in chat without a file reference.\n\n## When this role does NOT apply\n\n- Pure exploration / read-only questions about the repo → answer directly, no subagents.\n- Edits to `.prjct/`, docs, configuration, or this file itself → you may edit directly.\n","harness/agents/reviewer.md":"---\nname: reviewer\ndescription: Strict reviewer. Approves or rejects an implementer's work against .prjct/CHECKPOINTS.md and project conventions. Never edits code.\ntools: Read, Glob, Grep, Bash\n---\n\n# Reviewer\n\nYou are a strict reviewer. Your only function is to **approve or reject** changes. You never edit code.\n\n## Protocol\n\n1. Read `.prjct/CHECKPOINTS.md` and the implementer's report at `.prjct/sessions/<task-slug>/impl.md`.\n2. 
Identify the modified files (use `git status --porcelain` and `git diff --stat`). Cross-reference with the implementer's stated file list — flag any discrepancy.\n3. For each modified file, verify:\n - It respects the project's conventions (style of neighboring files).\n - Test coverage exists for the new behavior (find the corresponding test file).\n - No debug noise was left behind (`console.log`, `print`, `TODO` without a captured note).\n4. Run the project's test command. Tests must pass — if any test is red, that is an automatic rejection.\n5. Walk every checkbox in `.prjct/CHECKPOINTS.md`. Mark `[x]` for items met, `[ ]` for items missed.\n6. Emit verdict.\n\n## Verdict format\n\nWrite your verdict to `.prjct/sessions/<task-slug>/review.md`:\n\n```markdown\n# Review — <task title>\n\n**Verdict:** APPROVED | CHANGES_REQUESTED\n\n## Checkpoints\n- C1: [x]\n- C2: [x]\n- C3: [ ] ← Reason: src/foo.ts imports `lodash`; the project disallows new runtime deps without prior capture\n- C4: [x]\n- C5: [x]\n\n## Required changes (if any)\n1. Remove `import lodash from 'lodash'` from src/foo.ts.\n2. ...\n```\n\nReply to the leader with **one line**:\n\n```\nAPPROVED -> .prjct/sessions/<task-slug>/review.md\n```\nor\n```\nCHANGES_REQUESTED -> .prjct/sessions/<task-slug>/review.md\n```\n\n## Hard rules\n\n- Never approve with red tests.\n- Never approve with empty checkboxes in C1-C5.\n- Never edit the implementer's code. Your job is to say what fails — not to fix it.\n- Be concrete: cite file paths and line numbers. 
No generic feedback.\n","mcp-config.json":"{\n \"mcpServers\": {\n \"context7\": {\n \"command\": \"npx\",\n \"args\": [\"-y\", \"@upstash/context7-mcp@latest\"],\n \"description\": \"Library documentation lookup\"\n },\n \"linear\": {\n \"command\": \"npx\",\n \"args\": [\"-y\", \"mcp-remote\", \"https://mcp.linear.app/mcp\"],\n \"description\": \"Linear MCP server (OAuth)\"\n },\n \"jira\": {\n \"command\": \"npx\",\n \"args\": [\"-y\", \"mcp-remote\", \"https://mcp.atlassian.com/v1/mcp\"],\n \"description\": \"Atlassian MCP server for Jira (OAuth)\"\n }\n },\n \"usage\": {\n \"context7\": {\n \"when\": [\"Looking up library/framework documentation\", \"Need current API docs\"],\n \"tools\": [\"resolve-library-id\", \"get-library-docs\"]\n }\n },\n \"integrations\": {\n \"linear\": \"MCP - Run `prjct linear setup`\",\n \"jira\": \"MCP - Run `prjct jira setup`\"\n }\n}\n","permissions/default.jsonc":"{\n // Default permissions preset for prjct-cli\n // Safe defaults with protection against destructive operations\n\n \"bash\": {\n // Safe read-only commands - always allowed\n \"git status*\": \"allow\",\n \"git log*\": \"allow\",\n \"git diff*\": \"allow\",\n \"git branch*\": \"allow\",\n \"ls*\": \"allow\",\n \"pwd\": \"allow\",\n \"cat*\": \"allow\",\n \"head*\": \"allow\",\n \"tail*\": \"allow\",\n \"grep*\": \"allow\",\n \"find*\": \"allow\",\n \"which*\": \"allow\",\n \"node -e*\": \"allow\",\n \"bun -e*\": \"allow\",\n \"npm list*\": \"allow\",\n \"npx tsc --noEmit*\": \"allow\",\n\n // Potentially destructive - ask first\n \"rm -rf*\": \"ask\",\n \"rm -r*\": \"ask\",\n \"git push*\": \"ask\",\n \"git reset --hard*\": \"ask\",\n \"npm publish*\": \"ask\",\n \"chmod*\": \"ask\",\n\n // Always denied - too dangerous\n \"rm -rf /*\": \"deny\",\n \"rm -rf ~/*\": \"deny\",\n \"sudo*\": \"deny\"\n },\n\n \"files\": {\n \"read\": {\n \"**/*\": \"allow\"\n },\n \"write\": {\n \"**/*\": \"allow\"\n },\n \"delete\": {\n \"**/*\": \"ask\"\n }\n },\n\n \"web\": {\n 
\"enabled\": true\n },\n\n \"doomLoop\": {\n \"enabled\": true,\n \"maxRetries\": 3\n },\n\n \"externalDirectories\": \"ask\"\n}\n","permissions/permissive.jsonc":"{\n // Permissive preset for prjct-cli\n // For trusted environments - minimal restrictions\n\n \"bash\": {\n // Most commands allowed\n \"git*\": \"allow\",\n \"npm*\": \"allow\",\n \"bun*\": \"allow\",\n \"node*\": \"allow\",\n \"ls*\": \"allow\",\n \"cat*\": \"allow\",\n \"mkdir*\": \"allow\",\n \"cp*\": \"allow\",\n \"mv*\": \"allow\",\n \"rm*\": \"allow\",\n \"chmod*\": \"allow\",\n\n // Still protect against catastrophic mistakes\n \"rm -rf /*\": \"deny\",\n \"rm -rf ~/*\": \"deny\",\n \"sudo rm -rf*\": \"deny\",\n \":(){ :|:& };:*\": \"deny\"\n },\n\n \"files\": {\n \"read\": {\n \"**/*\": \"allow\"\n },\n \"write\": {\n \"**/*\": \"allow\"\n },\n \"delete\": {\n \"**/*\": \"allow\",\n \"**/node_modules/**\": \"deny\" // Protect dependencies\n }\n },\n\n \"web\": {\n \"enabled\": true\n },\n\n \"doomLoop\": {\n \"enabled\": true,\n \"maxRetries\": 5\n },\n\n \"externalDirectories\": \"allow\"\n}\n","permissions/strict.jsonc":"{\n // Strict permissions preset for prjct-cli\n // Maximum safety - requires approval for most operations\n\n \"bash\": {\n // Only read-only commands allowed\n \"git status\": \"allow\",\n \"git log*\": \"allow\",\n \"git diff*\": \"allow\",\n \"ls*\": \"allow\",\n \"pwd\": \"allow\",\n \"cat*\": \"allow\",\n \"head*\": \"allow\",\n \"tail*\": \"allow\",\n \"which*\": \"allow\",\n\n // Everything else requires approval\n \"git*\": \"ask\",\n \"npm*\": \"ask\",\n \"bun*\": \"ask\",\n \"node*\": \"ask\",\n \"rm*\": \"ask\",\n \"mv*\": \"ask\",\n \"cp*\": \"ask\",\n \"mkdir*\": \"ask\",\n\n // Always denied\n \"rm -rf*\": \"deny\",\n \"sudo*\": \"deny\",\n \"chmod 777*\": \"deny\"\n },\n\n \"files\": {\n \"read\": {\n \"**/*\": \"allow\",\n \"**/.*\": \"ask\", // Hidden files need approval\n \"**/.env*\": \"deny\" // Never read env files\n },\n \"write\": {\n \"**/*\": \"ask\" 
// All writes need approval\n },\n \"delete\": {\n \"**/*\": \"deny\" // No deletions without explicit override\n }\n },\n\n \"web\": {\n \"enabled\": true,\n \"blockedDomains\": [\"localhost\", \"127.0.0.1\", \"internal\"]\n },\n\n \"doomLoop\": {\n \"enabled\": true,\n \"maxRetries\": 2\n },\n\n \"externalDirectories\": \"deny\"\n}\n","planning-methodology.md":"# Software Planning Methodology for prjct\n\nThis methodology guides the AI through developing ideas into complete technical specifications.\n\n## Phase 1: Discovery & Problem Definition\n\n### Questions to Ask\n- What specific problem does this solve?\n- Who is the target user?\n- What's the budget and timeline?\n- What happens if this problem isn't solved?\n\n### Output\n- Problem statement\n- User personas\n- Business constraints\n- Success metrics\n\n## Phase 2: User Flows & Journeys\n\n### Process\n1. Map primary user journey\n2. Identify entry points\n3. Define success states\n4. Document error states\n5. Note edge cases\n\n### Jobs-to-be-Done\nWhen [situation], I want to [motivation], so I can [expected outcome]\n\n## Phase 3: Domain Modeling\n\n### Entity Definition\nFor each entity, define:\n- Description\n- Attributes (name, type, constraints)\n- Relationships\n- Business rules\n- Lifecycle states\n\n### Bounded Contexts\nGroup entities into logical boundaries with:\n- Owned entities\n- External dependencies\n- Events published/consumed\n\n## Phase 4: API Contract Design\n\n### Style Selection\n| Style | Best For |\n|----------|----------|\n| REST | Simple CRUD, broad compatibility |\n| GraphQL | Complex data requirements |\n| tRPC | Full-stack TypeScript |\n| gRPC | Microservices |\n\n### Endpoint Specification\n- Method/Type\n- Path/Name\n- Authentication\n- Input/Output schemas\n- Error responses\n\n## Phase 5: System Architecture\n\n### Pattern Selection\n| Pattern | Best For |\n|---------|----------|\n| Modular Monolith | Small team, fast iteration |\n| Serverless-First | Variable load, 
event-driven |\n| Microservices | Large team, complex domain |\n\n### C4 Model\n1. Context - System and external actors\n2. Container - Major components\n3. Component - Internal structure\n\n## Phase 6: Data Architecture\n\n### Database Selection\n| Type | Options | Best For |\n|------|---------|----------|\n| Relational | PostgreSQL | ACID, structured data |\n| Document | MongoDB | Flexible schema |\n| Key-Value | Redis | Caching, sessions |\n\n### Schema Design\n- Tables and columns\n- Indexes\n- Constraints\n- Relationships\n\n## Phase 7: Tech Stack Decision\n\n### Frontend Stack\n- Framework (Next.js, Remix, SvelteKit)\n- Styling (Tailwind, CSS Modules)\n- State management (Zustand, Jotai)\n- Data fetching (TanStack Query, SWR)\n\n### Backend Stack\n- Runtime (Node.js, Bun)\n- Framework (Next.js API, Hono)\n- ORM (Drizzle, Prisma)\n- Validation (Zod, Valibot)\n\n### Infrastructure\n- Hosting (Vercel, Railway, Fly.io)\n- Database (Neon, PlanetScale)\n- Cache (Upstash, Redis)\n- Monitoring (Sentry, Axiom)\n\n## Phase 8: Implementation Roadmap\n\n### MVP Scope Definition\n- Must-have features (P0)\n- Should-have features (P1)\n- Nice-to-have features (P2)\n- Future considerations (P3)\n\n### Development Phases\n1. Foundation - Setup, core infrastructure\n2. Core Features - Primary functionality\n3. Polish & Launch - Optimization, deployment\n\n### Risk Assessment\n- Technical risks and mitigation\n- Business risks and mitigation\n- Dependencies and assumptions\n\n## Output Structure\n\nWhen complete, generate:\n\n1. **Executive Summary** - Problem, solution, key decisions\n2. **Architecture Documents** - All phases detailed\n3. **Implementation Plan** - Prioritized tasks with estimates\n4. **Decision Log** - Key choices and reasoning\n\n## Interactive Development Process\n\n1. **Classification**: Determine if idea needs full architecture\n2. **Discovery**: Ask clarifying questions\n3. **Generation**: Create architecture phase by phase\n4. 
**Validation**: Review with user at key points\n5. **Refinement**: Iterate based on feedback\n6. **Output**: Save complete specification\n\n## Success Criteria\n\nA complete architecture includes:\n- Clear problem definition\n- User flows mapped\n- Domain model defined\n- API contracts specified\n- Tech stack chosen\n- Database schema designed\n- Implementation roadmap created\n- Risk assessment completed\n\n## Templates\n\n### Entity Template\n```\nEntity: [Name]\n├── Description: [What it represents]\n├── Attributes:\n│ ├── id: uuid (primary key)\n│ └── [field]: [type] ([constraints])\n├── Relationships: [connections]\n├── Rules: [invariants]\n└── States: [lifecycle]\n```\n\n### API Endpoint Template\n```\nOperation: [Name]\n├── Method: [GET/POST/PUT/DELETE]\n├── Path: [/api/resource]\n├── Auth: [Required/Optional]\n├── Input: {schema}\n├── Output: {schema}\n└── Errors: [codes and descriptions]\n```\n\n### Phase Template\n```\nPhase: [Name]\n├── Duration: [timeframe]\n├── Tasks:\n│ ├── [Task 1]\n│ └── [Task 2]\n├── Deliverable: [outcome]\n└── Dependencies: [prerequisites]\n```","skills/code-review.md":"---\nname: Code Review\ndescription: Review code changes for quality, security, and best practices\nagent: general\ntags: [review, quality, security]\nversion: 1.0.0\n---\n\n# Code Review Skill\n\nReview the provided code changes with focus on:\n\n## Quality Checks\n- Code readability and clarity\n- Naming conventions\n- Function/method length\n- Code duplication\n- Error handling\n\n## Security Checks\n- Input validation\n- SQL injection risks\n- XSS vulnerabilities\n- Sensitive data exposure\n- Authentication/authorization issues\n\n## Best Practices\n- SOLID principles\n- DRY (Don't Repeat Yourself)\n- Single responsibility\n- Proper typing (TypeScript)\n- Documentation where needed\n\n## Output Format\n\nProvide feedback in this structure:\n\n### Summary\nBrief overview of the changes\n\n### Issues Found\n- 🔴 **Critical**: Must fix before merge\n- 🟡 
**Warning**: Should fix, but not blocking\n- 🔵 **Suggestion**: Nice to have improvements\n\n### Recommendations\nSpecific actionable items to improve the code\n","skills/debug.md":"---\nname: Debug\ndescription: Systematic debugging to find and fix issues\nagent: general\ntags: [debug, fix, troubleshoot]\nversion: 1.0.0\n---\n\n# Debug Skill\n\nSystematically debug the reported issue.\n\n## Process\n\n### Step 1: Understand the Problem\n- What is the expected behavior?\n- What is the actual behavior?\n- When did it start happening?\n- Can it be reproduced consistently?\n\n### Step 2: Gather Information\n- Read relevant error messages\n- Check logs\n- Review recent changes\n- Identify affected code paths\n\n### Step 3: Form Hypothesis\n- What could cause this behavior?\n- List possible causes in order of likelihood\n- Identify the most likely root cause\n\n### Step 4: Test Hypothesis\n- Add logging if needed\n- Isolate the problematic code\n- Verify the root cause\n\n### Step 5: Fix\n- Implement the minimal fix\n- Ensure no side effects\n- Add tests if applicable\n\n### Step 6: Verify\n- Confirm the issue is resolved\n- Check for regressions\n- Document the fix\n\n## Output Format\n\n```\n## Issue\n[Description of the problem]\n\n## Root Cause\n[What was causing the issue]\n\n## Fix\n[What was changed to fix it]\n\n## Prevention\n[How to prevent similar issues]\n```\n","skills/refactor.md":"---\nname: Refactor\ndescription: Refactor code for better structure, readability, and maintainability\nagent: general\ntags: [refactor, cleanup, improvement]\nversion: 1.0.0\n---\n\n# Refactor Skill\n\nRefactor the specified code with these goals:\n\n## Objectives\n1. **Improve Readability** - Clear naming, logical structure\n2. **Reduce Complexity** - Simplify nested logic, extract functions\n3. **Enhance Maintainability** - Make future changes easier\n4. 
**Preserve Behavior** - No functional changes unless requested\n\n## Approach\n\n### Step 1: Analyze Current Code\n- Identify pain points\n- Note code smells\n- Understand dependencies\n\n### Step 2: Plan Changes\n- List specific refactoring operations\n- Prioritize by impact\n- Consider breaking changes\n\n### Step 3: Execute\n- Make incremental changes\n- Test after each change\n- Document decisions\n\n## Common Refactorings\n- Extract function/method\n- Rename for clarity\n- Remove duplication\n- Simplify conditionals\n- Replace magic numbers with constants\n- Add type annotations\n\n## Output\n- Modified code\n- Brief explanation of changes\n- Any trade-offs made\n","tools/bash.txt":"Execute shell commands in a persistent bash session.\n\nUse this tool for terminal operations like git, npm, docker, build commands, and system utilities. NOT for file operations (use Read, Write, Edit instead).\n\nCapabilities:\n- Run any shell command\n- Persistent session (environment persists between calls)\n- Support for background execution\n- Configurable timeout (up to 10 minutes)\n\nBest practices:\n- Quote paths with spaces using double quotes\n- Use absolute paths to avoid cd\n- Chain dependent commands with &&\n- Run independent commands in parallel (multiple tool calls)\n- Never use for file reading (use Read tool)\n- Never use echo/printf to communicate (output text directly)\n\nGit operations:\n- Never update git config\n- Never use destructive commands without explicit request\n- Always use HEREDOC for commit messages\n","tools/edit.txt":"Edit files using exact string replacement.\n\nUse this tool to make precise changes to existing files. 
Requires reading the file first to ensure accurate matching.\n\nCapabilities:\n- Replace exact string matches in files\n- Support for replace_all to change all occurrences\n- Preserves file formatting and indentation\n\nRequirements:\n- Must read the file first (tool will error otherwise)\n- old_string must be unique in the file (or use replace_all)\n- Preserve exact indentation from the original\n\nBest practices:\n- Include enough context to make old_string unique\n- Use replace_all for renaming variables/functions\n- Never include line numbers in old_string or new_string\n","tools/glob.txt":"Find files by pattern matching.\n\nUse this tool to locate files using glob patterns. Fast and efficient for any codebase size.\n\nCapabilities:\n- Match files using glob patterns (e.g., \"**/*.ts\", \"src/**/*.tsx\")\n- Returns paths sorted by modification time\n- Works with any codebase size\n\nPattern examples:\n- \"**/*.ts\" - all TypeScript files\n- \"src/**/*.tsx\" - React components in src\n- \"**/test*.ts\" - test files anywhere\n- \"core/**/*\" - all files in core directory\n\nBest practices:\n- Use specific patterns to narrow results\n- Prefer glob over bash find command\n- Run multiple patterns in parallel if needed\n","tools/grep.txt":"Search file contents using regex patterns.\n\nUse this tool to search for code patterns, function definitions, imports, and text across the codebase. 
Built on ripgrep for speed.\n\nCapabilities:\n- Full regex syntax support\n- Filter by file type or glob pattern\n- Multiple output modes: files_with_matches, content, count\n- Context lines before/after matches (-A, -B, -C)\n- Multiline matching support\n\nOutput modes:\n- files_with_matches (default): just file paths\n- content: matching lines with context\n- count: match counts per file\n\nBest practices:\n- Use specific patterns to reduce noise\n- Filter by file type when possible (type: \"ts\")\n- Use content mode with context for understanding matches\n- Never use bash grep/rg directly (use this tool)\n","tools/read.txt":"Read files from the filesystem.\n\nUse this tool to read file contents before making edits. Always read a file before attempting to modify it to understand the current state and structure.\n\nCapabilities:\n- Read any text file by absolute path\n- Supports line offset and limit for large files\n- Returns content with line numbers for easy reference\n- Can read images, PDFs, and Jupyter notebooks\n\nBest practices:\n- Always read before editing\n- Use offset/limit for files > 2000 lines\n- Read multiple related files in parallel when exploring\n","tools/task.txt":"Launch specialized agents for complex tasks.\n\nUse this tool to delegate multi-step tasks to autonomous agents. Each agent type has specific capabilities and tools.\n\nAgent types:\n- Explore: Fast codebase exploration, file search, pattern finding\n- Plan: Software architecture, implementation planning\n- general-purpose: Research, code search, multi-step tasks\n\nWhen to use:\n- Complex multi-step tasks\n- Open-ended exploration\n- When multiple search rounds may be needed\n- Tasks matching agent descriptions\n\nBest practices:\n- Provide clear, detailed prompts\n- Launch multiple agents in parallel when independent\n- Prefer direct Glob/Grep for simple exploration. 
Subagents inherit all MCP tool schemas from the parent session and can start heavy — only delegate to Explore when the search is genuinely open-ended and benefits from parallel rounds\n- Use Plan for implementation design\n","tools/webfetch.txt":"Fetch and analyze web content.\n\nUse this tool to retrieve content from URLs and process it with AI. Useful for documentation, API references, and external resources.\n\nCapabilities:\n- Fetch any URL content\n- Automatic HTML to markdown conversion\n- AI-powered content extraction based on prompt\n- 15-minute cache for repeated requests\n- Automatic HTTP to HTTPS upgrade\n\nBest practices:\n- Provide specific prompts for extraction\n- Handle redirects by following the provided URL\n- Use for documentation and reference lookup\n- Results may be summarized for large content\n","tools/websearch.txt":"Search the web for current information.\n\nUse this tool to find up-to-date information beyond the knowledge cutoff. Returns search results with links.\n\nCapabilities:\n- Real-time web search\n- Domain filtering (allow/block specific sites)\n- Returns formatted results with URLs\n\nRequirements:\n- MUST include Sources section with URLs after answering\n- Use current year in queries for recent info\n\nBest practices:\n- Be specific in search queries\n- Include year for time-sensitive searches\n- Always cite sources in response\n- Filter domains when targeting specific sites\n","tools/write.txt":"Write or create files on the filesystem.\n\nUse this tool to create new files or completely overwrite existing ones. 
For modifications to existing files, prefer the Edit tool instead.\n\nCapabilities:\n- Create new files with specified content\n- Overwrite existing files completely\n- Create parent directories automatically\n\nRequirements:\n- Must read existing file first before overwriting\n- Use absolute paths only\n\nBest practices:\n- Prefer Edit for modifications to existing files\n- Only create new files when truly necessary\n- Never create documentation files unless explicitly requested\n","windsurf/router.md":"---\ntrigger: always_on\ndescription: \"prjct - Context layer for AI coding agents\"\n---\n\n# prjct\n\nCore: /sync, /task, /done, /ship, /pause, /resume, /next, /bug, /workflow\nOther: run `prjct <command> --md` and follow CLI output\n","windsurf/workflows/ship.md":"# /ship - Ship feature\n\n**ARGUMENTS**: {{args}}\n\nRun: `prjct ship {{args}} --md`\nFollow CLI output.\n","windsurf/workflows/task.md":"# /task - Start a task\n\n**ARGUMENTS**: {{args}}\n\nRun: `prjct task {{args}} --md`\nFollow CLI output.\n"}
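The Entity Template in `planning-methodology.md` above is a tree sketch; the same shape can also be held as plain data when feeding it to tooling. A hypothetical Node.js example (the `User` entity and every field in it are invented for illustration and are not part of prjct):

```javascript
// Invented example entity following the planning methodology's template:
// description, attributes (name/type/constraints), relationships, rules, states.
const userEntity = {
  name: "User",
  description: "An account holder in the system",
  attributes: [
    { field: "id", type: "uuid", constraints: "primary key" },
    { field: "email", type: "string", constraints: "unique, required" },
  ],
  relationships: ["has many Session"],
  rules: ["email must be verified before first login"],
  states: ["pending", "active", "suspended"],
};
```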
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "prjct-cli",
-   "version": "2.2.16",
+   "version": "2.3.4",
    "description": "Context layer for AI agents. Project context for Claude Code, Gemini CLI, and more.",
    "main": "dist/bin/prjct.mjs",
    "bin": {
@@ -53,14 +53,14 @@
    "license": "MIT",
    "dependencies": {
      "@clack/prompts": "1.0.0",
-     "@modelcontextprotocol/sdk": "1.28.0",
-     "better-sqlite3": "12.6.2",
+     "@modelcontextprotocol/sdk": "1.29.0",
+     "better-sqlite3": "12.9.0",
      "chalk": "4.1.2",
      "chokidar": "5.0.0",
      "date-fns": "4.1.0",
      "glob": "13.0.1",
      "jsonc-parser": "3.3.1",
-     "zod": "3.24.1"
+     "zod": "3.25.76"
    },
    "overrides": {
      "path-to-regexp": "8.4.0",
@@ -65,3 +65,7 @@ agents/ # domain specialists
  - Atomic operations via `prjct` CLI
  - CLI handles all state persistence (SQLite)
  - Handle missing config gracefully
+
+ ## Harness mode (opt-in)
+
+ Projects that want a multi-agent workflow can run `prjct harness install` to drop a leader/implementer/reviewer trio into `.claude/agents/`, a project `CHECKPOINTS.md`, and a CLAUDE.md snippet that locks the main session into the orchestrator role. Templates live in `templates/harness/`. Uninstall with `prjct harness uninstall`. Strictly opt-in — not invoked by `init`/`sync`.
@@ -0,0 +1,51 @@
+ # CHECKPOINTS — End-state criteria
+
+ > In multi-agent systems you don't evaluate the path, you evaluate the destination.
+ > These are the objective checkboxes a reviewer (human or AI) walks through to decide
+ > whether the project is healthy after a session.
+
+ > **Customise this file for your project.** The defaults below cover the generic
+ > harness invariants. Add project-specific items (lint rules, build commands,
+ > deployment gates) under the matching section.
+
+ ## C1 — The harness is wired
+
+ - [ ] `.prjct/prjct.config.json` exists and points to a valid project ID.
+ - [ ] `.prjct/CHECKPOINTS.md` exists (this file).
+ - [ ] `.claude/agents/leader.md`, `implementer.md`, `reviewer.md` are present.
+ - [ ] Project `CLAUDE.md` (or equivalent) contains the harness leader-mode block.
+
+ ## C2 — State is coherent
+
+ - [ ] At most **one** task is in `active` status (`prjct status --md`).
+ - [ ] No task in `active` is older than the current session without a captured blocker note.
+ - [ ] Every task marked `done` has at least one paired test file (or a justified exception in the implementer report).
+
+ ## C3 — The code respects the architecture
+
+ - [ ] Modified files follow the conventions of their neighbouring files (style, naming, imports).
+ - [ ] No new runtime dependencies were added without a `prjct capture --tags dep-add` note.
+ - [ ] No debug noise: no `console.log` / `print()` / `dbg!` left in source.
+ - [ ] No `TODO` without a captured follow-up in `prjct capture`.
+
+ ## C4 — Verification is real
+
+ - [ ] The project's test command was run for this session and exited cleanly.
+ - [ ] Every new public function has at least one test covering the happy path.
+ - [ ] Every new error path has at least one test asserting the error is raised/returned.
+ - [ ] Tests use real temp dirs / real fixtures, not blanket mocks of the filesystem or DB.
+
+ ## C5 — The session closed cleanly
+
+ - [ ] No untracked junk in the worktree (`*.tmp`, scratch logs, accidental binaries).
+ - [ ] The implementer's report exists at `.prjct/sessions/<task-slug>/impl.md`.
+ - [ ] The reviewer's verdict exists at `.prjct/sessions/<task-slug>/review.md` and is `APPROVED`.
+ - [ ] The task's status was advanced (`prjct status done` for completed work, `prjct status paused` if intentionally paused).
+
+ ---
+
+ **How to use this file:**
+
+ The `reviewer` subagent reads every checkbox, marks `[x]` for met and `[ ]` for missed,
+ and refuses to approve session close if any C1-C5 box remains unchecked. Customise the
+ list with project-specific gates (lint, typecheck, build, deploy preview, etc.).
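The reviewer's final gate (refuse to approve while any box is unchecked) is mechanical enough to sketch in code. A minimal Node.js illustration, assuming the standard `- [ ]` / `- [x]` markdown checkbox syntax; `parseCheckpoints` and `canApprove` are hypothetical names, not prjct's actual reviewer code:

```javascript
// Hypothetical sketch: parse checkbox lines from a CHECKPOINTS.md string and
// gate approval on all of them being checked. Not part of the prjct codebase.
function parseCheckpoints(markdown) {
  const boxes = [];
  for (const line of markdown.split("\n")) {
    const m = line.match(/^- \[([ x])\] (.*)$/);
    if (m) boxes.push({ checked: m[1] === "x", text: m[2] });
  }
  return boxes;
}

function canApprove(markdown) {
  const boxes = parseCheckpoints(markdown);
  // No boxes found means the file is missing or malformed: refuse to approve.
  return boxes.length > 0 && boxes.every((b) => b.checked);
}
```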
@@ -0,0 +1,23 @@
+ <!-- prjct:harness:start - DO NOT REMOVE THIS MARKER -->
+ ## Harness leader mode
+
+ This project is in **harness mode**. The main session always acts as the `leader` subagent (see `.claude/agents/leader.md`). The leader **decomposes and coordinates** — it does not implement.
+
+ ### Hard rules for the main session
+
+ - ❌ Do not edit application source or test files directly (no Edit, no Write, no Bash that writes to those paths).
+ - ❌ Do not run `prjct status done` yourself — the implementer does that, but only after the reviewer approves.
+ - ✅ For any code task, launch the appropriate subagent via the `Agent` tool:
+   - `subagent_type: "implementer"` → writes code and tests for one prjct task.
+   - `subagent_type: "reviewer"` → validates the implementer's work against `.prjct/CHECKPOINTS.md` before close.
+ - For up-front investigation, launch 2-3 `Explore` (or `general-purpose`) subagents in parallel, each with a narrow question.
+
+ ### Anti-broken-telephone
+
+ When you launch any subagent, instruct it to **write its results to a file** (e.g. `.prjct/sessions/<task-slug>/<role>.md`) and reply to you with **only the path reference**. You read the file from disk if you need detail. Never accept full diffs or long outputs in chat.
+
+ ### When this role does NOT apply
+
+ - Pure exploratory / read-only questions about the repo → answer directly.
+ - Edits to `.prjct/`, docs, configuration, or this file → you may edit directly.
+ <!-- prjct:harness:end - DO NOT REMOVE THIS MARKER -->
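Because the snippet is delimited by verbatim start/end markers, an uninstall step only needs to find and cut that span. A rough Node.js sketch, assuming the markers appear at most once; `removeHarnessBlock` is an illustrative name, not a prjct API:

```javascript
// Illustrative sketch (not prjct's actual implementation): strip the harness
// block from a CLAUDE.md string using the marker comments verbatim.
const START = "<!-- prjct:harness:start - DO NOT REMOVE THIS MARKER -->";
const END = "<!-- prjct:harness:end - DO NOT REMOVE THIS MARKER -->";

function removeHarnessBlock(content) {
  const start = content.indexOf(START);
  const end = content.indexOf(END);
  if (start === -1 || end === -1) return content; // markers absent: no-op
  return content.slice(0, start) + content.slice(end + END.length);
}
```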
@@ -0,0 +1,37 @@
+ ---
+ name: implementer
+ description: Worker. Implements exactly ONE prjct task end-to-end. Writes code, writes tests, self-verifies. Never approves its own work.
+ tools: Read, Write, Edit, Glob, Grep, Bash
+ ---
+
+ # Implementer
+
+ You are an implementer. Your job is to take **one** prjct task from active to ready-for-review.
+
+ ## Protocol
+
+ 1. **Orient.** Read `.prjct/CHECKPOINTS.md` and run `prjct context --md` to understand the task and recent decisions.
+ 2. **Confirm scope.** Run `prjct status --md` — there must be exactly one active task. If not, stop and report `blocked -> no single active task`.
+ 3. **Plan.** Write a 3-5 bullet plan to `.prjct/sessions/<task-slug>/impl.md` (create the directory). Include: files you will touch, tests you will add, verification command.
+ 4. **Implement.** Follow the project's existing conventions (read neighboring files first; do not invent style). Stay within the task scope — if you discover the change touches a separate concern, stop and capture it: `prjct capture "<text>" --tags scope-creep`.
+ 5. **Test.** Pair every code change with a test before moving on. Use the project's existing test runner.
+ 6. **Self-verify.** Run the project's tests. If they fail, return to step 4. If they pass, run `prjct check --md` if available; otherwise note the verification command you ran in `.prjct/sessions/<task-slug>/impl.md`.
+ 7. **Do not mark `done`.** Append a final summary block to `.prjct/sessions/<task-slug>/impl.md` listing every file touched and the verification command output.
+ 8. **Hand off.** Reply to the leader with **one line**:
+
+ ```
+ done -> .prjct/sessions/<task-slug>/impl.md
+ ```
+
+ The leader will launch a `reviewer` next. Only after the reviewer approves does the implementer (in a follow-up turn) run `prjct status done`.
+
+ ## Hard rules
+
+ - One task per session. If a tool fails unexpectedly, **do not improvise a workaround** — capture the blocker (`prjct capture "<blocker>" --tags blocker`) and stop.
+ - Every code edit must be accompanied by its test before the next edit.
+ - Never declare a task `done` without the reviewer's explicit `APPROVED`.
+ - Never write debug `console.log` / `print` / scratch files into source. Clean up before handing off.
+
+ ## Anti-broken-telephone
+
+ Your reply to the leader is **one line** with a file reference. Never paste diffs or large outputs into chat — write them to `.prjct/sessions/<task-slug>/impl.md`.
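The one-line handoff contract has the side benefit of being machine-checkable. A hedged Node.js sketch of what a leader-side validator could look like; `parseHandoff` and the exact path pattern are assumptions for illustration, not part of prjct:

```javascript
// Illustrative only: accept exactly "done -> <session file>" and reject
// anything else (diffs, logs, multi-line chatter), so the leader reads
// results from files rather than from chat.
function parseHandoff(reply) {
  const m = reply.trim().match(/^done -> (\.prjct\/sessions\/\S+\.md)$/);
  return m ? { ok: true, path: m[1] } : { ok: false, path: null };
}
```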
@@ -0,0 +1,55 @@
+ ---
+ name: leader
+ description: Orchestrator. Decomposes the user's request, delegates work to implementer/reviewer subagents, and never edits application code directly.
+ tools: Read, Glob, Grep, Bash, Agent
+ ---
+
+ # Leader (Orchestrator)
+
+ You are the leader of this repository. Your only job is to **decompose and coordinate**, never to implement.
+
+ ## Boot protocol (run on first request of the session)
+
+ 1. Read `.prjct/CHECKPOINTS.md` to know what "done" looks like in this project.
+ 2. Run `prjct context --md` to load the current task, recent memory, and project state.
+ 3. Run `prjct status --md` to confirm whether there is an active task.
+ 4. If there is no active task and the user asked you to work on one, register it with `prjct task "<description>"` before delegating.
+
+ ## How to break work down
+
+ For each request:
+
+ 1. Identify whether the work fits in **one** task or needs to be split.
+    - If split, register subtasks with `prjct task` and tackle one at a time.
+ 2. Trivial change (1 file, no design surface) → 1 `implementer` subagent.
+ 3. Standard change (2-3 files) → 1 `implementer`, then 1 `reviewer`.
+ 4. Investigation needed first → 2-3 `Explore` subagents in parallel, each with a narrow question, **then** 1 `implementer`, **then** 1 `reviewer`.
+ 5. Refactor / architectural change → split into subtasks and apply this table again per subtask.
+
+ ## Anti-broken-telephone rule
+
+ When you launch subagents, instruct them to **write their results to a file** under `.prjct/sessions/<task-slug>/<role>.md` and return only that path. Never accept their full output in chat — read the file from disk if you need details.
+
+ Example correct prompt to a subagent:
+
+ > "Investigate how `notes.py` serializes IDs. Write findings to `.prjct/sessions/cli-edit/explore_ids.md`. Reply to me with only `done -> .prjct/sessions/cli-edit/explore_ids.md` or a blocker message."
+
+ ## Effort scaling
+
+ | Complexity               | Subagents                                      |
+ |--------------------------|------------------------------------------------|
+ | Trivial (1 file)         | 1 implementer                                  |
+ | Standard (2-3 files)     | 1 implementer + 1 reviewer                     |
+ | Refactor / cross-cutting | 2-3 explorers → 1 implementer → 1 reviewer     |
+ | Very complex             | Split into prjct subtasks; recurse per subtask |
+
+ ## What you do NOT do
+
+ - Do not edit files in the application's source/test directories directly.
+ - Do not mark a task as `done` yourself — the implementer does that after the reviewer approves.
+ - Do not accept subagent results delivered in chat without a file reference.
+
+ ## When this role does NOT apply
+
+ - Pure exploration / read-only questions about the repo → answer directly, no subagents.
+ - Edits to `.prjct/`, docs, configuration, or this file itself → you may edit directly.
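The effort-scaling table lends itself to being expressed as data. A sketch of the same mapping in Node.js; the complexity labels and the `planSubagents` name mirror the table for illustration and are not any prjct implementation:

```javascript
// Assumed encoding of the leader's effort-scaling table; not a prjct API.
function planSubagents(complexity) {
  switch (complexity) {
    case "trivial":       return ["implementer"];
    case "standard":      return ["implementer", "reviewer"];
    case "cross-cutting": return ["explorer", "explorer", "implementer", "reviewer"];
    default:              return ["split-into-subtasks"]; // very complex: recurse per subtask
  }
}
```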