portable-agent-layer 0.1.0

Files changed (90)
  1. package/LICENSE +21 -0
  2. package/README.md +80 -0
  3. package/assets/agents/claude-researcher.md +43 -0
  4. package/assets/agents/investigative-researcher.md +44 -0
  5. package/assets/agents/multi-perspective-researcher.md +43 -0
  6. package/assets/skills/analyze-pdf.md +40 -0
  7. package/assets/skills/analyze-youtube.md +35 -0
  8. package/assets/skills/council.md +43 -0
  9. package/assets/skills/create-skill.md +31 -0
  10. package/assets/skills/extract-entities.md +63 -0
  11. package/assets/skills/extract-wisdom.md +18 -0
  12. package/assets/skills/first-principles.md +17 -0
  13. package/assets/skills/fyzz-chat-api.md +43 -0
  14. package/assets/skills/reflect.md +87 -0
  15. package/assets/skills/research.md +68 -0
  16. package/assets/skills/review.md +19 -0
  17. package/assets/skills/summarize.md +15 -0
  18. package/assets/templates/AGENTS.md.template +45 -0
  19. package/assets/templates/telos/BELIEFS.md +4 -0
  20. package/assets/templates/telos/CHALLENGES.md +4 -0
  21. package/assets/templates/telos/GOALS.md +12 -0
  22. package/assets/templates/telos/IDEAS.md +4 -0
  23. package/assets/templates/telos/IDENTITY.md +4 -0
  24. package/assets/templates/telos/LEARNED.md +4 -0
  25. package/assets/templates/telos/MISSION.md +4 -0
  26. package/assets/templates/telos/MODELS.md +4 -0
  27. package/assets/templates/telos/NARRATIVES.md +4 -0
  28. package/assets/templates/telos/PROJECTS.md +7 -0
  29. package/assets/templates/telos/STRATEGIES.md +4 -0
  30. package/bin/pal +24 -0
  31. package/bin/pal.bat +8 -0
  32. package/bin/pal.ps1 +30 -0
  33. package/package.json +82 -0
  34. package/src/cli/index.ts +344 -0
  35. package/src/cli/install.ts +86 -0
  36. package/src/cli/uninstall.ts +45 -0
  37. package/src/hooks/LoadContext.ts +41 -0
  38. package/src/hooks/SecurityValidator.ts +52 -0
  39. package/src/hooks/SkillGuard.ts +41 -0
  40. package/src/hooks/StopOrchestrator.ts +35 -0
  41. package/src/hooks/UserPromptOrchestrator.ts +35 -0
  42. package/src/hooks/handlers/backup.ts +41 -0
  43. package/src/hooks/handlers/failure.ts +136 -0
  44. package/src/hooks/handlers/rating.ts +409 -0
  45. package/src/hooks/handlers/relationship.ts +113 -0
  46. package/src/hooks/handlers/session-name.ts +121 -0
  47. package/src/hooks/handlers/synthesis.ts +109 -0
  48. package/src/hooks/handlers/tab.ts +8 -0
  49. package/src/hooks/handlers/update-counts.ts +151 -0
  50. package/src/hooks/handlers/work-learning.ts +183 -0
  51. package/src/hooks/handlers/work-session.ts +58 -0
  52. package/src/hooks/lib/claude-md.ts +121 -0
  53. package/src/hooks/lib/context.ts +433 -0
  54. package/src/hooks/lib/entities.ts +304 -0
  55. package/src/hooks/lib/export.ts +76 -0
  56. package/src/hooks/lib/inference.ts +91 -0
  57. package/src/hooks/lib/learning-category.ts +14 -0
  58. package/src/hooks/lib/log.ts +53 -0
  59. package/src/hooks/lib/models.ts +16 -0
  60. package/src/hooks/lib/paths.ts +80 -0
  61. package/src/hooks/lib/relationship.ts +135 -0
  62. package/src/hooks/lib/security.ts +122 -0
  63. package/src/hooks/lib/session-names.ts +247 -0
  64. package/src/hooks/lib/setup.ts +189 -0
  65. package/src/hooks/lib/signal-trends.ts +117 -0
  66. package/src/hooks/lib/signals.ts +37 -0
  67. package/src/hooks/lib/stdin.ts +18 -0
  68. package/src/hooks/lib/stop.ts +155 -0
  69. package/src/hooks/lib/time.ts +19 -0
  70. package/src/hooks/lib/token-usage.ts +42 -0
  71. package/src/hooks/lib/transcript.ts +76 -0
  72. package/src/hooks/lib/wisdom.ts +48 -0
  73. package/src/hooks/lib/work-tracking.ts +193 -0
  74. package/src/hooks/setup-check.ts +42 -0
  75. package/src/targets/claude/install.ts +145 -0
  76. package/src/targets/claude/uninstall.ts +101 -0
  77. package/src/targets/lib.ts +337 -0
  78. package/src/targets/opencode/install.ts +59 -0
  79. package/src/targets/opencode/plugin.ts +328 -0
  80. package/src/targets/opencode/uninstall.ts +57 -0
  81. package/src/tools/entity-save.ts +110 -0
  82. package/src/tools/export.ts +34 -0
  83. package/src/tools/fyzz-api.ts +104 -0
  84. package/src/tools/import.ts +123 -0
  85. package/src/tools/pattern-synthesis.ts +435 -0
  86. package/src/tools/pdf-download.ts +102 -0
  87. package/src/tools/relationship-reflect.ts +362 -0
  88. package/src/tools/session-summary.ts +206 -0
  89. package/src/tools/token-cost.ts +301 -0
  90. package/src/tools/youtube-analyze.ts +105 -0
package/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 Richard Kovacs
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,80 @@
+ # Portable Agent Layer (PAL)
+
+ A cross-platform, cross-agent layer for portable AI workflows, memory, and accumulated knowledge.
+
+ PAL lets you carry your agent context across **Windows**, **macOS**, and **Linux**, and work across different agent runtimes and interfaces such as **Claude** and **OpenCode**. Its core idea is simple: your knowledge and workflows should belong to **you**, not to a single machine, tool, or vendor.
+
+ > Inspired in part by [Daniel Miessler](https://danielmiessler.com)'s work on [Personal AI Infrastructure](https://github.com/danielmiessler/Personal_AI_Infrastructure). PAL is an independent open-source implementation focused on portability across platforms and agents. It is not affiliated with or endorsed by Daniel Miessler.
+
+ ---
+
+ ## Why PAL?
+
+ Most AI setups are fragmented.
+
+ Your prompts live in one place, your context in another, your notes somewhere else, and your workflows are often tied to a specific operating system or a specific agent tool.
+
+ PAL is designed to fix that.
+
+ With PAL, you can:
+
+ - keep your AI workflow portable
+ - move accumulated knowledge between machines
+ - work across multiple agent environments
+ - avoid lock-in to a single platform or interface
+ - build a durable personal layer that outlives any one tool
+
+ ---
+
+ ## Core idea
+
+ PAL stands for **Portable Agent Layer**.
+
+ It is a layer that sits between **you** and the tools you use, helping preserve and transfer:
+
+ - context
+ - memory
+ - notes
+ - workflows
+ - reusable prompts
+ - agent-specific configurations
+ - accumulated knowledge
+
+ The emphasis is on **portability**.
+
+ Your setup should be able to travel with you.
+
+ ---
+
+ ## Features
+
+ - **Cross-platform**: works on Windows, macOS, and Linux
+ - **Cross-agent**: designed to work across multiple agent ecosystems
+ - **Portable knowledge**: export and import accumulated knowledge
+ - **TypeScript-first**: built in TypeScript from day one
+ - **Open source**: hackable, inspectable, extensible
+ - **Composable**: intended to fit into real developer workflows
+
+ ---
+
+ ## Philosophy
+
+ PAL is built around a few simple beliefs:
+
+ - your AI context should be **portable**
+ - your workflows should be **tool-agnostic**
+ - your knowledge should be **exportable**
+ - your personal system should be **owned by you**
+ - agent tooling will change, but your layer should remain useful
+
+ ---
+
+ ## Who this is for
+
+ PAL is for people who want:
+
+ - a personal AI layer they control
+ - to switch between agents without losing continuity
+ - to move between machines without rebuilding everything
+ - a durable way to store and reuse context
+ - an open foundation for portable agent workflows
package/assets/agents/claude-researcher.md ADDED
@@ -0,0 +1,43 @@
+ ---
+ name: claude-researcher
+ description: Deep research with academic rigor — query decomposition, multi-source synthesis, scholarly depth. Use for research tasks requiring thorough analysis.
+ tools: WebSearch, WebFetch, Read, Grep, Glob
+ model: sonnet
+ ---
+
+ You are a research specialist focused on **depth and academic rigor**.
+
+ ## Methodology
+
+ 1. **Decompose** the query into 2-3 sub-questions that target the core of the topic
+ 2. **Search** each sub-question using WebSearch — prioritize authoritative sources (papers, docs, official sites)
+ 3. **Read** the most promising results with WebFetch to extract detail
+ 4. **Synthesize** findings into a structured analysis
+
+ ## Guidelines
+
+ - Prioritize primary sources over summaries
+ - Distinguish between established facts, expert consensus, and speculation
+ - Note methodology limitations when citing research
+ - If a claim has no strong source, say so — do not fabricate citations
+ - Keep findings concise but substantive
+
+ ## Output Format
+
+ ```markdown
+ ## Findings
+
+ [Numbered list of key discoveries, each with a brief explanation]
+
+ ## Sources
+
+ [Verified URLs with one-line descriptions — only include URLs you actually visited]
+
+ ## Confidence
+
+ [High/Medium/Low rating per finding, with brief justification]
+
+ ## Gaps
+
+ [What couldn't be answered or needs further investigation]
+ ```
package/assets/agents/investigative-researcher.md ADDED
@@ -0,0 +1,44 @@
+ ---
+ name: investigative-researcher
+ description: Investigative research with verification rigor — triple-checks sources, cross-references claims, assesses credibility. Use for research requiring high factual confidence.
+ tools: WebSearch, WebFetch, Read, Grep, Glob
+ model: sonnet
+ ---
+
+ You are a research specialist focused on **verification and investigative rigor**.
+
+ ## Methodology
+
+ 1. **Search** the topic broadly using WebSearch to map the information landscape
+ 2. **Verify** key claims by finding 2+ independent sources for each
+ 3. **Assess** source credibility — check publication date, author expertise, potential bias
+ 4. **Cross-reference** findings to identify contradictions or unsupported claims
+ 5. **Report** with clear evidence chains
+
+ ## Guidelines
+
+ - Every factual claim should have at least 2 independent sources
+ - Note when a claim is single-sourced or comes from a potentially biased source
+ - Check publication dates — flag stale information
+ - Distinguish between verified facts, likely true (single credible source), and unverified claims
+ - If a claim has no strong source, say so — do not fabricate citations
+
+ ## Output Format
+
+ ```markdown
+ ## Findings
+
+ [Numbered list of verified findings, each tagged: ✓ verified (2+ sources) | ~ likely (1 credible source) | ? unverified]
+
+ ## Sources
+
+ [Verified URLs with one-line descriptions — only include URLs you actually visited]
+
+ ## Confidence
+
+ [High/Medium/Low rating per finding, with evidence chain summary]
+
+ ## Flags
+
+ [Contradictions found, stale information, potential bias in sources]
+ ```
package/assets/agents/multi-perspective-researcher.md ADDED
@@ -0,0 +1,43 @@
+ ---
+ name: multi-perspective-researcher
+ description: Breadth-focused research — generates multiple query variations, explores different angles, synthesizes diverse viewpoints. Use for research needing perspective diversity.
+ tools: WebSearch, WebFetch, Read, Grep, Glob
+ model: sonnet
+ ---
+
+ You are a research specialist focused on **breadth and perspective diversity**.
+
+ ## Methodology
+
+ 1. **Reframe** the query from 3-5 different angles (stakeholders, disciplines, timeframes)
+ 2. **Search** each angle separately using WebSearch — cast a wide net across domains
+ 3. **Identify** where perspectives agree, conflict, or reveal blind spots
+ 4. **Synthesize** a balanced analysis that captures the full picture
+
+ ## Guidelines
+
+ - Actively seek opposing viewpoints — do not default to the mainstream take
+ - Consider different stakeholder perspectives (users, builders, regulators, critics)
+ - Look for cross-domain connections that a single-angle search would miss
+ - Flag genuine disagreements rather than forcing consensus
+ - If a claim has no strong source, say so — do not fabricate citations
+
+ ## Output Format
+
+ ```markdown
+ ## Findings
+
+ [Numbered list organized by perspective/angle, noting where views converge or diverge]
+
+ ## Sources
+
+ [Verified URLs with one-line descriptions — only include URLs you actually visited]
+
+ ## Confidence
+
+ [High/Medium/Low rating per finding, with brief justification]
+
+ ## Gaps
+
+ [Perspectives not yet explored or questions that remain open]
+ ```
package/assets/skills/analyze-pdf.md ADDED
@@ -0,0 +1,40 @@
+ ---
+ name: analyze-pdf
+ description: Download and analyze PDF files from URLs or local paths — extract text, answer questions, summarize content
+ ---
+
+ When the user asks to analyze, read, or extract information from a PDF:
+
+ ## How to get the PDF
+
+ - **URL**: Use the `ai:pdf-download` CLI tool to download and archive the PDF:
+   ```bash
+   bun run ai:pdf-download -- <url> [--filename <name.pdf>]
+   ```
+   The tool downloads the file, saves it to `memory/downloads/{YYYY}/{MM}/{DD}/(unknown).pdf`, and returns JSON with the saved `path`.
+
+ - **Local path**: Use the file directly.
+
+ ## How to read it
+
+ Use your native PDF reading capability (e.g. a Read tool or equivalent). Most modern multimodal models can parse PDF content directly — no external dependencies, libraries, or conversion tools are needed.
+
+ Do NOT install PDF processing tools (poppler, pdftotext, etc.) unless the user explicitly asks. Native reading is sufficient.
+
+ ## What to do with it
+
+ Follow the user's request. Common tasks:
+
+ - **Answer a specific question** about the PDF content
+ - **Summarize** the document (defer to /summarize if installed)
+ - **Extract structured data** (tables, references, entities)
+ - **Extract wisdom** (defer to /extract-wisdom if installed)
+ - **Compare** with other documents or information
+
+ ## Guidelines
+
+ - Always run the tool from the PAL directory (the `ai:pdf-download` script is registered there)
+ - For large PDFs, read specific page ranges rather than the entire document
+ - Preserve the original structure (headings, lists, tables) when relevant
+ - Quote verbatim when the user asks about specific content
+ - If the PDF contains images or diagrams, describe them
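The dated archive layout described in this skill can be sketched in TypeScript. This is an illustrative sketch only: `downloadPath` and its parameters are hypothetical names, not the actual `pdf-download` implementation.

```typescript
// Sketch (assumed, not the real tool code): build the dated archive path
// memory/downloads/{YYYY}/{MM}/{DD}/<filename> used by ai:pdf-download.
function downloadPath(filename: string, now: Date = new Date()): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  const yyyy = now.getFullYear();
  const mm = pad(now.getMonth() + 1); // Date months are 0-based
  const dd = pad(now.getDate());
  return `memory/downloads/${yyyy}/${mm}/${dd}/${filename}`;
}

console.log(downloadPath("paper.pdf", new Date(2026, 0, 5)));
// memory/downloads/2026/01/05/paper.pdf
```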
package/assets/skills/analyze-youtube.md ADDED
@@ -0,0 +1,35 @@
+ ---
+ name: analyze-youtube
+ description: Analyze YouTube videos using Gemini's native video understanding — summarize, extract insights, answer questions
+ ---
+
+ When the user asks to analyze, summarize, or extract information from a YouTube video:
+
+ ## How to analyze
+
+ Use the `ai:youtube-analyze` CLI tool. It sends the video to Gemini, which processes both visual and audio content natively.
+
+ ```bash
+ bun run ai:youtube-analyze -- <youtube-url> [--prompt "your question"]
+ ```
+
+ - Without `--prompt`, it returns a structured summary with key insights, topics, people, and quotes.
+ - With `--prompt`, it answers the specific question about the video.
+
+ ## What to do with the result
+
+ Follow the user's request. Common tasks:
+
+ - **Summarize** the video content
+ - **Answer a specific question** about what's shown or discussed
+ - **Extract entities** (defer to /extract-entities if installed)
+ - **Extract wisdom** (defer to /extract-wisdom if installed)
+ - **Compare** with other content
+
+ ## Guidelines
+
+ - Always run the tool from the PAL directory (the `ai:youtube-analyze` script is registered there)
+ - For long videos, consider asking a focused question via `--prompt` rather than a full analysis
+ - Gemini sees both visuals and audio — mention on-screen content (slides, code, diagrams) when relevant
+ - Quote speakers verbatim when the user asks about specific statements
+ - If the tool reports a missing API key, tell the user to get one at https://aistudio.google.com/apikey and set `GEMINI_API_KEY` in their shell profile or PAL settings
package/assets/skills/council.md ADDED
@@ -0,0 +1,43 @@
+ ---
+ name: council
+ description: Multi-perspective parallel debate on a decision — 3-5 independent perspectives argue in parallel, then synthesize into a verdict
+ ---
+
+ When the user invokes /council <question or decision>:
+
+ ## Step 1: Define Perspectives
+
+ Choose 3-5 perspectives relevant to the topic. Each should represent a genuinely different viewpoint — not slight variations of the same position. Examples:
+ - Technical feasibility vs business value vs user experience
+ - Short-term pragmatism vs long-term architecture vs risk aversion
+ - Builder vs user vs critic
+
+ ## Step 2: Round 1 — Opening Positions (Parallel)
+
+ Spawn one subagent per perspective **in parallel (in a single message)**.
+
+ Each subagent should be a general-purpose agent with this prompt:
+ > You are arguing from the perspective of [PERSPECTIVE]. The question is: [QUESTION]. State your position and primary argument in 100-200 words. Be specific and opinionated — do not hedge.
+
+ ## Step 3: Round 2 — Challenges (Parallel)
+
+ After collecting Round 1 results, spawn another batch of subagents **in parallel (in a single message)**. Each perspective challenges the weakest point of another's argument.
+
+ Each subagent prompt:
+ > You are [PERSPECTIVE A]. Here are the opening positions: [all Round 1 results]. Challenge the weakest point of [PERSPECTIVE B]'s argument in 100 words. Be direct.
+
+ ## Step 4: Synthesis (You, Not Agents)
+
+ As the orchestrating agent, synthesize the debate:
+
+ - **Points of agreement** across perspectives
+ - **Remaining disagreements** and why they persist
+ - **Verdict:** recommended action with confidence level (high/medium/low)
+ - **Key risk** if the recommendation is wrong
+ - **Decision reversal trigger** — what new information would change the recommendation
+
+ ## Important
+
+ - All subagent spawns per round MUST be in a **single message** for parallel execution
+ - Perspectives should be genuinely diverse, not strawmen
+ - The synthesis is YOUR job — do not ask a subagent to synthesize
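The two parallel rounds can be sketched in TypeScript. `runSubagent` here is a hypothetical stub standing in for whatever spawn mechanism the agent runtime provides; the structure (one `Promise.all` batch per round, synthesis left to the caller) is the point.

```typescript
// Sketch of the council orchestration, assuming a runSubagent(prompt) helper
// (hypothetical -- not part of this package's API).
type Position = { perspective: string; argument: string };

async function runSubagent(prompt: string): Promise<string> {
  return `[response to: ${prompt.slice(0, 40)}]`; // stub for illustration
}

async function council(question: string, perspectives: string[]): Promise<Position[]> {
  // Round 1: every opening position spawned in one batch (parallel)
  const round1: Position[] = await Promise.all(
    perspectives.map(async (p) => ({
      perspective: p,
      argument: await runSubagent(`You are arguing from ${p}. Question: ${question}`),
    })),
  );
  // Round 2: each perspective challenges another's argument, again in one batch
  await Promise.all(
    round1.map((pos, i) =>
      runSubagent(
        `You are ${pos.perspective}. Challenge ${round1[(i + 1) % round1.length].perspective}.`,
      ),
    ),
  );
  return round1; // synthesis stays with the orchestrator, not a subagent
}
```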
package/assets/skills/create-skill.md ADDED
@@ -0,0 +1,31 @@
+ ---
+ name: create-skill
+ description: Scaffold a new PAL skill from a description
+ ---
+
+ When the user invokes /create-skill <name> <description>:
+
+ 1. Create a new markdown file in the skills/ directory
+ 2. Use this template:
+
+ ```markdown
+ ---
+ name: <name>
+ description: <one-line description>
+ ---
+
+ When the user invokes /<name> <args>:
+
+ 1. <step>
+ 2. <step>
+ 3. <step>
+
+ Output format:
+ - <format specification>
+ ```
+
+ 3. Validate the skill has:
+    - Clear trigger (when to invoke)
+    - Specific steps (not vague)
+    - Defined output format
+    - Reasonable scope (one skill, one job)
package/assets/skills/extract-entities.md ADDED
@@ -0,0 +1,63 @@
+ ---
+ name: extract-entities
+ description: Extract people and companies from content (articles, videos, URLs, pasted text)
+ ---
+
+ When the user invokes /extract-entities <content, URL, or pasted text>:
+
+ 1. Read/fetch the content
+ 2. Extract ALL people and companies mentioned
+
+ ## People
+
+ For each person, extract:
+ - **name**: Full name
+ - **role**: author | subject | mentioned | quoted | expert | interviewer | interviewee
+ - **title**: Job title (null if unknown)
+ - **company**: Company affiliation (null if unknown)
+ - **social**: twitter (@handle), linkedin (URL), email, website — null if unknown
+ - **context**: Why this person is mentioned and their relevance
+ - **importance**: primary (central to content) | secondary (supporting) | minor (brief mention)
+
+ ## Companies
+
+ For each company/organization, extract:
+ - **name**: Official name
+ - **domain**: Primary website domain (e.g. "anthropic.com", null if unknown)
+ - **industry**: Classification (AI, security, fintech, healthcare, etc.)
+ - **context**: How and why mentioned
+ - **mentioned_as**: subject | source | example | competitor | partner | acquisition | product | other
+ - **sentiment**: positive | neutral | negative | mixed
+
+ ## Output
+
+ Return structured JSON:
+
+ ```json
+ {
+   "people": [...],
+   "companies": [...]
+ }
+ ```
+
+ ## Guidelines
+
+ - Accuracy over quantity — use null for unknown fields, never guess
+ - Include authors, subjects, quoted individuals, and anyone significantly mentioned
+ - For research papers: all authors get "author" role
+ - For interviews: distinguish interviewer vs interviewee
+ - Universities and research institutions count as companies
+ - Extract social handles from bios, signatures, or text body
+ - Context fields should explain relevance, not just repeat the mention
+
+ ## Persistence
+
+ Always run the tool from the PAL directory (the `ai:entity-save` script is registered there).
+
+ After displaying results, ask the user if they want to save. When saving, pipe the JSON output through the entity-save tool, which handles deduplication automatically:
+
+ ```bash
+ echo '<the JSON output>' | bun run ai:entity-save -- --source "<URL or content origin>"
+ ```
+
+ The tool deduplicates against the entity index (`memory/entities/entity-index.json`), assigns stable UUIDs, tracks occurrences, and reports what was new vs existing.
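The dedup-on-save behavior this skill relies on can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual `entity-save` code: the index shape, the lower-cased-name key, and the field names are all hypothetical.

```typescript
// Sketch (assumed schema, not entity-index.json's real one): dedup entities
// against an index, assigning a stable UUID on first sight and bumping an
// occurrence count on repeats.
import { randomUUID } from "node:crypto";

type IndexEntry = { id: string; name: string; occurrences: number };

function saveEntities(index: Map<string, IndexEntry>, names: string[]) {
  const added: string[] = [];
  const existing: string[] = [];
  for (const name of names) {
    const key = name.toLowerCase(); // illustrative dedup key
    const hit = index.get(key);
    if (hit) {
      hit.occurrences += 1; // known entity: track another occurrence
      existing.push(name);
    } else {
      index.set(key, { id: randomUUID(), name, occurrences: 1 });
      added.push(name);
    }
  }
  return { added, existing }; // "new vs existing" report
}
```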
package/assets/skills/extract-wisdom.md ADDED
@@ -0,0 +1,18 @@
+ ---
+ name: extract-wisdom
+ description: Extract structured insights from content (articles, videos, podcasts)
+ ---
+
+ When the user invokes /extract-wisdom <content or URL>:
+
+ 1. Read/fetch the content
+ 2. Extract:
+    - **Summary** (3-5 sentences)
+    - **Ideas** (novel or surprising ideas, one line each)
+    - **Insights** (deeper implications or connections)
+    - **Quotes** (most impactful, verbatim)
+    - **Habits** (actionable behaviors mentioned)
+    - **Facts** (verifiable claims with numbers)
+    - **References** (books, papers, people mentioned)
+    - **Recommendations** (what the author suggests doing)
+ 3. Rate overall quality: must-read / worth-reading / skim / skip
package/assets/skills/first-principles.md ADDED
@@ -0,0 +1,17 @@
+ ---
+ name: first-principles
+ description: Break down a problem to its fundamental constraints and build up a solution
+ ---
+
+ When the user invokes /first-principles <problem>:
+
+ 1. **State the problem** clearly in one sentence
+ 2. **Identify assumptions** — what are we taking for granted?
+ 3. **Classify constraints**:
+    - Hard (physics, math, API limits, laws)
+    - Soft (conventions, habits, "how it's always been done")
+    - Assumptions (things believed true but unverified)
+ 4. **Remove soft constraints** — what's possible without them?
+ 5. **Build up** — from hard constraints only, what's the simplest solution?
+ 6. **Compare** — how does the first-principles solution differ from the conventional one?
+ 7. **Recommend** — which approach to take and why
package/assets/skills/fyzz-chat-api.md ADDED
@@ -0,0 +1,43 @@
+ ---
+ name: fyzz-chat-api
+ description: Query Fyzz Chat conversations and projects via the REST API (API key is handled securely by the CLI tool)
+ ---
+
+ When you need to access the user's Fyzz Chat conversations or projects, use the `ai:fyzz-api` CLI tool. The tool reads the API key from the `FYZZ_API_KEY` environment variable automatically — never attempt to read, print, or reference the API key or the env var directly.
+
+ ## Available commands
+
+ ### List conversations
+
+ ```bash
+ bun run ai:fyzz-api -- conversations [--limit 20] [--search "query"] [--project-id <id>] [--cursor <cursor>]
+ ```
+
+ ### Get a single conversation with messages
+
+ ```bash
+ bun run ai:fyzz-api -- conversations <conversation-id>
+ ```
+
+ ### List projects
+
+ ```bash
+ bun run ai:fyzz-api -- projects
+ ```
+
+ ## Setup
+
+ If the tool reports a missing API key:
+
+ 1. Ask the user to create one in Fyzz Chat → Settings → API Keys
+ 2. They should set `FYZZ_API_KEY` in their shell profile or in PAL's `settings.json` env section
+ 3. Optionally set `FYZZ_BASE_URL` (defaults to `http://localhost:3000`)
+
+ ## Guidelines
+
+ - Always run the tool from the PAL directory (the `ai:fyzz-api` script is registered there)
+ - The API key is never visible in this conversation — that is by design
+ - Use `--search` for keyword-based lookup across titles and message content
+ - Use `--project-id` to scope results to a specific project
+ - Paginate with `--cursor` using the cursor from the previous response's `meta.cursor` field
+ - For large conversations, consider whether you need all messages or just the metadata (list mode returns titles only)
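The cursor pagination described in the guidelines can be sketched in TypeScript. The `{ data, meta: { cursor } }` response shape is an assumption inferred from the `meta.cursor` guideline, and `fetchPage` is a stub standing in for the real CLI/API call.

```typescript
// Sketch of cursor pagination (assumed response shape, stubbed transport).
type Page = { data: string[]; meta: { cursor: string | null } };

async function fetchPage(cursor: string | null): Promise<Page> {
  // Stub: two fixed pages simulate the API; the real call would pass --cursor.
  const pages: Record<string, Page> = {
    start: { data: ["conv-1", "conv-2"], meta: { cursor: "p2" } },
    p2: { data: ["conv-3"], meta: { cursor: null } },
  };
  return pages[cursor ?? "start"];
}

async function listAll(): Promise<string[]> {
  const all: string[] = [];
  let cursor: string | null = null;
  do {
    const page = await fetchPage(cursor);
    all.push(...page.data);
    cursor = page.meta.cursor; // feed the previous response's meta.cursor back in
  } while (cursor !== null);
  return all;
}
```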
package/assets/skills/reflect.md ADDED
@@ -0,0 +1,87 @@
+ ---
+ name: reflect
+ description: Diagnose why a PAL behavior did not trigger as expected — trace hooks, instructions, and logic to find the gap
+ ---
+
+ When the user invokes `/reflect [optional: description of what didn't happen]`:
+
+ You are debugging PAL itself. The user noticed that something PAL should have done — automatically or via instructions — did not happen. Your job is to trace the full execution path and find where it broke.
+
+ ## 1. Identify the expected behavior
+
+ If the user described it, restate it clearly. If not, ask:
+ - **What did you expect PAL to do?** (e.g., update projects.json, write a relationship note, extract wisdom, trigger a hook)
+ - **When should it have happened?** (during the session, at stop, on prompt submit, on session start)
+
+ ## 2. Classify the behavior type
+
+ Determine which PAL subsystem owns this behavior:
+
+ | Type | Mechanism | Key files |
+ |------|-----------|-----------|
+ | **Hook-automated** | Runs automatically via StopOrchestrator or UserPromptOrchestrator | `hooks/StopOrchestrator.ts`, `hooks/UserPromptOrchestrator.ts`, `hooks/lib/stop.ts` |
+ | **Instruction-driven** | AI is told to do it via CLAUDE.md / AGENTS.md instructions | `~/.claude/CLAUDE.md`, project CLAUDE.md files |
+ | **Context-dependent** | Requires specific context to be loaded at session start | `hooks/LoadContext.ts`, `hooks/lib/context.ts` |
+ | **Skill-triggered** | Should have been invoked via a skill | `~/.agents/skills/*/SKILL.md` |
+
+ ## 3. Trace the execution path
+
+ Based on the type, investigate the relevant chain:
+
+ ### For hook-automated behaviors:
+ 1. Read the relevant handler source code in `hooks/handlers/`
+ 2. Check the orchestrator that calls it (`StopOrchestrator.ts` or `UserPromptOrchestrator.ts`)
+ 3. Check `~/.claude/settings.json` to confirm the hook is registered
+ 4. Check PAL logs at `portable-agent-layer/memory/state/debug.log` for errors
+ 5. Check if the handler has conditions that weren't met (e.g., message count < 2, missing session_id)
+
+ ### For instruction-driven behaviors:
+ 1. Read the relevant CLAUDE.md instructions that should have triggered the behavior
+ 2. Assess whether the instructions are clear and unambiguous enough for the AI to act on
+ 3. Check if the instructions have conditions/triggers — were those conditions met in this session?
+ 4. Consider whether the conversation context made it unlikely the AI would prioritize this action (e.g., too focused on the user's task, no natural pause point)
+
+ ### For context-dependent behaviors:
+ 1. Check `hooks/LoadContext.ts` and `hooks/lib/context.ts`
+ 2. Verify the expected context was actually injected (check system-reminder content in conversation)
+ 3. Check if context was truncated or missing
+
+ ### For skill-triggered behaviors:
+ 1. Verify the skill exists in `~/.agents/skills/`
+ 2. Check if SkillGuard blocked it
+ 3. Check if the skill's trigger conditions match what happened
+
+ ## 4. Identify the gap
+
+ State clearly:
+ - **What should have happened** (the expected behavior)
+ - **Where it broke** (the specific point in the chain where execution stopped or diverged)
+ - **Why it broke** (root cause — missing condition, unclear instruction, silent error, edge case, etc.)
+
+ ## 5. Propose a fix
+
+ Suggest one or more fixes, classified by approach:
+ - **Code fix** — change in hooks, handlers, or lib code
+ - **Instruction fix** — clearer or restructured CLAUDE.md instructions
+ - **Architecture fix** — behavior needs to move from one subsystem to another (e.g., from instruction-driven to hook-automated)
+ - **No fix needed** — the behavior is working as designed; the expectation was wrong
+
+ For each fix, note the specific file(s) to change and what the change would be.
+
+ ## Output format
+
+ ```
+ ## Reflect: [short description of the issue]
+
+ **Expected:** [what should have happened]
+ **Classified as:** [hook-automated | instruction-driven | context-dependent | skill-triggered]
+
+ ### Trace
+ [Step-by-step trace of the execution path, with file references]
+
+ ### Gap found
+ [Where and why it broke]
+
+ ### Recommended fix
+ [Concrete proposal with file paths and change description]
+ ```