@justestif/pk 0.1.13 → 0.2.0

package/README.md CHANGED
@@ -30,38 +30,39 @@ Non-interactive:
 
  ```bash
  pk init my-project --harness claude
- pk init my-project --harness claude,omp,cursor # multiple harnesses
+ pk init my-project --harness claude,omp # multiple harnesses
  ```
 
- Available harnesses: `claude`, `claude-desktop`, `omp`, `cursor`, `opencode`, `codex`.
+ Available harnesses: `claude` (Claude Code), `omp` (Oh My Pi), `cursor` (Cursor), `gemini` (Gemini CLI), `codex` (Codex), `opencode` (OpenCode).
 
  `pk init` does three things:
 
  1. Creates `~/.pk/<name>/` as the knowledge home for this project
  2. Writes an MCP server config so your harness discovers `pk mcp`
- 3. Copies the `pk` skill to the harness skill directory (where supported)
+ 3. Installs a harness adapter that calls `pk prime` at session start to inject the skill into context
 
- | Harness | Config file written |
+ | Harness | Files written |
  |---|---|
- | `claude` | `.mcp.json` (project root) |
- | `claude-desktop` | `~/Library/Application Support/Claude/claude_desktop_config.json` |
- | `cursor` | `.cursor/mcp.json` |
- | `omp` | `.omp/mcp.json` |
- | `opencode` | `opencode.json` |
- | `codex` | `.codex/config.toml` |
+ | `claude` | `.mcp.json`, `CLAUDE.md`, `.claude/hooks/pk-eval.ts`, `.claude/settings.json` |
+ | `omp` | `.omp/mcp.json`, `AGENTS.md`, `.omp/extensions/pk-eval.ts` |
+ | `cursor` | `.cursor/mcp.json`, `.cursor/rules/pk.mdc`, `.cursor/hooks/pk-eval.sh`, `.cursor/hooks.json` |
+ | `gemini` | `.gemini/settings.json`, `GEMINI.md`, `.gemini/hooks/pk-eval.sh` |
+ | `codex` | `.codex/config.toml`, `AGENTS.md`, `.codex/hooks/pk-eval.sh`, `.codex/hooks.json` |
+ | `opencode` | `opencode.json`, `AGENTS.md`, `CLAUDE.md`, `.opencode/plugins/pk-eval.ts` |
 
  ## How it works
 
- `pk mcp` runs a stdio MCP server that exposes four tools to any connected agent:
+ `pk mcp` runs a stdio MCP server that exposes five tools to any connected agent:
 
  | Tool | What it does |
  |---|---|
  | `pk_search` | Full-text search over the knowledge base (BM25, porter stemming) |
- | `pk_synthesize` | Summarise notes matching a query or the whole base |
- | `pk_new` | Create a new note (type, title, tags, body) |
+ | `pk_synthesize` | Ranked context dump by query, session start, or all notes |
+ | `pk_read` | Read the full content of a note by path |
+ | `pk_new` | Create a new typed note skeleton, returns path to fill in |
  | `pk_lint` | Validate all notes for schema and quality rules |
 
- The agent calls these tools directly — no hooks, no shell extensions, no prompt injection.
+ The agent calls these tools directly — no hooks, no shell extensions, no prompt injection. Agents should never read or write knowledge files directly.
 
  ## Commands
 
@@ -102,8 +103,8 @@ pk synthesize
 
  ## Knowledge structure
 
- Notes live in `~/.pk/<name>/` as plain markdown files — human-editable,
- git-diffable, readable without any tool.
+ Notes live in `~/.pk/<name>/` as plain markdown files — human-editable and git-diffable.
+ Agents access them exclusively through the MCP tools; humans can read and edit them directly.
 
  ```
  ~/.pk/
package/dist/index.js CHANGED
@@ -17713,7 +17713,7 @@ var {
  var package_default = {
  name: "@justestif/pk",
  type: "module",
- version: "0.1.13",
+ version: "0.2.0",
  description: "Project knowledge \u2014 structured intake, search, and recall",
  bin: {
  pk: "dist/index.js"
@@ -32791,60 +32791,47 @@ function registerLint(program2) {
  }
 
  // src/lib/config.ts
- import {
- existsSync as existsSync3,
- mkdirSync as mkdirSync3,
- readFileSync,
- writeFileSync
- } from "fs";
+ import { mkdirSync as mkdirSync3 } from "fs";
  import path6 from "path";
  import os2 from "os";
  var DEFAULT = { auto_commit: true, embedding: "" };
  function configPath() {
  return path6.join(os2.homedir(), ".pk", "config.json");
  }
- function loadConfig() {
+ async function loadConfig() {
  const p = configPath();
- if (!existsSync3(p)) {
- return { ...DEFAULT };
- }
  try {
- const merged = { ...DEFAULT, ...JSON.parse(readFileSync(p, "utf8")) };
+ const text = await Bun.file(p).text();
+ const merged = { ...DEFAULT, ...JSON.parse(text) };
  return merged;
  } catch {
  return { ...DEFAULT };
  }
  }
- function saveConfig(config2) {
+ async function saveConfig(config2) {
  const p = configPath();
  mkdirSync3(path6.dirname(p), { recursive: true });
- writeFileSync(p, JSON.stringify(config2, null, 2) + `
+ await Bun.write(p, JSON.stringify(config2, null, 2) + `
  `);
  }
 
  // src/commands/config-cmd.ts
  function registerConfig(program2) {
- program2.command("config").description("Show or update pk configuration (~/.pk/config.json)").option("--auto-commit <bool>", "Auto-commit knowledge operations (true/false)").option("--embedding <model>", "Embedding model (empty to disable)").action((opts) => {
- const config2 = loadConfig();
+ program2.command("config").description("Show or update pk configuration (~/.pk/config.json)").option("--auto-commit <bool>", "Auto-commit knowledge operations (true/false)").option("--embedding <model>", "Embedding model (empty to disable)").action(async (opts) => {
+ const config2 = await loadConfig();
  if (opts.autoCommit !== undefined) {
  config2.auto_commit = opts.autoCommit === "true";
  }
  if (opts.embedding !== undefined) {
  config2.embedding = opts.embedding;
  }
- saveConfig(config2);
+ await saveConfig(config2);
  console.log(JSON.stringify(config2, null, 2));
  });
  }
 
  // src/commands/init.ts
- import {
- cpSync,
- existsSync as existsSync4,
- mkdirSync as mkdirSync4,
- readFileSync as readFileSync2,
- writeFileSync as writeFileSync2
- } from "fs";
+ import { cpSync, existsSync as existsSync3, mkdirSync as mkdirSync4 } from "fs";
  import os3 from "os";
  import path7 from "path";
 
@@ -34125,28 +34112,28 @@ ${c2}
 
  // src/commands/init.ts
  var HARNESSES = [
- { hint: ".mcp.json in project root", label: "Claude Code", value: "claude" },
- { hint: "~/Library/\u2026/claude_desktop_config.json", label: "Claude Desktop", value: "claude-desktop" },
- { hint: ".cursor/mcp.json in project root", label: "Cursor", value: "cursor" },
- { hint: ".omp/mcp.json in project root", label: "Oh My Pi", value: "omp" },
- { hint: "opencode.json in project root", label: "OpenCode", value: "opencode" },
- { hint: ".codex/config.toml in project root", label: "Codex CLI", value: "codex" }
+ { hint: ".mcp.json + CLAUDE.md + forced-eval hook", label: "Claude Code", value: "claude" },
+ { hint: ".omp/mcp.json + AGENTS.md + forced-eval hook", label: "Oh My Pi", value: "omp" },
+ { hint: ".cursor/mcp.json + .cursor/rules/pk.mdc + hook", label: "Cursor", value: "cursor" },
+ { hint: ".gemini/settings.json + GEMINI.md + hook", label: "Gemini CLI", value: "gemini" },
+ { hint: ".codex/config.toml + AGENTS.md + hook", label: "Codex", value: "codex" },
+ { hint: "opencode.json + plugin (reads AGENTS.md/CLAUDE.md)", label: "OpenCode", value: "opencode" }
  ];
  var HARNESS_VALUES = new Set(HARNESSES.map((h2) => h2.value));
  var HARNESS_ACTIVATION = {
  claude: "start a new Claude Code session in this project",
- "claude-desktop": "quit and restart Claude Desktop",
- codex: "restart the Codex CLI",
- cursor: "reload the Cursor window (Cmd+Shift+P \u2192 Reload Window)",
  omp: "restart your Oh My Pi session",
- opencode: "restart opencode"
+ cursor: "restart Cursor for changes to take effect",
+ gemini: "restart your Gemini CLI session",
+ codex: "restart Codex for MCP to connect",
+ opencode: "reload OpenCode or restart the app"
  };
  function buildOutro(created, knowledgeDir, harnesses, skillPaths) {
  const lines = [
  created ? `Created project: ${knowledgeDir}` : `Connected to existing project: ${knowledgeDir}`
  ];
  for (const h2 of harnesses) {
- lines.push(` ${h2}: MCP config written \u2192 ${HARNESS_ACTIVATION[h2]}`);
+ lines.push(` ${h2}: configured \u2192 ${HARNESS_ACTIVATION[h2]}`);
  }
  for (const sp of skillPaths) {
  lines.push(` skill installed to ${sp}`);
@@ -34172,102 +34159,289 @@ function pkMcpEntry(knowledgeDir) {
  env: { PK_KNOWLEDGE_DIR: knowledgeDir }
  };
  }
- function readJson(filePath) {
- if (!existsSync4(filePath)) {
- return {};
- }
+ async function readJson(filePath) {
  try {
- return JSON.parse(readFileSync2(filePath, "utf8"));
+ return JSON.parse(await Bun.file(filePath).text());
  } catch {
  return {};
  }
  }
- function writeJson(filePath, data) {
+ async function writeJson(filePath, data) {
  mkdirSync4(path7.dirname(filePath), { recursive: true });
- writeFileSync2(filePath, JSON.stringify(data, null, 2) + `
+ await Bun.write(filePath, JSON.stringify(data, null, 2) + `
  `);
  }
- function writeClaudeConfig(projectRoot, _name, knowledgeDir) {
+ async function writeClaudeConfig(projectRoot, _name, knowledgeDir) {
  const cfgPath = path7.join(projectRoot, ".mcp.json");
- const cfg = readJson(cfgPath);
+ const cfg = await readJson(cfgPath);
  const servers = cfg.mcpServers ?? {};
  servers.pk = pkMcpEntry(knowledgeDir);
  cfg.mcpServers = servers;
- writeJson(cfgPath, cfg);
+ await writeJson(cfgPath, cfg);
  }
- function writeClaudeDesktopConfig(homeDir, name, knowledgeDir) {
- const cfgPath = path7.join(homeDir, "Library", "Application Support", "Claude", "claude_desktop_config.json");
- const cfg = readJson(cfgPath);
+ async function writeOmpConfig(projectRoot, _name, knowledgeDir) {
+ const cfgPath = path7.join(projectRoot, ".omp", "mcp.json");
+ const cfg = await readJson(cfgPath);
  const servers = cfg.mcpServers ?? {};
- servers[`pk-${name}`] = pkMcpEntry(knowledgeDir);
+ servers.pk = pkMcpEntry(knowledgeDir);
  cfg.mcpServers = servers;
- writeJson(cfgPath, cfg);
+ await writeJson(cfgPath, cfg);
+ }
+ var PK_SECTION_START = "<!-- pk:start -->";
+ var PK_SECTION_END = "<!-- pk:end -->";
+ var PK_INSTRUCTION = `## pk \u2014 project knowledge
+
+ Use the pk skill and its MCP tools (pk_synthesize, pk_search, pk_read, pk_new, pk_lint) to manage project knowledge for this project. Run pk_synthesize({ sessionStart: true }) at the start of every session.`;
+ async function writeInstructionSection(filePath, content) {
+ const section = `${PK_SECTION_START}
+ ${content}
+ ${PK_SECTION_END}
+ `;
+ let existing = "";
+ try {
+ existing = await Bun.file(filePath).text();
+ } catch {}
+ const startIdx = existing.indexOf(PK_SECTION_START);
+ const endIdx = existing.indexOf(PK_SECTION_END);
+ let updated;
+ if (startIdx !== -1 && endIdx !== -1) {
+ updated = existing.slice(0, startIdx) + section + existing.slice(endIdx + PK_SECTION_END.length + 1);
+ } else {
+ updated = existing ? existing.trimEnd() + `
+
+ ` + section : section;
+ }
+ mkdirSync4(path7.dirname(filePath), { recursive: true });
+ await Bun.write(filePath, updated);
+ }
+ async function writeClaudeMd(projectRoot) {
+ await writeInstructionSection(path7.join(projectRoot, "CLAUDE.md"), PK_INSTRUCTION);
+ }
+ async function writeAgentsMd(projectRoot) {
+ await writeInstructionSection(path7.join(projectRoot, "AGENTS.md"), PK_INSTRUCTION);
+ }
+ var FORCED_EVAL_PROMPT = "SKILL EVALUATION REQUIRED: Before responding, explicitly decide YES or NO" + " \u2014 does this request involve project work, investigation, decisions, or knowledge capture" + " where the pk skill applies? If YES, activate the pk skill, then proceed.";
+ function claudeHookScript() {
+ return `// pk forced-eval hook \u2014 auto-generated by pk init
+ async function handleUserPromptSubmit() {
+ console.log(JSON.stringify({
+ hookSpecificOutput: {
+ hookEventName: 'UserPromptSubmit',
+ additionalContext: ${JSON.stringify(FORCED_EVAL_PROMPT)},
+ },
+ suppressOutput: true,
+ }));
+ }
+
+ handleUserPromptSubmit().catch(() => process.exit(0));
+ `;
+ }
+ async function writeClaudeHook(projectRoot) {
+ const hookDir = path7.join(projectRoot, ".claude", "hooks");
+ const hookPath = path7.join(hookDir, "pk-eval.ts");
+ mkdirSync4(hookDir, { recursive: true });
+ await Bun.write(hookPath, claudeHookScript());
+ const settingsPath = path7.join(projectRoot, ".claude", "settings.json");
+ const settings = await readJson(settingsPath);
+ const hooks = settings.hooks ?? {};
+ const ups = hooks.UserPromptSubmit ?? [];
+ const hookCmd = `bun run ${hookPath}`;
+ if (!ups.some((h2) => h2.command === hookCmd)) {
+ ups.push({ command: hookCmd });
+ }
+ hooks.UserPromptSubmit = ups;
+ settings.hooks = hooks;
+ await writeJson(settingsPath, settings);
+ }
+ function ompHookScript() {
+ return `// pk forced-eval hook \u2014 auto-generated by pk init
+ import type {HookAPI} from '@oh-my-pi/pi-coding-agent/extensibility/hooks';
+
+ export default function (pi: HookAPI) {
+ pi.on('before_agent_start', (event: {systemPrompt?: string}) => ({
+ systemPrompt: ${JSON.stringify(FORCED_EVAL_PROMPT)} + '\\n\\n' + (event.systemPrompt ?? ''),
+ }));
+ }
+ `;
+ }
+ async function writeOmpHook(projectRoot) {
+ const hookPath = path7.join(projectRoot, ".omp", "extensions", "pk-eval.ts");
+ mkdirSync4(path7.dirname(hookPath), { recursive: true });
+ await Bun.write(hookPath, ompHookScript());
  }
- function writeCursorConfig(projectRoot, _name, knowledgeDir) {
+ async function writeCursorConfig(projectRoot, _name, knowledgeDir) {
  const cfgPath = path7.join(projectRoot, ".cursor", "mcp.json");
- const cfg = readJson(cfgPath);
+ const cfg = await readJson(cfgPath);
  const servers = cfg.mcpServers ?? {};
  servers.pk = pkMcpEntry(knowledgeDir);
  cfg.mcpServers = servers;
- writeJson(cfgPath, cfg);
+ await writeJson(cfgPath, cfg);
  }
- function writeOmpConfig(projectRoot, _name, knowledgeDir) {
- const cfgPath = path7.join(projectRoot, ".omp", "mcp.json");
- const cfg = readJson(cfgPath);
+ async function writeCursorRules(projectRoot) {
+ const rulesPath = path7.join(projectRoot, ".cursor", "rules", "pk.mdc");
+ const content = `---
+ description: "pk knowledge base integration"
+ alwaysApply: true
+ ---
+
+ ${PK_INSTRUCTION}
+ `;
+ mkdirSync4(path7.dirname(rulesPath), { recursive: true });
+ await Bun.write(rulesPath, content);
+ }
+ function cursorHookScript() {
+ return `#!/bin/bash
+ # pk forced-eval hook \u2014 auto-generated by pk init
+ # Exit 0 to allow, exit 2 to block
+
+ # Return the forced-eval prompt as additional context
+ cat <<EOF
+ {
+ "continue": true,
+ "additionalContext": ${JSON.stringify(FORCED_EVAL_PROMPT)}
+ }
+ EOF
+ `;
+ }
+ async function writeCursorHook(projectRoot) {
+ const hookPath = path7.join(projectRoot, ".cursor", "hooks", "pk-eval.sh");
+ mkdirSync4(path7.dirname(hookPath), { recursive: true });
+ await Bun.write(hookPath, cursorHookScript());
+ const hooksPath = path7.join(projectRoot, ".cursor", "hooks.json");
+ const hooks = await readJson(hooksPath);
+ const hooksObj = hooks.hooks ?? {};
+ const bsp = hooksObj.beforeSubmitPrompt ?? [];
+ if (!bsp.some((h2) => h2.command === hookPath)) {
+ bsp.push({ command: hookPath });
+ }
+ hooksObj.beforeSubmitPrompt = bsp;
+ hooks.hooks = hooksObj;
+ await writeJson(hooksPath, hooks);
+ }
+ async function writeGeminiConfig(projectRoot, _name, knowledgeDir) {
+ const cfgPath = path7.join(projectRoot, ".gemini", "settings.json");
+ const cfg = await readJson(cfgPath);
  const servers = cfg.mcpServers ?? {};
  servers.pk = pkMcpEntry(knowledgeDir);
  cfg.mcpServers = servers;
- writeJson(cfgPath, cfg);
+ await writeJson(cfgPath, cfg);
+ }
+ async function writeGeminiMd(projectRoot) {
+ await writeInstructionSection(path7.join(projectRoot, "GEMINI.md"), PK_INSTRUCTION);
  }
- function writeOpenCodeConfig(projectRoot, _name, knowledgeDir) {
+ function geminiHookScript() {
+ return `#!/bin/bash
+ # pk forced-eval hook \u2014 auto-generated by pk init
+ # Output JSON to stdout with the forced-eval prompt
+
+ cat <<EOF
+ {
+ "hookSpecificOutput": {
+ "hookEventName": "BeforeAgent",
+ "additionalContext": ${JSON.stringify(FORCED_EVAL_PROMPT)}
+ }
+ }
+ EOF
+ `;
+ }
+ async function writeGeminiHook(projectRoot) {
+ const hookPath = path7.join(projectRoot, ".gemini", "hooks", "pk-eval.sh");
+ mkdirSync4(path7.dirname(hookPath), { recursive: true });
+ await Bun.write(hookPath, geminiHookScript());
+ const cfgPath = path7.join(projectRoot, ".gemini", "settings.json");
+ const cfg = await readJson(cfgPath);
+ const hooks = cfg.hooks ?? {};
+ const beforeAgent = hooks.BeforeAgent ?? [];
+ if (!beforeAgent.some((h2) => h2.command === hookPath)) {
+ beforeAgent.push({ command: hookPath });
+ }
+ hooks.BeforeAgent = beforeAgent;
+ cfg.hooks = hooks;
+ await writeJson(cfgPath, cfg);
+ }
+ async function writeToml(filePath, content) {
+ mkdirSync4(path7.dirname(filePath), { recursive: true });
+ await Bun.write(filePath, content);
+ }
+ async function writeCodexConfig(projectRoot, _name, knowledgeDir) {
+ const cfgPath = path7.join(projectRoot, ".codex", "config.toml");
+ const pkCmd = resolvePkCommand();
+ const toml = `[mcp_servers.pk]
+ command = "${pkCmd}"
+ args = ["mcp"]
+ env = { PK_KNOWLEDGE_DIR = "${knowledgeDir}" }
+ `;
+ await writeToml(cfgPath, toml);
+ }
+ function codexHookScript() {
+ return `#!/bin/bash
+ # pk forced-eval hook \u2014 auto-generated by pk init
+ # Output JSON to stdout
+
+ cat <<EOF
+ {
+ "continue": true,
+ "suppressOutput": false
+ }
+ EOF
+ `;
+ }
+ async function writeCodexHook(projectRoot) {
+ const hookPath = path7.join(projectRoot, ".codex", "hooks", "pk-eval.sh");
+ mkdirSync4(path7.dirname(hookPath), { recursive: true });
+ await Bun.write(hookPath, codexHookScript());
+ const hooksPath = path7.join(projectRoot, ".codex", "hooks.json");
+ const hooks = await readJson(hooksPath);
+ const hooksObj = hooks.hooks ?? {};
+ const ups = hooksObj.UserPromptSubmit ?? [];
+ if (!ups.some((h2) => typeof h2 === "object" && h2 !== null && ("command" in h2) && typeof h2.command === "string" && h2.command.includes("pk-eval"))) {
+ ups.push({ command: hookPath });
+ }
+ hooksObj.UserPromptSubmit = ups;
+ hooks.hooks = hooksObj;
+ await writeJson(hooksPath, hooks);
+ }
+ async function writeOpenCodeConfig(projectRoot, _name, knowledgeDir) {
  const cfgPath = path7.join(projectRoot, "opencode.json");
- const cfg = readJson(cfgPath);
+ const cfg = await readJson(cfgPath);
  const mcp = cfg.mcp ?? {};
- mcp.pk = pkMcpEntry(knowledgeDir);
+ const pkCmd = resolvePkCommand();
+ mcp.pk = {
+ type: "local",
+ enabled: true,
+ command: [pkCmd, "mcp"],
+ environment: { PK_KNOWLEDGE_DIR: knowledgeDir }
+ };
  cfg.mcp = mcp;
- writeJson(cfgPath, cfg);
+ await writeJson(cfgPath, cfg);
+ }
+ function openCodePluginScript() {
+ return `// pk forced-eval plugin \u2014 auto-generated by pk init
+ export const experimental = {
+ async 'chat.system.transform'({ system }: { system: string[] }): Promise<void> {
+ system.unshift(${JSON.stringify(FORCED_EVAL_PROMPT)});
+ },
+ };
+ `;
  }
- function writeCodexConfig(projectRoot, _name, knowledgeDir) {
- const cfgPath = path7.join(projectRoot, ".codex", "config.toml");
- const toml = [
- "[mcp_servers.pk]",
- `command = "${resolvePkCommand()}"`,
- 'args = ["mcp"]',
- "",
- "[mcp_servers.pk.env]",
- `PK_KNOWLEDGE_DIR = "${knowledgeDir}"`,
- ""
- ].join(`
- `);
- if (existsSync4(cfgPath)) {
- const existing = readFileSync2(cfgPath, "utf8");
- if (existing.includes("[mcp_servers.pk]")) {
- return;
- }
- mkdirSync4(path7.dirname(cfgPath), { recursive: true });
- writeFileSync2(cfgPath, existing.trimEnd() + `
-
- ` + toml);
- } else {
- mkdirSync4(path7.dirname(cfgPath), { recursive: true });
- writeFileSync2(cfgPath, toml);
- }
+ async function writeOpenCodePlugin(projectRoot) {
+ const pluginPath = path7.join(projectRoot, ".opencode", "plugins", "pk-eval.ts");
+ mkdirSync4(path7.dirname(pluginPath), { recursive: true });
+ await Bun.write(pluginPath, openCodePluginScript());
  }
  function skillTargetDir(harness, projectRoot) {
  switch (harness) {
- case "claude":
- case "claude-desktop": {
+ case "claude": {
  return path7.join(os3.homedir(), ".claude", "skills", "pk");
  }
- case "omp": {
+ case "omp":
+ case "cursor":
+ case "gemini":
+ case "opencode": {
  return path7.join(projectRoot, ".agents", "skills", "pk");
  }
- case "cursor": {
- return path7.join(projectRoot, ".cursor", "skills", "pk");
- }
- case "opencode":
  case "codex": {
- return "";
+ return path7.join(os3.homedir(), ".codex", "skills", "pk");
  }
  }
  }
@@ -34280,61 +34454,72 @@ function installSkill(harness, projectRoot) {
  return "";
  }
  const src = skillSourceDir();
- if (!existsSync4(src)) {
+ if (!existsSync3(src)) {
  return "";
  }
- if (existsSync4(target)) {
+ if (existsSync3(target)) {
  return target;
  }
  cpSync(src, target, { recursive: true });
  return target;
  }
- function ensureProject(name) {
+ async function ensureProject(name) {
  const kDir = projectDir(name);
- const alreadyExists = existsSync4(kDir);
+ const alreadyExists = existsSync3(kDir);
  for (const dir of Object.values(TYPE_DIRS)) {
  mkdirSync4(path7.join(kDir, dir), { recursive: true });
  }
  const gi = path7.join(kDir, ".gitignore");
- if (!existsSync4(gi)) {
- writeFileSync2(gi, `.index.db
+ if (!existsSync3(gi)) {
+ await Bun.write(gi, `.index.db
  `);
  }
  return { created: !alreadyExists, knowledgeDir: kDir };
  }
- function applyHarness(harness, ctx) {
- const { name, knowledgeDir, projectRoot, home } = ctx;
+ async function applyHarness(harness, ctx) {
+ const { name, knowledgeDir, projectRoot } = ctx;
  switch (harness) {
  case "claude": {
- writeClaudeConfig(projectRoot, name, knowledgeDir);
+ await writeClaudeConfig(projectRoot, name, knowledgeDir);
+ await writeClaudeMd(projectRoot);
+ await writeClaudeHook(projectRoot);
  break;
  }
- case "claude-desktop": {
- writeClaudeDesktopConfig(home, name, knowledgeDir);
+ case "omp": {
+ await writeOmpConfig(projectRoot, name, knowledgeDir);
+ await writeAgentsMd(projectRoot);
+ await writeOmpHook(projectRoot);
  break;
  }
- case "codex": {
- writeCodexConfig(projectRoot, name, knowledgeDir);
+ case "cursor": {
+ await writeCursorConfig(projectRoot, name, knowledgeDir);
+ await writeCursorRules(projectRoot);
+ await writeCursorHook(projectRoot);
  break;
  }
- case "cursor": {
- writeCursorConfig(projectRoot, name, knowledgeDir);
+ case "gemini": {
+ await writeGeminiConfig(projectRoot, name, knowledgeDir);
+ await writeGeminiMd(projectRoot);
+ await writeGeminiHook(projectRoot);
  break;
  }
- case "omp": {
- writeOmpConfig(projectRoot, name, knowledgeDir);
+ case "codex": {
+ await writeCodexConfig(projectRoot, name, knowledgeDir);
+ await writeAgentsMd(projectRoot);
+ await writeCodexHook(projectRoot);
  break;
  }
  case "opencode": {
- writeOpenCodeConfig(projectRoot, name, knowledgeDir);
+ await writeOpenCodeConfig(projectRoot, name, knowledgeDir);
+ await writeAgentsMd(projectRoot);
+ await writeClaudeMd(projectRoot);
+ await writeOpenCodePlugin(projectRoot);
  break;
  }
  }
  }
- function applyHarnesses(harnesses, ctx) {
- for (const h2 of harnesses) {
- applyHarness(h2, ctx);
- }
+ async function applyHarnesses(harnesses, ctx) {
+ await Promise.all(harnesses.map(async (h2) => applyHarness(h2, ctx)));
  const seen = new Set;
  const installed = [];
  for (const h2 of harnesses) {
@@ -34361,14 +34546,14 @@ function registerInit(program2) {
  flagHarnesses = result;
  }
  if (nameArg && flagHarnesses) {
- const { created: created2, knowledgeDir: knowledgeDir2 } = ensureProject(nameArg);
+ const { created: created2, knowledgeDir: knowledgeDir2 } = await ensureProject(nameArg);
  const ctx2 = {
  home,
  knowledgeDir: knowledgeDir2,
  name: nameArg,
  projectRoot
  };
- const skillPaths2 = applyHarnesses(flagHarnesses, ctx2);
+ const skillPaths2 = await applyHarnesses(flagHarnesses, ctx2);
  console.log(buildOutro(created2, knowledgeDir2, flagHarnesses, skillPaths2).join(`
  `));
  return;
@@ -34424,14 +34609,14 @@ function registerInit(program2) {
  }
  harnesses = picked;
  }
- const { created, knowledgeDir } = ensureProject(name);
+ const { created, knowledgeDir } = await ensureProject(name);
  const ctx = {
  home,
  knowledgeDir,
  name,
  projectRoot
  };
- const skillPaths = applyHarnesses(harnesses, ctx);
+ const skillPaths = await applyHarnesses(harnesses, ctx);
  ye(buildOutro(created, knowledgeDir, harnesses, skillPaths).join(`
  `));
  });
@@ -42919,6 +43104,7 @@ function createPkMcpServer() {
  const dir = requireKnowledgeDir();
  try {
  const outPath = await createNote(dir, type, title, tags ?? "");
+ await rebuild(dir);
  return { content: [{ type: "text", text: outPath }] };
  } catch (error51) {
  return {
@@ -42927,6 +43113,29 @@ function createPkMcpServer() {
  };
  }
  });
+ server.registerTool("pk_read", {
+ description: "Read the full content of a knowledge note by its path. Use paths returned by pk_search or pk_synthesize.",
+ inputSchema: {
+ path: exports_external.string().describe("Absolute path to the note file, as returned by pk_search or pk_synthesize")
+ }
+ }, async ({ path: notePath }) => {
+ const dir = requireKnowledgeDir();
+ if (!notePath.startsWith(dir)) {
+ return {
+ content: [{ type: "text", text: `Path must be inside the knowledge directory: ${dir}` }],
+ isError: true
+ };
+ }
+ try {
+ const text = await Bun.file(notePath).text();
+ return { content: [{ type: "text", text }] };
+ } catch {
+ return {
+ content: [{ type: "text", text: `File not found: ${notePath}` }],
+ isError: true
+ };
+ }
+ });
  server.registerTool("pk_lint", {
  description: "Validate knowledge note structure and frontmatter. Returns lint issues grouped by severity.",
  inputSchema: {}
@@ -42958,6 +43167,18 @@ function registerMcp(program2) {
  });
  }
 
+ // src/commands/prime.ts
+ import path8 from "path";
+ function skillPath() {
+ return path8.resolve(import.meta.dir, "..", "skill", "SKILL.md");
+ }
+ function registerPrime(program2) {
+ program2.command("prime").description("Print the pk skill to stdout \u2014 used by harness adapters to inject into system prompt at session start").action(async () => {
+ const text = await Bun.file(skillPath()).text();
+ process.stdout.write(text.replace(/^---[\s\S]*?---\n/v, ""));
+ });
+ }
+
  // src/index.ts
  var program2 = new Command().name("pk").description("Project knowledge \u2014 structured intake, search, and recall").version(package_default.version);
  registerNew(program2);
@@ -42970,4 +43191,5 @@ registerInit(program2);
  registerInstructions(program2);
  registerVocab(program2);
  registerMcp(program2);
+ registerPrime(program2);
  program2.parse();
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "name": "@justestif/pk",
  "type": "module",
- "version": "0.1.13",
+ "version": "0.2.0",
  "description": "Project knowledge — structured intake, search, and recall",
  "bin": {
  "pk": "dist/index.js"
package/skill/SKILL.md CHANGED
@@ -5,43 +5,87 @@ description: "Load when maintaining project knowledge, capturing decisions or qu
 
  # pk
 
- Structured project knowledge — intake, search, recall, and audit over `knowledge/`.
+ Structured project knowledge — intake, search, recall, and audit.
+
+ **Search, synthesis, creation, and validation go through MCP tools. Writing note body content uses your standard file Edit tool — but only on paths returned by `pk_new` or `pk_search`, never by navigating the filesystem yourself.**
 
  ## Tools
 
- | Task | Tool |
- |---|---|
- | Search notes | `pk_search` |
- | Context dump / session start | `pk_synthesize` |
- | Create a note | `pk_new` |
- | Validate structure | `pk_lint` |
+ ### `pk_synthesize` — orient before any investigation
+
+ ```
+ pk_synthesize({ sessionStart: true }) # open questions + accepted decisions + active notes
+ pk_synthesize({ query: "auth flow" }) # ranked context for a topic
+ pk_synthesize({ query: "auth", type: "decision", limit: 5 })
+ ```
+
+ Returns formatted markdown with title, type, status, tags, and an excerpt per note. Use `sessionStart: true` at the start of every session.
+
+ ### `pk_search` — locate notes by content
+
+ ```
+ pk_search({ query: "database schema" })
+ pk_search({ query: "api", type: "question", status: "open" })
+ pk_search({ query: "deploy", tag: "infra", limit: 5 })
+ ```
+
+ Returns `[{ path, type, status, title, tags, snippet }]`. Always call before `pk_new` — duplicates erode trust faster than gaps do.
+
+ ### `pk_read` — full note body
+
+ ```
+ pk_read({ path: "/abs/path/returned/by/pk_search" })
+ ```
+
+ Returns complete file contents including frontmatter. Use paths from `pk_search` or `pk_synthesize` output.
+
+ ### `pk_new` — create a typed note skeleton
+
+ ```
+ pk_new({ type: "note", title: "Auth token expiry behaviour", tags: "auth,security" })
+ pk_new({ type: "decision", title: "Use JWT over sessions" })
+ pk_new({ type: "question", title: "Should we rate-limit the search endpoint?" })
+ pk_new({ type: "source", title: "Meeting notes 2024-06-01" })
+ ```
+
+ Returns the absolute path. Frontmatter (id, dates, status, tags as YAML array) is generated automatically from your inputs — you don't edit frontmatter after creation. After receiving the path: call `pk_read` to see the skeleton, then use your standard file Edit tool to fill in the required sections.
+
+ **Required sections by type:**
+ - `note` → `## Summary`, `## Details`, `## Evidence`, `## Related`
+ - `decision` → `## Decision`, `## Context`, `## Rationale`, `## Consequences`, `## Related`
+ - `question` → `## Question`, `## Why It Matters`, `## Current Understanding`, `## Resolution`
+ - `source` → `## Source`, `## Raw Material`, `## Extracted Items`
+
+ **`source` vs `note`:** `source` = raw/provenance-heavy input (meeting notes, transcripts, external docs, unprocessed data). `note` = stable synthesised fact or constraint you've derived. When synthesising across multiple sources into one insight: create a `note` and put source paths in `## Evidence`.
+
+ ### `pk_lint` — validate before committing
 
- ## Intake
+ ```
+ pk_lint({})
+ ```
 
- **Search before creating** — always call `pk_search` first.
+ **Errors block commits** (missing frontmatter, duplicate id, wrong folder, missing required sections, broken links). **Warnings are advisory** (empty tags, note too long, source marked processed with no extracted items); fix when practical, not required to commit.
 
- - Substantial messy input → `source`. Extract `note`, `decision`, `question` only when durable beyond this session.
- - Update existing when the match is obvious; otherwise create and link in body.
- - Call `pk_lint` before committing. Auto-commit coherent operations only when lint passes and no unrelated files are staged.
+ ### Status transitions
 
- ## Asking
+ No MCP tool for status changes. Use your file Edit tool directly on the frontmatter, fill in the resolution section, then lint.
 
- 1. `pk_search` with the relevant query
- 2. Read top results directly
- 3. Answer with citations to note paths/IDs
- 4. If silent or ambiguous, offer to create a `question` note
+ **MANDATORY: read `references/knowledge-model.md`** when creating a note type you haven't used before, unsure which folder a type belongs in, validating frontmatter fields, or unsure which status values are valid for a given type. (Read with your standard file Read tool — these are local skill files, not MCP-accessible.)
 
  ## NEVER
 
- - **Skip `pk_search` before creating** — duplicates erode trust in the knowledge base
- - **Dump raw input into durable notes** preserve in `source`, extract selectively
- - **Silently merge related-but-different claims** create and link instead
- - **Auto-commit when lint fails or unrelated files are staged**
+ - **NEVER skip `pk_search` before `pk_new`**
+ **Why:** Duplicates silently fragment knowledge — two notes on the same topic never get reconciled, and future searches return noise.
+ **Instead:** Search first; update the existing note if found, or create and link if genuinely different.
 
- ## References
+ - **NEVER dump raw input into a `note` or `decision`**
+ **Why:** Durable note types are for stable, verified claims. Raw input contains noise, ambiguity, and provenance that decays poorly.
+ **Instead:** Create a `source` note, then extract `note`/`decision`/`question` entries from it selectively.
 
- Load only when the task requires it:
+ - **NEVER silently overwrite a conflicting claim**
+ **Why:** Silent overwrites destroy the rationale trail — you lose why the old claim existed.
+ **Instead:** Create a new note explaining the conflict, link both, and use `status: superseded` on the old one.
 
- - `references/knowledge-model.md` types, folders, frontmatter schema, required sections
- - `references/git-workflow.md` commit policy, safety stops
- - `references/source-principles.md` documentation governance
+ - **NEVER commit when `pk_lint` returns errors or unrelated files are staged**
+ **Why:** Lint errors mean required structure is broken; mixed commits make knowledge changes unauditable.
+ **Instead:** Fix errors, unstage unrelated files, then commit.
@@ -4,17 +4,17 @@
 
  This system is project-specific. It is not a personal second brain, a full wiki platform, or a semantic memory database.
 
- The canonical store is plain markdown under `knowledge/`. Beans/issues remain for trackable work; the knowledge base is for durable context and future answers.
+ The canonical store is plain markdown files managed via `pk` MCP tools. The root directory is set by `PK_KNOWLEDGE_DIR`. The knowledge base is for durable context and future answers — not for task tracking.
 
  ## Folder and Type Rules
 
- | Type | Folder | Purpose | Status values |
+ | Type | Subfolder | Purpose | Status values |
  | --- | --- | --- | --- |
- | `source` | `knowledge/sources/` | Provenance and raw/lightly cleaned input | `unprocessed`, `processed`, `archived` |
- | `note` | `knowledge/notes/` | Durable project knowledge | `active`, `superseded`, `archived` |
- | `decision` | `knowledge/decisions/` | Chosen direction and rationale | `proposed`, `accepted`, `superseded` |
- | `question` | `knowledge/questions/` | Unresolved or resolved uncertainty | `open`, `answered`, `obsolete` |
- | `index` | `knowledge/indexes/` | Navigation/MOC pages | `active`, `archived` |
+ | `source` | `sources/` | Provenance and raw/lightly cleaned input | `unprocessed`, `processed`, `archived` |
+ | `note` | `notes/` | Durable project knowledge | `active`, `superseded`, `archived` |
+ | `decision` | `decisions/` | Chosen direction and rationale | `proposed`, `accepted`, `superseded` |
+ | `question` | `questions/` | Unresolved or resolved uncertainty | `open`, `answered`, `obsolete` |
+ | `index` | `indexes/` | Navigation/MOC pages | `active`, `archived` |
 
  ## Frontmatter
 
@@ -34,7 +34,7 @@ tags: [tag-one, tag-two]
 
  Rules:
 
- - `id` must be unique across `knowledge/**/*.md`.
+ - `id` must be unique across all notes in the knowledge directory.
  - `type` must match both status set and folder.
  - `tags` must be a flat list of lowercase slugs.
  - Do not use nested YAML, multiline YAML, or relationship arrays.
@@ -85,7 +85,7 @@ Classify extracted material this way:
  - Chosen path with rationale/consequences → `decision`
  - Unknown that blocks or informs work → `question`
  - Navigation over a topic/type/tag → `index`
- - Action item/task → issue tracker, not a knowledge note
+ - Action item/task → not a knowledge note; track elsewhere
  - Low-signal commentary → ignore or keep only in `source`
 
  ## Update Policy
@@ -1,29 +0,0 @@
- ---
- id: decision-{{date}}-{{slug}}
- type: decision
- title: {{title}}
- created: {{date}}
- updated: {{date}}
- status: accepted
- tags: [{{tags}}]
- ---
-
- ## Decision
-
- What was decided.
-
- ## Context
-
- Why this decision came up.
-
- ## Rationale
-
- Why this option won.
-
- ## Consequences
-
- What this changes or constrains.
-
- ## Related
-
- Links to source, questions, notes, or superseded decisions.
@@ -1,25 +0,0 @@
- ---
- id: index-{{slug}}
- type: index
- title: {{title}}
- created: {{date}}
- updated: {{date}}
- status: active
- tags: [{{tags}}]
- ---
-
- ## Purpose
-
- What this index helps navigate.
-
- ## Key Links
-
- Curated links.
-
- ## Open Questions
-
- Relevant unresolved questions.
-
- ## Recent Changes
-
- Notable updates.
@@ -1,25 +0,0 @@
- ---
- id: note-{{date}}-{{slug}}
- type: note
- title: {{title}}
- created: {{date}}
- updated: {{date}}
- status: active
- tags: [{{tags}}]
- ---
-
- ## Summary
-
- One short paragraph.
-
- ## Details
-
- Durable project knowledge.
-
- ## Evidence
-
- Source links, files, quotes, or observations.
-
- ## Related
-
- Links to related notes, decisions, or questions.
@@ -1,25 +0,0 @@
- ---
- id: question-{{date}}-{{slug}}
- type: question
- title: {{title}}
- created: {{date}}
- updated: {{date}}
- status: open
- tags: [{{tags}}]
- ---
-
- ## Question
-
- The unresolved question.
-
- ## Why It Matters
-
- What decision or work this blocks or informs.
-
- ## Current Understanding
-
- Known facts and candidate answers.
-
- ## Resolution
-
- Answer once resolved.
@@ -1,21 +0,0 @@
- ---
- id: source-{{date}}-{{slug}}
- type: source
- title: {{title}}
- created: {{date}}
- updated: {{date}}
- status: unprocessed
- tags: [{{tags}}]
- ---
-
- ## Source
-
- Where this came from.
-
- ## Raw Material
-
- Original or lightly cleaned content.
-
- ## Extracted Items
-
- Links to notes, decisions, or questions created from this source.
@@ -1,77 +0,0 @@
- # Git Workflow for Project Knowledge
-
- Git is the audit, safety, and review layer for `knowledge/`.
-
- ## Before Modifying Knowledge
-
- Check repository state:
-
- ```bash
- git status --short
- ```
-
- Stop before editing if unrelated user changes would be mixed into the same commit. If changes are clearly knowledge-system work in scope, continue.
-
- ## After Modifying Knowledge
-
- Run mechanical maintenance:
-
- ```bash
- scripts/knowledge-index
- scripts/knowledge-lint
- ```
-
- Review what changed:
-
- ```bash
- git diff -- knowledge scripts/knowledge-* assets/templates hk.pkl
- ```
-
- Summarize:
-
- - files created
- - files updated
- - generated indexes
- - lint result
- - any ignored/deferred material
-
- ## Auto-Commit Policy
-
- Auto-commit coherent completed knowledge operations unless a safety stop applies.
-
- Normal intake commits stage only:
-
- ```bash
- git add knowledge/
- ```
-
- Knowledge-system implementation commits may stage:
-
- ```bash
- git add knowledge/ scripts/knowledge-* assets/templates/ hk.pkl
- ```
-
- Use concise commit messages:
-
- ```txt
- knowledge: intake <topic>
- knowledge: update <topic>
- knowledge: answer <topic>
- knowledge-system: update <topic>
- ```
-
- ## Safety Stops
-
- Do not auto-commit when:
-
- - unrelated project files are modified
- - lint fails
- - the edit deletes or rewrites existing knowledge ambiguously
- - the working tree contains user changes outside the allowed commit boundary
- - the user says not to commit
-
- When stopped, report the exact reason and show the staged/unstaged state.
-
- ## Hooks
-
- If hk is used, run `knowledge-lint` on pre-commit only when `knowledge/**/*.md` changes. Avoid every-N-commit scheduling; it is arbitrary and less transparent than change-triggered checks.