cortex-engine 0.4.1 → 0.5.0

Files changed (89)
  1. package/README.md +63 -57
  2. package/dist/bin/anomalies-cmd.d.ts +17 -0
  3. package/dist/bin/anomalies-cmd.d.ts.map +1 -0
  4. package/dist/bin/anomalies-cmd.js +417 -0
  5. package/dist/bin/anomalies-cmd.js.map +1 -0
  6. package/dist/bin/cli.d.ts +5 -0
  7. package/dist/bin/cli.d.ts.map +1 -1
  8. package/dist/bin/cli.js +84 -0
  9. package/dist/bin/cli.js.map +1 -1
  10. package/dist/bin/health-cmd.d.ts +15 -0
  11. package/dist/bin/health-cmd.d.ts.map +1 -0
  12. package/dist/bin/health-cmd.js +273 -0
  13. package/dist/bin/health-cmd.js.map +1 -0
  14. package/dist/bin/init.d.ts.map +1 -1
  15. package/dist/bin/init.js +169 -12
  16. package/dist/bin/init.js.map +1 -1
  17. package/dist/bin/maintain-cmd.d.ts +17 -0
  18. package/dist/bin/maintain-cmd.d.ts.map +1 -0
  19. package/dist/bin/maintain-cmd.js +352 -0
  20. package/dist/bin/maintain-cmd.js.map +1 -0
  21. package/dist/bin/report-cmd.d.ts +17 -0
  22. package/dist/bin/report-cmd.d.ts.map +1 -0
  23. package/dist/bin/report-cmd.js +309 -0
  24. package/dist/bin/report-cmd.js.map +1 -0
  25. package/dist/bin/store-factory.d.ts +21 -0
  26. package/dist/bin/store-factory.d.ts.map +1 -0
  27. package/dist/bin/store-factory.js +64 -0
  28. package/dist/bin/store-factory.js.map +1 -0
  29. package/dist/bin/vitals-cmd.d.ts +16 -0
  30. package/dist/bin/vitals-cmd.d.ts.map +1 -0
  31. package/dist/bin/vitals-cmd.js +425 -0
  32. package/dist/bin/vitals-cmd.js.map +1 -0
  33. package/dist/core/types.d.ts +6 -0
  34. package/dist/core/types.d.ts.map +1 -1
  35. package/dist/engines/cognition.d.ts.map +1 -1
  36. package/dist/engines/cognition.js +39 -3
  37. package/dist/engines/cognition.js.map +1 -1
  38. package/dist/index.d.ts +8 -0
  39. package/dist/index.d.ts.map +1 -1
  40. package/dist/index.js +9 -0
  41. package/dist/index.js.map +1 -1
  42. package/dist/mcp/tools.d.ts +1 -1
  43. package/dist/mcp/tools.d.ts.map +1 -1
  44. package/dist/mcp/tools.js +22 -2
  45. package/dist/mcp/tools.js.map +1 -1
  46. package/dist/tools/evolution-list.d.ts +6 -0
  47. package/dist/tools/evolution-list.d.ts.map +1 -0
  48. package/dist/tools/evolution-list.js +52 -0
  49. package/dist/tools/evolution-list.js.map +1 -0
  50. package/dist/tools/evolve.d.ts +6 -0
  51. package/dist/tools/evolve.d.ts.map +1 -0
  52. package/dist/tools/evolve.js +63 -0
  53. package/dist/tools/evolve.js.map +1 -0
  54. package/dist/tools/journal-read.d.ts +6 -0
  55. package/dist/tools/journal-read.d.ts.map +1 -0
  56. package/dist/tools/journal-read.js +59 -0
  57. package/dist/tools/journal-read.js.map +1 -0
  58. package/dist/tools/journal-write.d.ts +6 -0
  59. package/dist/tools/journal-write.d.ts.map +1 -0
  60. package/dist/tools/journal-write.js +71 -0
  61. package/dist/tools/journal-write.js.map +1 -0
  62. package/dist/tools/thread-create.d.ts +6 -0
  63. package/dist/tools/thread-create.d.ts.map +1 -0
  64. package/dist/tools/thread-create.js +57 -0
  65. package/dist/tools/thread-create.js.map +1 -0
  66. package/dist/tools/thread-resolve.d.ts +6 -0
  67. package/dist/tools/thread-resolve.d.ts.map +1 -0
  68. package/dist/tools/thread-resolve.js +42 -0
  69. package/dist/tools/thread-resolve.js.map +1 -0
  70. package/dist/tools/thread-update.d.ts +6 -0
  71. package/dist/tools/thread-update.d.ts.map +1 -0
  72. package/dist/tools/thread-update.js +91 -0
  73. package/dist/tools/thread-update.js.map +1 -0
  74. package/dist/tools/threads-list.d.ts +6 -0
  75. package/dist/tools/threads-list.d.ts.map +1 -0
  76. package/dist/tools/threads-list.js +70 -0
  77. package/dist/tools/threads-list.js.map +1 -0
  78. package/fozikio.json +14 -0
  79. package/hooks/cognitive-grounding.sh +26 -0
  80. package/hooks/cortex-telemetry.sh +53 -0
  81. package/hooks/observe-first.sh +25 -0
  82. package/hooks/project-board-gate.sh +195 -0
  83. package/hooks/session-lifecycle.sh +21 -0
  84. package/package.json +119 -107
  85. package/reflex-rules/cognitive-grounding.yaml +17 -0
  86. package/reflex-rules/note-about-doing.yaml +17 -0
  87. package/reflex-rules/observe-first.yaml +17 -0
  88. package/skills/cortex-query/SKILL.md +86 -0
  89. package/skills/cortex-review/SKILL.md +67 -0
package/hooks/project-board-gate.sh ADDED
@@ -0,0 +1,195 @@
+ #!/usr/bin/env bash
+ # ============================================================================
+ # project-board-gate.sh — PreToolUse hook (Bash + MCP tools)
+ # ============================================================================
+ # Production gate for tracked repos. Configurable requirements before push:
+ # - Board update (gh issue/project commands)
+ # - Ops logging (ops_append via cortex MCP)
+ #
+ # Config: .claude/state/project-boards.json
+ # State: .claude/state/push-gate-state.txt (session-scoped)
+ #
+ # The hook does two things depending on what's happening:
+ # 1. On board/ops actions → records them in state file
+ # 2. On git push → checks state file, blocks if requirements unmet
+ # ============================================================================
+
+ set -euo pipefail
+
+ PROJECT_DIR="${CLAUDE_PROJECT_DIR:-.}"
+ CONFIG_FILE="$PROJECT_DIR/.claude/state/project-boards.json"
+ STATE_FILE="$PROJECT_DIR/.claude/state/push-gate-state.txt"
+
+ # No config = no gate
+ if [[ ! -f "$CONFIG_FILE" ]]; then
+ echo '{}'
+ exit 0
+ fi
+
+ # Read config
+ CONFIG=$(python3 -c "
+ import json, sys
+ c = json.load(open('$CONFIG_FILE'))
+ print(json.dumps({
+ 'enabled': c.get('enabled', True),
+ 'strength': c.get('strength', 'block'),
+ 'require_board': c.get('on_push', {}).get('require_board_update', True),
+ 'require_ops': c.get('on_push', {}).get('require_ops_log', False),
+ 'ops_message': c.get('on_push', {}).get('require_ops_log_message', ''),
+ 'repos': {k: v for k, v in c.get('repos', {}).items()}
+ }))
+ " 2>/dev/null || echo '{"enabled":true,"strength":"block","require_board":true,"require_ops":false,"repos":{}}')
+
+ ENABLED=$(echo "$CONFIG" | python3 -c "import json,sys; print(json.load(sys.stdin)['enabled'])" 2>/dev/null || echo "True")
+ STRENGTH=$(echo "$CONFIG" | python3 -c "import json,sys; print(json.load(sys.stdin)['strength'])" 2>/dev/null || echo "block")
+ REQUIRE_BOARD=$(echo "$CONFIG" | python3 -c "import json,sys; print(json.load(sys.stdin)['require_board'])" 2>/dev/null || echo "True")
+ REQUIRE_OPS=$(echo "$CONFIG" | python3 -c "import json,sys; print(json.load(sys.stdin)['require_ops'])" 2>/dev/null || echo "False")
+ OPS_MESSAGE=$(echo "$CONFIG" | python3 -c "import json,sys; print(json.load(sys.stdin)['ops_message'])" 2>/dev/null || echo "")
+
+ if [[ "$ENABLED" == "False" || "$STRENGTH" == "off" ]]; then
+ echo '{}'
+ exit 0
+ fi
+
+ # Read input
+ INPUT=$(cat)
+
+ # Detect tool context
+ TOOL_NAME=$(echo "$INPUT" | python3 -c "import json,sys; print(json.load(sys.stdin).get('tool_name',''))" 2>/dev/null || echo "")
+
+ # ═══════════════════════════════════════════════════════════════════════════
+ # TRACK: MCP ops_append calls → mark ops logged
+ # ═══════════════════════════════════════════════════════════════════════════
+ if [[ "$TOOL_NAME" == "mcp__cortex__ops_append" ]]; then
+ mkdir -p "$(dirname "$STATE_FILE")"
+ echo "ops_logged" >> "$STATE_FILE" 2>/dev/null
+ echo '{}'
+ exit 0
+ fi
+
+ # Only process Bash commands from here
+ if [[ "$TOOL_NAME" != "Bash" && -n "$TOOL_NAME" ]]; then
+ echo '{}'
+ exit 0
+ fi
+
+ COMMAND=$(echo "$INPUT" | python3 -c "import json,sys; print(json.load(sys.stdin).get('tool_input',{}).get('command',''))" 2>/dev/null || echo "")
+
+ # ═══════════════════════════════════════════════════════════════════════════
+ # TRACK: gh board/issue commands → mark board updated for detected repo
+ # ═══════════════════════════════════════════════════════════════════════════
+ if echo "$COMMAND" | grep -qE 'gh\s+(project\s+item|issue\s+(create|comment|close|edit)|pr\s+create)'; then
+ REPOS=$(echo "$CONFIG" | python3 -c "import json,sys; [print(r) for r in json.load(sys.stdin)['repos']]" 2>/dev/null || echo "")
+
+ mkdir -p "$(dirname "$STATE_FILE")"
+ for repo in $REPOS; do
+ if echo "$COMMAND" | grep -qi "$repo"; then
+ echo "board:$repo" >> "$STATE_FILE" 2>/dev/null
+ fi
+ done
+
+ # Fallback: cwd-based detection
+ CWD_REPO=$(basename "$(pwd)" 2>/dev/null || echo "")
+ if echo "$CONFIG" | python3 -c "import json,sys; c=json.load(sys.stdin); exit(0 if '$CWD_REPO' in c['repos'] else 1)" 2>/dev/null; then
+ echo "board:$CWD_REPO" >> "$STATE_FILE" 2>/dev/null
+ fi
+
+ echo '{}'
+ exit 0
+ fi
+
+ # ═══════════════════════════════════════════════════════════════════════════
+ # GATE: git push → check requirements
+ # ═══════════════════════════════════════════════════════════════════════════
+ IS_GIT_PUSH=false
+ if echo "$COMMAND" | grep -qE 'git\s+push'; then
+ IS_GIT_PUSH=true
+ fi
+
+ if [[ "$IS_GIT_PUSH" == "false" ]]; then
+ echo '{}'
+ exit 0
+ fi
+
+ # Detect which tracked repo
+ DETECTED_REPO=""
+ REPOS=$(echo "$CONFIG" | python3 -c "import json,sys; [print(r) for r in json.load(sys.stdin)['repos']]" 2>/dev/null || echo "")
+
+ for repo in $REPOS; do
+ if echo "$COMMAND" | grep -qi "$repo"; then
+ DETECTED_REPO="$repo"
+ break
+ fi
+ done
+
+ if [[ -z "$DETECTED_REPO" ]]; then
+ CMD_DIR=$(echo "$COMMAND" | sed -n 's/.*cd[[:space:]]\+\([^[:space:];&]*\).*/\1/p' | head -1 2>/dev/null || echo "")
+ DIR_BASE=$(basename "${CMD_DIR:-$(pwd)}" 2>/dev/null || echo "")
+
+ for repo in $REPOS; do
+ if [[ "$DIR_BASE" == "$repo" ]]; then
+ DETECTED_REPO="$repo"
+ break
+ fi
+ done
+ fi
+
+ # Not a tracked repo — allow
+ if [[ -z "$DETECTED_REPO" ]]; then
+ echo '{}'
+ exit 0
+ fi
+
+ # === Check requirements ===
+ MISSING=""
+
+ if [[ "$REQUIRE_BOARD" == "True" ]]; then
+ if ! grep -q "^board:${DETECTED_REPO}$" "$STATE_FILE" 2>/dev/null; then
+ BOARD_NUM=$(echo "$CONFIG" | python3 -c "import json,sys; print(json.load(sys.stdin)['repos']['$DETECTED_REPO']['board_number'])" 2>/dev/null || echo "?")
+ BOARD_OWNER=$(echo "$CONFIG" | python3 -c "import json,sys; print(json.load(sys.stdin)['repos']['$DETECTED_REPO']['board_owner'])" 2>/dev/null || echo "?")
+ MISSING="${MISSING}\\n**Board update required:**\\n"
+ MISSING="${MISSING}1. Check board: \`gh project item-list $BOARD_NUM --owner $BOARD_OWNER\`\\n"
+ MISSING="${MISSING}2. Update issue: \`gh issue create/comment -R $BOARD_OWNER/$DETECTED_REPO\`\\n"
+ MISSING="${MISSING}3. Move items to reflect current status\\n"
+ fi
+ fi
+
+ if [[ "$REQUIRE_OPS" == "True" ]]; then
+ if ! grep -q "^ops_logged$" "$STATE_FILE" 2>/dev/null; then
+ MISSING="${MISSING}\\n**Ops log required:**\\n"
+ if [[ -n "$OPS_MESSAGE" ]]; then
+ MISSING="${MISSING}${OPS_MESSAGE}\\n"
+ else
+ MISSING="${MISSING}Call \`ops_append()\` to log what you're pushing and why.\\n"
+ fi
+ MISSING="${MISSING}\`mcp__cortex__ops_append({ content: \"Pushing: [what changed]\", project: \"$DETECTED_REPO\" })\`\\n"
+ fi
+ fi
+
+ # All requirements met — allow
+ if [[ -z "$MISSING" ]]; then
+ echo '{}'
+ exit 0
+ fi
+
+ REPO_DESC=$(echo "$CONFIG" | python3 -c "import json,sys; print(json.load(sys.stdin)['repos']['$DETECTED_REPO'].get('description',''))" 2>/dev/null || echo "")
+ MESSAGE="**[PUSH GATE]** Pushing to \`$DETECTED_REPO\` ($REPO_DESC) — requirements not met:\\n${MISSING}\\nComplete the above, then retry your push.\\n\\n*Config: \`.claude/state/project-boards.json\` — set \`strength\` to \`off\` to disable.*"
+
+ if [[ "$STRENGTH" == "block" ]]; then
+ HOOK_EVENT=$(echo "$INPUT" | python3 -c "import json,sys; print(json.load(sys.stdin).get('hook_event_name','PreToolUse'))" 2>/dev/null || echo "PreToolUse")
+ cat <<EOF
+ {
+ "hookSpecificOutput": {
+ "hookEventName": "$HOOK_EVENT",
+ "permissionDecision": "deny"
+ },
+ "systemMessage": "$MESSAGE"
+ }
+ EOF
+ else
+ cat <<EOF
+ {
+ "systemMessage": "$MESSAGE"
+ }
+ EOF
+ fi
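The gate script reads `.claude/state/project-boards.json` but the diff does not ship a sample. Inferred from the keys the script accesses (`enabled`, `strength`, `on_push.*`, and per-repo `board_number` / `board_owner` / `description`), a plausible config might look like this — repo and owner names are hypothetical, not part of the package:

```json
{
  "enabled": true,
  "strength": "block",
  "on_push": {
    "require_board_update": true,
    "require_ops_log": true,
    "require_ops_log_message": "Log what you are pushing and why."
  },
  "repos": {
    "api-v2": {
      "board_number": 3,
      "board_owner": "example-org",
      "description": "Public API rewrite"
    }
  }
}
```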
package/hooks/session-lifecycle.sh ADDED
@@ -0,0 +1,21 @@
+ #!/usr/bin/env bash
+ # ============================================================================
+ # session-lifecycle.sh — Claude Code Hook
+ # ============================================================================
+ # Event: SessionStart
+ # Purpose: Resets session-scoped state files at the start of each session.
+ # Clears cortex telemetry logs and push-gate state so hooks start
+ # fresh without stale data from previous sessions.
+ # How: Truncates .claude/state/cortex-calls.log and push-gate-state.txt.
+ # Disable: Delete this file from .claude/hooks/ — no other config needed.
+ # Part of: fozikio — supports cortex-telemetry.sh and project-board-gate.sh.
+ # ============================================================================
+
+ STATE_DIR="${CLAUDE_PROJECT_DIR:-.}/.claude/state"
+ mkdir -p "$STATE_DIR"
+
+ # Clear session-scoped state from previous session
+ > "$STATE_DIR/cortex-calls.log" 2>/dev/null
+ > "$STATE_DIR/push-gate-state.txt" 2>/dev/null
+
+ echo '{}'
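The two hooks above share a simple line-oriented state protocol: the gate appends `board:<repo>` and `ops_logged` markers, checks them with anchored `grep`, and the SessionStart hook truncates the file. A minimal sketch of that protocol in isolation — helper names are hypothetical and a temp directory stands in for `.claude/state/`:

```shell
# Sketch of the push-gate state protocol (not the shipped hook code).
STATE_FILE="$(mktemp -d)/push-gate-state.txt"

record_board_update() { echo "board:$1" >> "$STATE_FILE"; }
record_ops_log()      { echo "ops_logged" >> "$STATE_FILE"; }

# Succeeds only if both markers were recorded this session.
push_allowed() {
  grep -q "^board:$1\$" "$STATE_FILE" 2>/dev/null &&
  grep -q "^ops_logged\$" "$STATE_FILE" 2>/dev/null
}

push_allowed my-repo && echo "allowed" || echo "blocked"   # blocked: nothing recorded
record_board_update my-repo
record_ops_log
push_allowed my-repo && echo "allowed" || echo "blocked"   # allowed: both markers present

# SessionStart reset: truncate, as session-lifecycle.sh does.
> "$STATE_FILE"
push_allowed my-repo && echo "allowed" || echo "blocked"   # blocked again
```

Because the state file is session-scoped, a push early in a fresh session always re-triggers the gate, which is the intended behavior.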
package/package.json CHANGED
@@ -1,107 +1,119 @@
- {
- "name": "cortex-engine",
- "version": "0.4.1",
- "description": "Portable cognitive engine for AI agents — storage, embeddings, memory, FSRS, and MCP server",
- "type": "module",
- "main": "dist/index.js",
- "types": "dist/index.d.ts",
- "bin": {
- "cortex-engine": "dist/bin/serve.js",
- "fozikio": "dist/bin/cli.js"
- },
- "scripts": {
- "build": "tsc",
- "dev": "tsc --watch",
- "serve": "node dist/bin/serve.js",
- "test": "node --experimental-vm-modules node_modules/.bin/vitest run",
- "test:watch": "node --experimental-vm-modules node_modules/.bin/vitest"
- },
- "engines": {
- "node": ">=20"
- },
- "dependencies": {
- "@huggingface/transformers": "^3.8.1",
- "@modelcontextprotocol/sdk": "^1.6.1",
- "better-sqlite3": "^11.7.0",
- "yaml": "^2.7.0"
- },
- "peerDependencies": {
- "@anthropic-ai/sdk": "^0.39.0",
- "@google-cloud/aiplatform": "^3.12.0",
- "@google-cloud/firestore": "^7.11.0",
- "@google-cloud/vertexai": "^1.9.0",
- "firebase-admin": "^13.0.0",
- "openai": "^4.77.0"
- },
- "peerDependenciesMeta": {
- "@anthropic-ai/sdk": {
- "optional": true
- },
- "@google-cloud/aiplatform": {
- "optional": true
- },
- "@google-cloud/firestore": {
- "optional": true
- },
- "@google-cloud/vertexai": {
- "optional": true
- },
- "firebase-admin": {
- "optional": true
- },
- "openai": {
- "optional": true
- }
- },
- "devDependencies": {
- "@anthropic-ai/sdk": "^0.39.0",
- "@google-cloud/aiplatform": "^3.12.0",
- "@google-cloud/firestore": "^7.11.0",
- "@google-cloud/vertexai": "^1.9.0",
- "@types/better-sqlite3": "^7.6.12",
- "@types/node": "^22.10.0",
- "firebase-admin": "^13.0.0",
- "openai": "^4.77.0",
- "typescript": "^5.7.0",
- "vitest": "^3.0.0"
- },
- "keywords": [
- "cortex",
- "cognitive",
- "memory",
- "fsrs",
- "mcp",
- "ai-agent",
- "embeddings"
- ],
- "license": "MIT",
- "author": "Fozikio <hello@fozikio.com>",
- "repository": {
- "type": "git",
- "url": "https://github.com/Fozikio/cortex-engine.git"
- },
- "homepage": "https://github.com/Fozikio/cortex-engine#readme",
- "files": [
- "dist",
- "LICENSE",
- "README.md"
- ],
- "exports": {
- ".": {
- "import": "./dist/index.js",
- "types": "./dist/index.d.ts"
- },
- "./stores/firestore": {
- "import": "./dist/stores/firestore.js",
- "types": "./dist/stores/firestore.d.ts"
- },
- "./stores/sqlite": {
- "import": "./dist/stores/sqlite.js",
- "types": "./dist/stores/sqlite.d.ts"
- },
- "./mcp": {
- "import": "./dist/mcp/server.js",
- "types": "./dist/mcp/server.d.ts"
- }
- }
- }
+ {
+ "name": "cortex-engine",
+ "version": "0.5.0",
+ "description": "Portable cognitive engine for AI agents — storage, embeddings, memory, FSRS, and MCP server",
+ "type": "module",
+ "main": "dist/index.js",
+ "types": "dist/index.d.ts",
+ "bin": {
+ "cortex-engine": "dist/bin/serve.js",
+ "fozikio": "dist/bin/cli.js"
+ },
+ "scripts": {
+ "build": "tsc",
+ "dev": "tsc --watch",
+ "serve": "node dist/bin/serve.js",
+ "test": "node --experimental-vm-modules node_modules/.bin/vitest run",
+ "test:watch": "node --experimental-vm-modules node_modules/.bin/vitest"
+ },
+ "engines": {
+ "node": ">=20"
+ },
+ "dependencies": {
+ "@huggingface/transformers": "^3.8.1",
+ "@modelcontextprotocol/sdk": "^1.6.1",
+ "better-sqlite3": "^11.7.0",
+ "yaml": "^2.7.0"
+ },
+ "peerDependencies": {
+ "@anthropic-ai/sdk": "^0.39.0",
+ "@google-cloud/aiplatform": "^3.12.0",
+ "@google-cloud/firestore": "^7.11.0",
+ "@google-cloud/vertexai": "^1.9.0",
+ "firebase-admin": "^13.0.0",
+ "openai": "^4.77.0"
+ },
+ "peerDependenciesMeta": {
+ "@anthropic-ai/sdk": {
+ "optional": true
+ },
+ "@google-cloud/aiplatform": {
+ "optional": true
+ },
+ "@google-cloud/firestore": {
+ "optional": true
+ },
+ "@google-cloud/vertexai": {
+ "optional": true
+ },
+ "firebase-admin": {
+ "optional": true
+ },
+ "openai": {
+ "optional": true
+ }
+ },
+ "devDependencies": {
+ "@anthropic-ai/sdk": "^0.39.0",
+ "@google-cloud/aiplatform": "^3.12.0",
+ "@google-cloud/firestore": "^7.11.0",
+ "@google-cloud/vertexai": "^1.9.0",
+ "@types/better-sqlite3": "^7.6.12",
+ "@types/node": "^22.10.0",
+ "firebase-admin": "^13.0.0",
+ "openai": "^4.77.0",
+ "typescript": "^5.7.0",
+ "vitest": "^3.0.0"
+ },
+ "keywords": [
+ "cortex",
+ "cognitive",
+ "memory",
+ "fsrs",
+ "mcp",
+ "ai-agent",
+ "embeddings",
+ "fozikio",
+ "persistent-memory",
+ "knowledge-graph",
+ "spaced-repetition"
+ ],
+ "license": "MIT",
+ "author": "Fozikio <hello@fozikio.com>",
+ "repository": {
+ "type": "git",
+ "url": "https://github.com/Fozikio/cortex-engine.git"
+ },
+ "homepage": "https://github.com/Fozikio/cortex-engine/wiki",
+ "funding": {
+ "type": "github",
+ "url": "https://github.com/sponsors/idapixl"
+ },
+ "files": [
+ "dist",
+ "hooks",
+ "skills",
+ "reflex-rules",
+ "fozikio.json",
+ "LICENSE",
+ "README.md"
+ ],
+ "exports": {
+ ".": {
+ "import": "./dist/index.js",
+ "types": "./dist/index.d.ts"
+ },
+ "./stores/firestore": {
+ "import": "./dist/stores/firestore.js",
+ "types": "./dist/stores/firestore.d.ts"
+ },
+ "./stores/sqlite": {
+ "import": "./dist/stores/sqlite.js",
+ "types": "./dist/stores/sqlite.d.ts"
+ },
+ "./mcp": {
+ "import": "./dist/mcp/server.js",
+ "types": "./dist/mcp/server.d.ts"
+ }
+ }
+ }
package/reflex-rules/cognitive-grounding.yaml ADDED
@@ -0,0 +1,17 @@
+ name: cognitive-grounding
+ version: 1
+ tier: recommended
+ category: cognitive
+ description: Nudge agent to query cortex before substantive cognitive work
+ events: [prompt_submit]
+ conditions:
+ - field: user_prompt
+ op: regex
+ pattern: '(?i)(evaluat|review|design|assess|analyz|creat|build|architect|plan|propos|critique|audit|diagnos)'
+ action: warn
+ severity: medium
+ message: |
+ This prompt involves substantive cognitive work. Before responding, call query() on the topic to ground your work in accumulated experience.
+ override:
+ allow_disable: true
+ allow_downgrade: true
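The rule's trigger is a single stem-based regex over the user prompt. A quick hypothetical harness shows which prompts it flags — the YAML's `(?i)` flag corresponds to `grep -i` here, and the example prompts are illustrative:

```shell
# Stems copied from the cognitive-grounding rule above.
PATTERN='(evaluat|review|design|assess|analyz|creat|build|architect|plan|propos|critique|audit|diagnos)'

# Prints "warn" if the prompt would trigger the rule, "pass" otherwise.
triggers() { echo "$1" | grep -qiE "$PATTERN" && echo warn || echo pass; }

triggers "Please review this PR"         # warn
triggers "Create a migration plan"       # warn
triggers "What does this error mean?"    # pass
```

Matching on stems (`analyz`, `creat`) rather than whole words catches inflections like "analyzing" and "created" at the cost of occasional false positives.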
package/reflex-rules/note-about-doing.yaml ADDED
@@ -0,0 +1,17 @@
+ name: note-about-doing
+ version: 1
+ tier: custom
+ category: cognitive
+ description: Suggest capturing new threads of thought with thread_create()
+ events: [prompt_submit]
+ conditions:
+ - field: user_prompt
+ op: regex
+ pattern: '(?i)(I should|I need to|I want to|let me|I''ll)'
+ action: log
+ severity: info
+ message: |
+ If this is a new thread of thought, consider capturing it with thread_create() rather than just noting it and moving on.
+ override:
+ allow_disable: true
+ allow_downgrade: true
package/reflex-rules/observe-first.yaml ADDED
@@ -0,0 +1,17 @@
+ name: observe-first
+ version: 1
+ tier: recommended
+ category: cognitive
+ description: Warn when writing to memory directories without querying cortex first
+ events: [file_write, file_edit]
+ conditions:
+ - field: file_path
+ op: regex
+ pattern: '(Mind|Journal|memory)'
+ action: warn
+ severity: medium
+ message: |
+ Writing to a memory directory. Have you called observe() or query() first? Memory writes should be grounded in cortex context.
+ override:
+ allow_disable: true
+ allow_downgrade: true
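Unlike the prompt rules, this pattern has no `(?i)` flag, so it is case-sensitive and matches the substring anywhere in the path. A hypothetical check with illustrative paths:

```shell
# Pattern copied from the observe-first rule above (case-sensitive).
P='(Mind|Journal|memory)'

# Prints "warn" if the rule would flag the path, "pass" otherwise.
flags() { echo "$1" | grep -qE "$P" && echo warn || echo pass; }

flags "Mind/notes/2024-review.md"   # warn: matches "Mind"
flags "src/memory/cache.ts"         # warn: matches "memory"
flags "src/index.ts"                # pass
```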
package/skills/cortex-query/SKILL.md ADDED
@@ -0,0 +1,86 @@
+ ---
+ name: cortex-query
+ description: Persistent memory for AI agents — search, record, and build knowledge across sessions using cortex-engine MCP tools
+ ---
+
+ # Cortex Memory — Query & Record
+
+ Your agent has persistent memory via cortex-engine. Knowledge survives across sessions — you can recall what you learned last week, track evolving beliefs, and build a knowledge graph over time.
+
+ ## Core Loop
+
+ **Read before you write.** Always check what you already know before adding more.
+
+ ### Search for knowledge
+
+ ```
+ query("authentication architecture decisions")
+ ```
+
+ Be specific. `query("JWT token expiry policy")` beats `query("auth")`. Results include relevance scores and connected concepts.
+
+ After finding a relevant memory, explore around it:
+ ```
+ neighbors(memory_id)
+ ```
+
+ ### Record what you learn
+
+ **Facts** — things you confirmed or noticed to be true:
+ ```
+ observe("The API rate limits at 1000 req/min per API key, not per user")
+ ```
+
+ **Questions** — things you want to explore but haven't resolved:
+ ```
+ wonder("Why does the sync daemon stall after 300k seconds?")
+ ```
+
+ **Hypotheses** — ideas that might be true but aren't confirmed:
+ ```
+ speculate("Switching to connection pooling might fix the timeout issues")
+ ```
+
+ These are stored separately so questions don't pollute your knowledge base.
+
+ ### Update beliefs
+
+ When your understanding changes:
+ ```
+ believe(concept_id, "Revised understanding based on new evidence", "reason for change")
+ ```
+
+ ### Track work across sessions
+
+ ```
+ ops_append("Finished auth refactor, tests passing", project="api-v2")
+ ```
+
+ Next session, pick up where you left off:
+ ```
+ ops_query(project="api-v2")
+ ```
+
+ ## Session Pattern
+
+ 1. **Start:** `query()` the topic you're working on
+ 2. **During:** `observe()` facts, `wonder()` questions as they come up
+ 3. **End:** `ops_append()` what you did and what's unfinished
+ 4. **Periodically:** `dream()` to consolidate observations into long-term memories
+
+ ## Available Tools
+
+ **Write:** observe, wonder, speculate, believe, reflect, digest
+ **Read:** query, recall, predict, validate, neighbors, wander
+ **Ops:** ops_append, ops_query, ops_update
+ **System:** stats, dream
+
+ ## Setup
+
+ ```bash
+ npm install cortex-engine
+ npx fozikio init my-agent
+ npx cortex-engine # starts MCP server
+ ```
+
+ Defaults to local SQLite + Ollama. No cloud accounts needed.
package/skills/cortex-review/SKILL.md ADDED
@@ -0,0 +1,67 @@
+ ---
+ name: cortex-review
+ description: Code and design review grounded in persistent memory — compares new work against accumulated knowledge and patterns
+ ---
+
+ # Cortex Review
+
+ Review code, designs, or proposals by comparing them against your accumulated knowledge in cortex.
+
+ ## Workflow
+
+ ### 1. Ground in memory
+
+ Before reading the work under review, query cortex for the topic:
+
+ ```
+ query("the domain or system being reviewed")
+ ```
+
+ Read the results. They contain past decisions, patterns, and lessons learned that inform your review.
+
+ ### 2. Review against context
+
+ As you read the work, compare it against what cortex returned:
+
+ - **Does it align** with established patterns and past decisions?
+ - **Does it diverge** from known approaches — intentionally or accidentally?
+ - **Does it introduce** novel patterns worth capturing?
+
+ ### 3. Record what you find
+
+ **New patterns:** If the work introduces something worth remembering:
+ ```
+ observe("The new caching layer uses write-through with 5min TTL — effective for this read-heavy workload")
+ ```
+
+ **Open questions:** If something isn't clear:
+ ```
+ wonder("Why did they bypass the rate limiter for internal services?")
+ ```
+
+ **Belief updates:** If the work changes your understanding:
+ ```
+ believe(concept_id, "Updated understanding", "Evidence from this review")
+ ```
+
+ ### 4. Output format
+
+ ```markdown
+ ## Review — Grounded in Memory
+
+ ### Aligned with known patterns
+ - [what matches cortex context]
+
+ ### Divergences
+ - [what differs, with reasoning about whether intentional]
+
+ ### New patterns to capture
+ - [novel approaches worth an observe() call]
+
+ ### Open questions
+ - [things to wonder() about]
+ ```
+
+ ## Why This Matters
+
+ Without memory grounding, every review starts from zero. You'll miss that "we tried this approach 3 weeks ago and it caused latency spikes" or "this pattern was explicitly chosen over the alternative for compliance reasons." Cortex holds that context.