@pixelguild/loom 0.1.1 → 0.1.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +37 -8
  2. package/package.json +3 -2
package/README.md CHANGED
@@ -1,7 +1,3 @@
- <p align="center">
- <img src="https://raw.githubusercontent.com/pixelguild/loom/main/assets/loom_app_icon.png" width="96" height="96" alt="Loom">
- </p>
-
  <h1 align="center">@pixelguild/loom</h1>
 
  <p align="center">
@@ -22,7 +18,8 @@ Loom is an [MCP server](https://modelcontextprotocol.io) that gives Claude Code
  ## Install
 
  ```bash
- npx @pixelguild/loom init
+ npm install -g @pixelguild/loom
+ loom init
  ```
 
  This scaffolds `docs/loom/`, wires the MCP server into `.claude/mcp.json`, and adds instructions to your `CLAUDE.md`. That's it.
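As context for the install change above (a global install exposing a `loom` binary instead of `npx`): the entry that `loom init` wires into `.claude/mcp.json` would plausibly look like the following sketch. The exact shape is an assumption based on the standard MCP server config format, not taken from this diff:

```json
{
  "mcpServers": {
    "loom": {
      "command": "loom",
      "args": []
    }
  }
}
```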
@@ -30,7 +27,7 @@ This scaffolds `docs/loom/`, wires the MCP server into `.claude/mcp.json`, and a
  Or add manually:
 
  ```bash
- claude mcp add loom -- npx @pixelguild/loom
+ claude mcp add loom -- loom
  ```
 
  ## What it does
@@ -81,8 +78,8 @@ The latest archive's carry-forward summary + current `context.md` always gives C
  ## CLI
 
  ```bash
- npx @pixelguild/loom init # Scaffold project, wire MCP, inject CLAUDE.md
- npx @pixelguild/loom status # Token count, archive status, context health
+ loom init # Scaffold project, wire MCP, inject CLAUDE.md
+ loom status # Token count, archive status, context health
  ```
 
  ## Configuration
@@ -116,6 +113,38 @@ Layered config — per-project overrides global defaults.
  | `LOOM_PROJECT_ROOT` | Project directory (falls back to cwd) |
  | `LOOM_OPENAI_API_KEY` | OpenAI API key for peer consultation |
 
+ ## Peer Consultation Setup
+
+ The `loom_consult_peer` tool lets Claude ask a second LLM for architecture advice, debugging help, or alternative perspectives.
+
+ **OpenAI** — Set `LOOM_OPENAI_API_KEY` env var or pass it via MCP config:
+
+ ```bash
+ claude mcp add loom -e LOOM_PROJECT_ROOT=$(pwd) -e LOOM_OPENAI_API_KEY=sk-... -- loom
+ ```
+
+ ```json
+ { "default_provider": "openai", "providers": { "openai": { "model": "gpt-4o" } } }
+ ```
+
+ **Vertex AI (Gemini)** — Requires `gcloud auth application-default login`:
+
+ ```json
+ { "default_provider": "vertex", "providers": { "vertex": { "model": "gemini-2.0-flash", "project": "your-gcp-project", "location": "us-central1" } } }
+ ```
+
+ **Ollama (local)** — No API key needed. Install [Ollama](https://ollama.com) and pull a model:
+
+ ```json
+ { "default_provider": "ollama", "providers": { "ollama": { "model": "llama3.1", "host": "http://localhost:11434" } } }
+ ```
+
+ Restrict providers per project in `docs/loom/loom.config.json`:
+
+ ```json
+ { "peer_consultation": { "enabled": true, "allowed_providers": ["openai", "ollama"] } }
+ ```
+
  ## Storage
 
  ```
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@pixelguild/loom",
- "version": "0.1.1",
+ "version": "0.1.3",
  "description": "Session intelligence layer for Claude Code — persistent memory, intelligent archiving, cross-project patterns, and peer LLM consultation.",
  "type": "module",
  "main": "dist/index.js",
@@ -9,7 +9,8 @@
  },
  "files": [
  "dist",
- "!dist/**/__tests__"
+ "!dist/**/__tests__",
+ "README.md"
  ],
  "keywords": [
  "claude",
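The `files` change above adds `README.md` to the published tarball while keeping compiled test directories excluded via the `!dist/**/__tests__` negation. One way to sanity-check a `files` edit like this before publishing is npm's dry-run pack, which prints the tarball contents without writing anything to the registry:

```shell
# From the package root: list exactly what would be published.
# Expect README.md and dist/ entries, and no dist/**/__tests__ paths.
npm pack --dry-run
```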