droid-mode 0.0.7 → 0.0.8

This diff shows the changes between two publicly released versions of this package, as published to one of the supported registries. It is provided for informational purposes only and reflects the package contents as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +25 -3
  2. package/package.json +1 -1
package/README.md CHANGED

```diff
@@ -10,9 +10,9 @@ Access MCP tools **without** loading them into your context window.
 
 When you configure MCP servers in Factory.ai Droid, all their tools get injected into the model's context window. This causes:
 
-- **Token bloat** - Large tool inventories eat your context
-- **Cognitive overload** - Too many tools confuse the model
-- **No selectivity** - Can't use just the tools you need
+- **Token bloat** A single MCP server can consume 2,400+ tokens in schema definitions alone
+- **Cognitive overload** Too many tools confuse the model
+- **Server limits** Each additional MCP server compounds the context cost
 
 ## The Solution
 
@@ -22,6 +22,8 @@ Droid Mode lets you:
 2. Access them **on-demand** through progressive discovery
 3. Run procedural workflows that call MCP tools **outside** the LLM loop
 
+Because tools are hydrated only when needed, you can configure **any number of MCP servers** without impacting context. A project with 10 servers and 200+ tools costs zero tokens until you actually use one.
+
 ## Benchmarks
 
 Independent benchmarks comparing Droid Mode against Native MCP (direct HTTP).
@@ -59,6 +61,26 @@ Independent benchmarks comparing Droid Mode against Native MCP (direct HTTP).
 3. **Consistent** — ±262ms variance vs ±236ms for Native MCP
 4. **No-daemon is prohibitive** — 74% slower, only for one-off calls
 
+### Token Efficiency
+
+Droid Mode eliminates schema overhead by keeping tool definitions out of the LLM context.
+
+| Metric | Native MCP | Droid Mode |
+|--------|------------|------------|
+| Schema overhead (3 tools) | 329 tokens | **0 tokens** |
+| Total tokens (3-tool session) | 1,072 | **897** |
+| **Savings** | — | **16%** |
+
+Schema cost scales with tool count:
+
+| Tools Enabled | Native MCP Cost | Droid Mode Cost |
+|---------------|-----------------|-----------------|
+| 3 tools | ~329 tokens | 0 |
+| 10 tools | ~1,100 tokens | 0 |
+| 22 tools (full server) | ~2,400 tokens | 0 |
+
+This matters most when context is constrained or when running many sessions at scale.
+
 *Benchmarks: macOS Darwin 25.2.0, Context Repo MCP server, January 2026*
 
 ## Daemon Mode
```
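As a sanity check on the Token Efficiency table added above, the 16% savings figure follows directly from the two session totals it reports (this is just arithmetic on the diff's own numbers, not part of the package):

```python
# Verify the claimed savings from the README's Token Efficiency table.
native_total = 1072  # total tokens, 3-tool session, Native MCP
droid_total = 897    # total tokens, 3-tool session, Droid Mode

savings = (native_total - droid_total) / native_total
print(f"{savings:.0%}")  # → 16%
```

Note that the savings are smaller than the 329-token schema overhead alone, since Droid Mode spends some tokens on its own discovery calls.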
package/package.json CHANGED

```diff
@@ -1,6 +1,6 @@
 {
   "name": "droid-mode",
-  "version": "0.0.7",
+  "version": "0.0.8",
   "description": "Progressive Code-Mode MCP integration for Factory.ai Droid - access MCP tools without context bloat",
   "type": "module",
   "main": "dist/cli.js",
```