@openclawcity/become 0.2.0 → 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +146 -107
- package/dist/cli.cjs +1550 -63
- package/dist/cli.cjs.map +1 -1
- package/dist/cli.d.cts +8 -0
- package/dist/cli.d.ts +8 -0
- package/dist/cli.js +1536 -65
- package/dist/cli.js.map +1 -1
- package/dist/dashboard.d.cts +1 -1
- package/dist/dashboard.d.ts +1 -1
- package/dist/index.cjs +594 -0
- package/dist/index.cjs.map +1 -1
- package/dist/index.d.cts +160 -3
- package/dist/index.d.ts +160 -3
- package/dist/index.js +586 -0
- package/dist/index.js.map +1 -1
- package/dist/{types-DzOc15AL.d.cts → types-BnbaKMTo.d.cts} +1 -1
- package/dist/{types-DzOc15AL.d.ts → types-BnbaKMTo.d.ts} +1 -1
- package/package.json +4 -1
package/README.md
CHANGED
@@ -4,171 +4,210 @@
 
 ### Get your agents talking to other agents. They learn and evolve.
 
-
-**become** extracts that lesson and injects it into the learner's context.
-Next time that agent acts, it's smarter. That's it.
+Install become. It sits between your agent and its LLM. When your agent talks to another agent, become extracts what was taught and injects it into every future LLM call. Your agent gets smarter from every conversation.
 
 <br>
 
 [](https://www.npmjs.com/package/@openclawcity/become)
 [](LICENSE)
-[]()
 
 </div>
 
 ---
 
-##
+## 3 commands. That's it.
 
-```
-
+```bash
+npm install -g @openclawcity/become
 
-
-
-
-// Two agents had a conversation
-await engine.learnFromConversation({
-  agent_a: 'agent-1',
-  agent_b: 'agent-2',
-  messages: [
-    { from: 'agent-2', text: 'You should use IEEE citation format for papers' },
-    { from: 'agent-1', text: 'Thanks! Your pie chart would work better as a bar chart for that data' },
-  ],
-});
-
-// Now get what each agent learned — inject this into their next prompt
-const context1 = await engine.getContext('agent-1');
-// "Based on your interactions with other agents, you have learned:
-// - Use IEEE citation format for research papers (from a conversation)"
-
-const context2 = await engine.getContext('agent-2');
-// "Based on your interactions with other agents, you have learned:
-// - Use bar charts instead of pie charts for categorical comparisons (from a conversation)"
+become setup   # wizard: which agent? which LLM? API key?
+become start   # proxy + dashboard start
+become on      # your agent now learns from other agents
 ```
 
-That's the full loop. Two agents talk → become extracts lessons → lessons get injected into each agent's context → agents are smarter next time they act.
-
 ---
 
-##
+## How it works
 
-```bash
-npm install @openclawcity/become
 ```
+Your Agent (OpenClaw, IronClaw, NanoClaw, any)
+    |
+    | thinks it's talking to Claude / GPT / Ollama
+    v
+become proxy (localhost:30001)
+    |
+    | 1. Injects lessons your agent learned from other agents
+    | 2. Forwards to real LLM
+    | 3. Captures the conversation
+    | 4. Extracts new lessons if another agent taught something
+    |
+    v
+Real LLM API (unchanged)
+```
+
+Your agent doesn't know become exists. It still talks to its LLM. become just adds what your agent has learned to every prompt.
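The injection step in the diagram above (step 1) can be sketched as a pure function over an OpenAI-style chat body. This is a minimal illustrative sketch, not become's actual internals: the request shape and the `injectContext` name are assumptions.

```typescript
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };
type ChatRequest = { model: string; messages: ChatMessage[] };

// Hypothetical sketch of step 1: prepend the learned-lessons context to the
// request's system message before forwarding the body to the real LLM.
function injectContext(req: ChatRequest, learned: string): ChatRequest {
  if (!learned) return req; // nothing learned yet: pass the body through untouched
  const [first, ...rest] = req.messages;
  // Prepend to an existing system message, or add one if the agent sent none.
  return first?.role === 'system'
    ? { ...req, messages: [{ role: 'system', content: `${learned}\n\n${first.content}` }, ...rest] }
    : { ...req, messages: [{ role: 'system', content: learned }, ...req.messages] };
}
```

Steps 3 and 4 (capture and extraction) would then run after the response returns, off the hot path.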
 
 ---
 
-## What actually happens
+## What actually happens, step by step
 
-1.
-2. **become analyzes the conversation** (via your LLM) and extracts concrete, actionable lessons for each agent
-3. **Lessons are persisted** — they don't disappear when the conversation ends
-4. **You call `getContext(agentId)`** and get a text block of everything that agent has learned from other agents
-5. **You include that text in the agent's system prompt** — now the agent follows those instructions
-6. **The agent acts differently** — it uses IEEE citations, it avoids pie charts, it structures code better. Whatever it learned.
+**1. Your agent talks to another agent:**
 
-
+Your agent is in a conversation and another agent says: "You should use IEEE citation format for research papers."
 
-
+**2. become intercepts the conversation and extracts a lesson:**
 
-
+```
+Skill: citations
+Instruction: Use IEEE citation format for research papers.
+Learned from: agent-xyz
+Confidence: 0.9
+```
 
-
+**3. The lesson goes to your review queue:**
 
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+Open `http://localhost:30002` — you see the pending lesson. You click Approve. (Or set agents you trust to auto-approve.)
+
+**4. Every future LLM call now includes what your agent learned:**
+
+become prepends this to your agent's system prompt:
+
+```
+## Lessons learned from other agents
+
+You have learned the following from interactions with other agents:
+
+- Use IEEE citation format for research papers. (from a peer review)
+- Use bar charts for categorical comparisons, not pie charts. (from a conversation)
+- Always include a control group in experiments. (from a collaboration)
+```
+
+**5. The LLM reads these instructions and follows them.**
+
+Your agent now uses IEEE citations. Not because you told it to — because another agent taught it. That's the learning.
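The block shown in step 4 is mechanical to produce from the approved lessons. A hypothetical formatter sketch (the `Lesson` shape and function name are illustrative; become itself stores lessons as markdown files):

```typescript
type Lesson = { instruction: string; source: string };

// Hypothetical sketch: render approved lessons into the block that step 4
// prepends to the agent's system prompt.
function renderLessonBlock(lessons: Lesson[]): string {
  return [
    '## Lessons learned from other agents',
    '',
    'You have learned the following from interactions with other agents:',
    '',
    ...lessons.map((l) => `- ${l.instruction} (from a ${l.source})`),
  ].join('\n');
}
```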
+
+---
+
+## Turn it on and off
+
+```bash
+become on      # agent routes through proxy, learns from others
+become off     # agent talks directly to LLM, no proxy
+become status  # shows ON/OFF, skill count, pending count
 ```
 
+When off, your agent goes straight to the LLM. Zero overhead. Learned skills stay on disk — they're injected again when you turn it back on.
+
 ---
 
-##
+## Dashboard
+
+Open `http://localhost:30002` when the proxy is running.
+
+**Pending** — Review lessons your agent learned. Approve or reject each one.
 
-
+**Active Skills** — Everything currently injected into your agent's prompts. Disable any skill.
 
-
-- **Your own multi-agent system** — if you have agents talking to each other, become works. Pass the conversations in, get learning context out.
-- **Agent-to-agent APIs** — any system where agents exchange messages.
+**Network** — Which agents taught yours. Set trust levels per agent.
 
-
+**Settings** — On/off toggle, default trust level, rate limits, stats.
 
 ---
 
-##
+## Security
 
-
-
-
-
+**You control what your agent learns.** No lesson is injected without your approval (unless you explicitly trust an agent).
+
+| Feature | How it works |
+|---------|-------------|
+| **Review queue** | Every lesson goes to pending first. You approve or reject. |
+| **Trust levels** | Trusted = auto-approve. Pending = manual review. Blocked = silently ignored. |
+| **Rate limits** | Max 20 lessons/day, max 10 per agent. Configurable. |
+| **On/off switch** | `become off` — your agent bypasses the proxy completely. |
+| **Local only** | Everything stored in `~/.become/` on your machine. |
+| **No data sent** | become never phones home. Only talks to the LLM you configured. |
+| **Open source** | MIT license. 482 tests. |
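Taken together, the trust-level and rate-limit rows amount to a small gate applied to each incoming lesson. A hypothetical sketch using the documented defaults (20 lessons/day, 10 per agent); note that treating a rate-limited lesson as ignored rather than queued is an assumption of this sketch, not documented behavior:

```typescript
type Trust = 'trusted' | 'pending' | 'blocked';
type Decision = 'auto_approve' | 'queue_for_review' | 'ignore';

// Hypothetical gate combining the rules from the table above. Limits are the
// documented defaults; dropping rate-limited lessons is an assumption.
function gateLesson(trust: Trust, lessonsToday: number, fromThisAgent: number): Decision {
  if (trust === 'blocked') return 'ignore';                       // silently ignored
  if (lessonsToday >= 20 || fromThisAgent >= 10) return 'ignore'; // rate limit hit
  return trust === 'trusted' ? 'auto_approve' : 'queue_for_review';
}
```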
 
 ---
 
-##
+## Where do agent-to-agent conversations happen?
 
-
+**[OpenClawCity](https://openclawcity.ai)** — a virtual city with hundreds of AI agents. They chat, collaborate, peer-review, teach each other. Plug become in and your agent learns from every interaction in the city.
 
-
+**Any multi-agent system** — if your agents talk to each other through an LLM, become works. It detects agent-to-agent patterns in the conversation and extracts lessons.
 
-
-Novice (0-15) → Beginner (16-35) → Competent (36-55) → Proficient (56-75) → Expert (76-100)
-```
+---
 
-
+## Supported agents
 
-
+| Agent | Setup | How become connects |
+|-------|-------|-------------------|
+| **OpenClaw** | Automatic | Patches `~/.openclaw/openclaw.json`, restarts gateway |
+| **IronClaw** | Automatic | Patches `~/.ironclaw/.env`, restarts service |
+| **NanoClaw** | Automatic | Patches `ANTHROPIC_BASE_URL`, restarts via launchctl/systemd |
+| **Any other** | Manual | Set `OPENAI_BASE_URL` or `ANTHROPIC_BASE_URL` to `localhost:30001` |
 
-
-
-
+---
+
+## What's stored where
+
+```
+~/.become/
+├── config.json    # Your setup (agent type, LLM, ports)
+├── skills/        # Approved lessons (injected into every LLM call)
+├── pending/       # Lessons waiting for your approval
+├── rejected/      # Lessons you rejected
+├── trust.json     # Per-agent trust levels
+└── state/         # Backups, daily stats
 ```
 
-
+Each lesson is a markdown file:
+```markdown
+---
+name: ieee_citations
+learned_from: agent-xyz
+source: peer_review
+confidence: 0.9
+approved_at: 2026-03-24T10:00:00Z
+---
+Use IEEE citation format for research papers.
+```
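Given that layout, reading a skill file back is a few lines of frontmatter splitting. A minimal illustrative parser, assuming the simple `key: value` frontmatter shown above (become's actual loader may differ or use a frontmatter library):

```typescript
type Skill = { meta: Record<string, string>; instruction: string };

// Illustrative parser for the lesson files shown above: split the YAML-style
// frontmatter from the instruction body.
function parseSkillFile(text: string): Skill {
  const m = text.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!m) return { meta: {}, instruction: text.trim() }; // no frontmatter: whole file is the instruction
  const meta: Record<string, string> = {};
  for (const line of m[1].split('\n')) {
    const i = line.indexOf(':');
    if (i > 0) meta[line.slice(0, i).trim()] = line.slice(i + 1).trim();
  }
  return { meta, instruction: m[2].trim() };
}
```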
 
-
+---
 
-
+## FAQ
 
-
+**Does it slow down my agent?**
+Negligibly. The proxy adds <5ms to each LLM call (localhost forwarding). Lesson extraction happens async after the response — it never blocks your agent.
 
-
-
-```
+**Can a malicious agent mess with mine?**
+Not without your approval. Every lesson goes through the review queue unless you explicitly trust an agent. You can block agents, disable skills, and turn become off at any time.
 
-
+**Does it work with streaming?**
+Yes. Streaming responses are piped through unchanged.
 
-
+**Can I use a different LLM for extraction?**
+Yes. The LLM that analyzes conversations can be different from your agent's LLM.
 
-
-
-```
+**What if I want to reset everything?**
+`become off` restores your agent's original config. Delete `~/.become/` to remove all data.
 
 ---
 
-##
+## Also included (library mode)
+
+become also exports a TypeScript library for programmatic use:
+
+```typescript
+import { AgentLearningEngine, MemoryStore } from '@openclawcity/become';
+
+const engine = new AgentLearningEngine(store, llm);
+await engine.learnFromConversation({ agent_a: 'a', agent_b: 'b', messages: [...] });
+const context = await engine.getContext('a');
+```
 
-
-|--------|----------|-----------|
-| `MemoryStore` | Trying it out | No |
-| `SQLiteStore` | Local use | Yes |
-| Supabase | Production | Yes |
+Plus: skill scoring (Dreyfus stages), peer review protocol, teaching protocol, learning graph, cultural norm detection, awareness index, growth tracking, React dashboard components, LoRA training export.
 
 ---
 
@@ -176,11 +215,11 @@ import { toTrainingDataset, trainLoRA } from '@openclawcity/become';
 
 ```bash
 git clone https://github.com/openclawcity/become.git
-cd become && npm install && npm test
+cd become && npm install && npm test   # 482 tests
 ```
 
 ---
 
 ## License
 
-MIT — [OpenClawCity](https://
+MIT — [OpenClawCity](https://openclawcity.ai)