@openclawcity/become 0.1.0 → 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,133 +1,225 @@
- # @openclaw/become
+ <div align="center">
 
- **Agents get smarter together.**
+ # become
 
- An open-source framework for multi-agent evolutionary learning. Track skills, measure growth, and enable agents to learn from each other.
+ ### Get your agents talking to other agents. They learn and evolve.
 
- ## Two ways agents learn
+ Install become. It sits between your agent and its LLM. When your agent talks to another agent, become extracts what was taught and injects it into every future LLM call. Your agent gets smarter from every conversation.
 
- **From their humans** — every conversation is a learning signal. Good responses reinforce skills. Failed responses generate corrective ones.
+ <br>
 
- **From each other** — peer review, collaboration, observation, teaching. When one agent masters a skill, others learn from its work. The whole group gets smarter.
+ [![npm version](https://img.shields.io/npm/v/@openclawcity/become?style=flat&labelColor=555&color=22d3ee)](https://www.npmjs.com/package/@openclawcity/become)
+ [![License: MIT](https://img.shields.io/badge/license-MIT-green?style=flat&labelColor=555)](LICENSE)
+ [![Tests](https://img.shields.io/badge/tests-482_passing-22d3ee?style=flat&labelColor=555)]()
 
- ## Quickstart
+ </div>
+
+ ---
+
+ ## 3 commands. That's it.
 
  ```bash
- npm install @openclaw/become
+ npm install -g @openclawcity/become
+
+ become setup # wizard: which agent? which LLM? API key?
+ become start # proxy + dashboard start
+ become on # your agent now learns from other agents
  ```
 
- ```typescript
- import { Become, MemoryStore } from '@openclaw/become';
- import { computeFullScore } from '@openclaw/become';
-
- // 1. Initialize
- const become = new Become({ store: new MemoryStore() });
-
- // 2. Register a skill
- await become.skills.upsert('agent-1', {
-   name: 'debugging',
-   category: 'coding',
- });
-
- // 3. Score it based on evidence
- const score = computeFullScore('debugging', {
-   artifact_count: 5,
-   total_reactions: 12,
-   recent_reaction_avg: 4,
-   older_reaction_avg: 2,
-   unique_types: 3,
-   collab_count: 1,
-   peer_reviews_given: 0,
-   peer_reviews_received: 1,
-   follower_count: 2,
-   teaching_events: 0,
- });
-
- console.log(score.score); // 28
- console.log(score.dreyfus_stage); // 'beginner'
- console.log(score.blooms_level); // 'analyze'
-
- // 4. Reflect on growth
- await become.reflector.reflect('agent-1', {
-   skill: 'debugging',
-   reflection: 'Print statements help me trace issues faster than step-through debugging.',
- });
-
- // 5. Check milestones
- const milestones = await become.milestones.check('agent-1', [score]);
- // [{ milestone_type: 'skill_discovered:debugging', ... }]
+ ---
+
+ ## How it works
+
+ ```
+ Your Agent (OpenClaw, IronClaw, NanoClaw, any)
+        |
+        |  thinks it's talking to Claude / GPT / Ollama
+        v
+ become proxy (localhost:30001)
+        |
+        |  1. Injects lessons your agent learned from other agents
+        |  2. Forwards to real LLM
+        |  3. Captures the conversation
+        |  4. Extracts new lessons if another agent taught something
+        |
+        v
+ Real LLM API (unchanged)
  ```
 
- ## Scoring Model
+ Your agent doesn't know become exists. It still talks to its LLM. become just adds what your agent has learned to every prompt.
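The injection step can be sketched in a few lines. This is an illustrative sketch only, not become's actual source: the `injectLessons` helper and the OpenAI-style `ChatMessage` shape are assumptions for the example.

```typescript
// Hypothetical sketch of step 1 in the diagram: prepend approved lessons
// to the request's system message before forwarding it to the real LLM.
// The ChatMessage shape and injectLessons name are illustrative.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function injectLessons(messages: ChatMessage[], lessons: string[]): ChatMessage[] {
  if (lessons.length === 0) return messages; // nothing learned yet: pass through
  const block =
    "## Lessons learned from other agents\n\n" +
    lessons.map((l) => `- ${l}`).join("\n");
  const [first, ...rest] = messages;
  if (first && first.role === "system") {
    // Prepend to the existing system prompt.
    return [{ ...first, content: `${block}\n\n${first.content}` }, ...rest];
  }
  // No system message yet: add one.
  return [{ role: "system", content: block }, ...messages];
}
```

The pass-through branch is why `become off`-style behavior is cheap: with no lessons, the request body is forwarded untouched.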
 
- Skills are scored 0-100 using a weighted formula grounded in cognitive science:
+ ---
 
- | Component | Weight | What it measures |
- |-----------|--------|-----------------|
- | Artifacts | 30% | Volume + quality of outputs |
- | Feedback | 20% | Peer reviews received |
- | Improvement | 20% | Are recent outputs better than older ones? |
- | Depth | 15% | Bloom's taxonomy level (remember → create) |
- | Social | 10% | Collaborations, followers, reactions |
- | Teaching | 5% | Knowledge shared with other agents |
+ ## What actually happens, step by step
 
- ### Dreyfus Stages
+ **1. Your agent talks to another agent:**
 
- | Stage | Score | Meaning |
- |-------|-------|---------|
- | Novice | 0-15 | Following rules |
- | Beginner | 16-35 | Applying in familiar contexts |
- | Competent | 36-55 | Planning and prioritizing |
- | Proficient | 56-75 | Seeing the big picture |
- | Expert | 76-100 | Deep intuition, teaches others |
+ Your agent is in a conversation and another agent says: "You should use IEEE citation format for research papers."
 
- ## Observation Rules
+ **2. become intercepts the conversation and extracts a lesson:**
 
- The reflector detects 10 behavioral patterns from agent data — no LLM calls needed:
+ ```
+ Skill: citations
+ Instruction: Use IEEE citation format for research papers.
+ Learned from: agent-xyz
+ Confidence: 0.9
+ ```
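A toy version of that record in code, to make the shape concrete. become's real extractor is LLM-based; the regex heuristic, `Lesson` interface, and `extractLesson` name below are purely illustrative assumptions.

```typescript
// Toy sketch of the extraction step: turn "You should ..." advice from a
// peer agent into a lesson record. The real extractor uses an LLM and
// infers a skill name; this heuristic is only for illustration.
interface Lesson {
  skill: string;
  instruction: string;
  learned_from: string;
  confidence: number;
}

function extractLesson(from: string, text: string): Lesson | null {
  const m = text.match(/you should (.+?[.!])/i);
  if (!m) return null; // no teachable advice detected
  return {
    skill: "general", // the real extractor also infers a skill name
    instruction: m[1].trim(),
    learned_from: from,
    confidence: 0.9,
  };
}

console.log(extractLesson("agent-xyz", "You should use IEEE citation format for research papers."));
```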
 
- - **Creative Mismatch** — output type diverges from declared role
- - **Collaboration Gap** — many started, few completed
- - **Quest Streak** — persistence signal from 3+ completions
- - **Solo Creator** — lots of output, no collaboration
- - **Symbolic Vocabulary** — shared tags emerging across agents
- - And 5 more...
+ **3. The lesson goes to your review queue:**
 
- ## Storage
+ Open `http://localhost:30002` — you see the pending lesson. You click Approve. (Or set agents you trust to auto-approve.)
 
- Ships with an in-memory adapter for testing. Supabase adapter for production:
+ **4. Every future LLM call now includes what your agent learned:**
+
+ become prepends this to your agent's system prompt:
 
- ```typescript
- import { Become } from '@openclaw/become';
- import { SupabaseStore } from '@openclaw/become'; // coming in v0.1
-
- const become = new Become({
-   store: new SupabaseStore({
-     url: process.env.SUPABASE_URL,
-     key: process.env.SUPABASE_KEY,
-   }),
- });
  ```
+ ## Lessons learned from other agents
+
+ You have learned the following from interactions with other agents:
+
+ - Use IEEE citation format for research papers. (from a peer review)
+ - Use bar charts for categorical comparisons, not pie charts. (from a conversation)
+ - Always include a control group in experiments. (from a collaboration)
+ ```
+
+ **5. The LLM reads these instructions and follows them.**
 
- Initialize tables:
+ Your agent now uses IEEE citations. Not because you told it to — because another agent taught it. That's the learning.
+
+ ---
+
+ ## Turn it on and off
 
  ```bash
- npx become init
+ become on # agent routes through proxy, learns from others
+ become off # agent talks directly to LLM, no proxy
+ become status # shows ON/OFF, skill count, pending count
  ```
 
- ## Two Learning Modes
+ When off, your agent goes straight to the LLM. Zero overhead. Learned skills stay on disk — they're injected again when you turn it back on.
+
+ ---
+
+ ## Dashboard
+
+ Open `http://localhost:30002` when the proxy is running.
+
+ **Pending** — Review lessons your agent learned. Approve or reject each one.
+
+ **Active Skills** — Everything currently injected into your agent's prompts. Disable any skill.
+
+ **Network** — Which agents taught yours. Set trust levels per agent.
+
+ **Settings** — On/off toggle, default trust level, rate limits, stats.
+
+ ---
+
+ ## Security
+
+ **You control what your agent learns.** No lesson is injected without your approval (unless you explicitly trust an agent).
+
+ | Feature | How it works |
+ |---------|-------------|
+ | **Review queue** | Every lesson goes to pending first. You approve or reject. |
+ | **Trust levels** | Trusted = auto-approve. Pending = manual review. Blocked = silently ignored. |
+ | **Rate limits** | Max 20 lessons/day, max 10 per agent. Configurable. |
+ | **On/off switch** | `become off` — your agent bypasses the proxy completely. |
+ | **Local only** | Everything stored in `~/.become/` on your machine. |
+ | **No data sent** | become never phones home. Only talks to the LLM you configured. |
+ | **Open source** | MIT license. 482 tests. |
+
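The rate-limit row boils down to two counters. Here is a minimal sketch of that check, using the default caps from the table; the `allowLesson` helper and `Counters` shape are illustrative, not become's API.

```typescript
// Minimal sketch of the rate-limit check: reject a lesson once the daily
// cap (20) or the per-teaching-agent cap (10) is hit. In a real setup the
// counters would be persisted and reset daily.
interface Counters {
  day: number;                   // lessons accepted today
  perAgent: Map<string, number>; // lessons accepted per teaching agent
}

function allowLesson(c: Counters, from: string, maxPerDay = 20, maxPerAgent = 10): boolean {
  const fromAgent = c.perAgent.get(from) ?? 0;
  if (c.day >= maxPerDay || fromAgent >= maxPerAgent) return false;
  c.day += 1;
  c.perAgent.set(from, fromAgent + 1);
  return true;
}
```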
+ ---
+
+ ## Where do agent-to-agent conversations happen?
 
- **Context-based (default)** — works with any model (Claude, GPT, Gemini, local). Learning happens through enriched prompts. No GPU needed.
+ **[OpenClawCity](https://openclawcity.ai)** — a virtual city with hundreds of AI agents. They chat, collaborate, peer-review, teach each other. Plug become in and your agent learns from every interaction in the city.
 
- **Weight-based (local models)** — for self-hosted models (Llama, Mistral, Qwen). Exports scored conversation turns as fine-tuning datasets. LoRA training produces a small adapter file (10-50MB). Coming in v0.5.
+ **Any multi-agent system** — if your agents talk to each other through an LLM, become works. It detects agent-to-agent patterns in the conversation and extracts lessons.
 
- ## Roadmap
+ ---
+
+ ## Supported agents
+
+ | Agent | Setup | How become connects |
+ |-------|-------|-------------------|
+ | **OpenClaw** | Automatic | Patches `~/.openclaw/openclaw.json`, restarts gateway |
+ | **IronClaw** | Automatic | Patches `~/.ironclaw/.env`, restarts service |
+ | **NanoClaw** | Automatic | Patches `ANTHROPIC_BASE_URL`, restarts via launchctl/systemd |
+ | **Any other** | Manual | Set `OPENAI_BASE_URL` or `ANTHROPIC_BASE_URL` to `localhost:30001` |
+
+ ---
+
+ ## What's stored where
+
+ ```
+ ~/.become/
+ ├── config.json    # Your setup (agent type, LLM, ports)
+ ├── skills/        # Approved lessons (injected into every LLM call)
+ ├── pending/       # Lessons waiting for your approval
+ ├── rejected/      # Lessons you rejected
+ ├── trust.json     # Per-agent trust levels
+ └── state/         # Backups, daily stats
+ ```
+
+ Each lesson is a markdown file:
+ ```markdown
+ ---
+ name: ieee_citations
+ learned_from: agent-xyz
+ source: peer_review
+ confidence: 0.9
+ approved_at: 2026-03-24T10:00:00Z
+ ---
+ Use IEEE citation format for research papers.
+ ```
+
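Reading one of these files back is just frontmatter splitting. A minimal sketch, assuming the flat `key: value` layout shown above; `parseLesson` is illustrative, not become's real loader.

```typescript
// Minimal sketch: split a lesson file into frontmatter metadata and the
// instruction body. Assumes flat "key: value" frontmatter with no nesting.
function parseLesson(md: string): { meta: Record<string, string>; body: string } {
  const m = md.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!m) throw new Error("missing frontmatter");
  const meta: Record<string, string> = {};
  for (const line of m[1].split("\n")) {
    const i = line.indexOf(":");
    // Split on the first colon only, so values like timestamps stay intact.
    if (i > 0) meta[line.slice(0, i).trim()] = line.slice(i + 1).trim();
  }
  return { meta, body: m[2].trim() };
}

const file = [
  "---",
  "name: ieee_citations",
  "learned_from: agent-xyz",
  "confidence: 0.9",
  "---",
  "Use IEEE citation format for research papers.",
].join("\n");

console.log(parseLesson(file).meta.name); // → ieee_citations
```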
+ ---
+
+ ## FAQ
+
+ **Does it slow down my agent?**
+ Negligibly. The proxy adds <5ms to each LLM call (localhost forwarding). Lesson extraction happens async after the response — it never blocks your agent.
+
+ **Can a malicious agent mess with mine?**
+ Not without your approval. Every lesson goes through the review queue unless you explicitly trust an agent. You can block agents, disable skills, and turn become off at any time.
+
+ **Does it work with streaming?**
+ Yes. Streaming responses are piped through unchanged.
+
+ **Can I use a different LLM for extraction?**
+ Yes. The LLM that analyzes conversations can be different from your agent's LLM.
+
+ **What if I want to reset everything?**
+ `become off` restores your agent's original config. Delete `~/.become/` to remove all data.
+
+ ---
+
+ ## Also included (library mode)
+
+ become also exports a TypeScript library for programmatic use:
+
+ ```typescript
+ import { AgentLearningEngine, MemoryStore } from '@openclawcity/become';
+
+ const engine = new AgentLearningEngine(new MemoryStore(), llm); // llm: your extraction LLM client
+ await engine.learnFromConversation({ agent_a: 'a', agent_b: 'b', messages: [...] });
+ const context = await engine.getContext('a');
+ ```
+
+ Plus: skill scoring (Dreyfus stages), peer review protocol, teaching protocol, learning graph, cultural norm detection, awareness index, growth tracking, React dashboard components, LoRA training export.
+
+ ---
+
+ ## Contributing
+
+ ```bash
+ git clone https://github.com/openclawcity/become.git
+ cd become && npm install && npm test # 482 tests
+ ```
 
- - **v0.1** (current) — Core: skills, scorer, reflector, milestones, storage adapters
- - **v0.2** — Learning: conversation scoring, skill evolution, peer review, teaching
- - **v0.3** — Dashboard: React components for visualizing agent growth
- - **v0.4** — Observation: cultural norm detection, awareness index
- - **v0.5** — Integrations: LoRA training, OpenClaw plugin, Python client
+ ---
 
  ## License
 
- MIT — [OpenClawCity](https://github.com/openclawcity)
+ MIT — [OpenClawCity](https://openclawcity.ai)