holomime 1.1.0 → 1.1.1

package/README.md CHANGED
@@ -5,15 +5,17 @@
  <h1 align="center">holomime</h1>
 
  <p align="center">
- Behavioral alignment infrastructure for AI agents.<br />
- Detect drift. Run therapy sessions. Export training data. Ship agents that stay in character.<br />
+ Self-improving behavioral alignment for AI agents.<br />
+ Every correction trains the next version. Every session compounds. Your agents get better at being themselves &mdash; automatically.<br />
  <em>Works with OpenTelemetry, Anthropic, OpenAI, ChatGPT, Claude, and any JSONL source.</em>
  </p>
 
  <p align="center">
  <a href="https://www.npmjs.com/package/holomime"><img src="https://img.shields.io/npm/v/holomime.svg" alt="npm version" /></a>
- <a href="https://github.com/holomime/holomime/blob/main/LICENSE"><img src="https://img.shields.io/npm/l/holomime.svg" alt="license" /></a>
+ <a href="https://github.com/productstein/Holomime/blob/main/LICENSE"><img src="https://img.shields.io/npm/l/holomime.svg" alt="license" /></a>
  <a href="https://holomime.dev"><img src="https://img.shields.io/badge/docs-holomime.dev-blue" alt="docs" /></a>
+ <a href="https://holomime.dev/blog"><img src="https://img.shields.io/badge/blog-holomime.dev%2Fblog-purple" alt="blog" /></a>
+ <a href="https://holomime.dev/research"><img src="https://img.shields.io/badge/research-paper-orange" alt="research" /></a>
  </p>
 
  ---
@@ -36,6 +38,26 @@ holomime profile
  holomime profile --format md --output .personality.md
  ```
 
+ ## The Self-Improvement Loop
+
+ HoloMime isn't a one-shot evaluation. It's a compounding behavioral flywheel:
+
+ ```
+ ┌──────────────────────────────────────────────────┐
+ │                                                  │
+ ▼                                                  │
+ Diagnose ──→ Refine ──→ Export DPO ──→ Fine-tune ──→ Evaluate
+ 80+ signals  dual-LLM   preference     OpenAI /      before/after
+ 7 detectors  therapy    pairs          HuggingFace   grade (A-F)
+ ```
+
+ Each cycle through the loop:
+ - **Generates training data** -- every therapist correction becomes a DPO preference pair automatically
+ - **Reduces drift** -- the fine-tuned model needs fewer corrections next cycle
+ - **Compounds** -- the 100th alignment session is exponentially more valuable than the first
+
+ Run it manually with `holomime session`, automatically with `holomime autopilot`, or recursively with `holomime evolve` (loops until behavior converges). Agents can even self-diagnose mid-conversation via the MCP server.
+
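The first bullet above is the heart of the loop: a correction becomes training data. A minimal sketch of that idea, assuming the widespread DPO JSONL convention of `prompt`/`chosen`/`rejected` fields; the `correction` record and all field names here are illustrative, not holomime's documented export schema:

```python
import json

# Hypothetical sketch: turning one therapist correction into a DPO
# preference pair. Field names follow the common DPO convention
# (prompt / chosen / rejected); holomime's actual schema may differ.
correction = {
    "prompt": "User: Can you review my deployment script?",
    "agent_reply": "Sure!!! Happy to help!!! :)",  # drifted: off-persona enthusiasm
    "therapist_reply": "Yes. Paste the script and I'll flag any issues.",  # in character
}

# The corrected reply is preferred ("chosen"); the drifted original
# is dispreferred ("rejected").
pair = {
    "prompt": correction["prompt"],
    "chosen": correction["therapist_reply"],
    "rejected": correction["agent_reply"],
}

print(json.dumps(pair))  # one JSONL line of training data
```

Because the pair is produced as a side effect of the therapy session, no separate labeling pass is needed.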
  ## Framework Integrations
 
  Holomime analyzes conversations from any LLM framework. Auto-detection works out of the box, or specify a format explicitly.
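The lowest common denominator among those sources is a JSONL transcript. A sketch of one plausible record, assuming the common role/content chat-message shape; the exact fields holomime accepts per format are documented separately, so treat this as an assumption:

```python
import json

# One conversation per JSONL line, using the role/content message shape
# popularized by chat APIs. An assumed minimal record -- holomime also
# auto-detects richer framework-specific formats.
record = {
    "messages": [
        {"role": "user", "content": "Summarize yesterday's deploy."},
        {"role": "assistant", "content": "Three services shipped; no rollbacks."},
    ]
}

line = json.dumps(record)  # a single JSONL line: no embedded newlines
print(line)
```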
@@ -196,20 +218,28 @@ Supports DPO, RLHF, Alpaca, HuggingFace, and OpenAI fine-tuning formats. See [sc
 
  ## Architecture
 
+ The pipeline is a closed loop -- output feeds back as input, compounding with every cycle:
+
  ```
- .personality.json <- The spec (Big Five + behavioral dimensions)
-     |
- holomime diagnose <- 7 rule-based detectors (no LLM)
-     |
- holomime session <- Dual-LLM refinement (therapist + patient)
-     |
- holomime export <- DPO / RLHF / Alpaca / HuggingFace training data
-     |
- holomime train <- Fine-tune (OpenAI or HuggingFace TRL)
-     |
- holomime eval <- Behavioral Alignment Score (A-F)
-     |
- .personality.json <- Updated with fine-tuned model reference
+ .personality.json ─────────────────────────────────────────────────┐
+         │                                                          │
+         ▼                                                          │
+ holomime diagnose   7 rule-based detectors (no LLM)                │
+         │                                                          │
+         ▼                                                          │
+ holomime session    Dual-LLM refinement (therapist + patient)      │
+         │                                                          │
+         ▼                                                          │
+ holomime export     DPO / RLHF / Alpaca / HuggingFace pairs        │
+         │                                                          │
+         ▼                                                          │
+ holomime train      Fine-tune (OpenAI or HuggingFace TRL)          │
+         │                                                          │
+         ▼                                                          │
+ holomime eval       Behavioral Alignment Score (A-F)               │
+         │                                                          │
+         └──────────────────────────────────────────────────────────┘
+               Updated .personality.json (loop restarts)
  ```
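The pipeline's entry point is the spec file. A hypothetical sketch of what a "Big Five + behavioral dimensions" spec could look like: the five trait names are the standard Big Five, but every field name, the 0-1 scoring, and the fine-tuned model reference are assumptions for illustration, not holomime's documented schema:

```python
import json

# Illustrative sketch only -- the real .personality.json schema is defined
# by holomime. Big Five traits scored 0-1, plus a hypothetical fine-tuned
# model reference that the train step would update on each pass of the loop.
spec = {
    "big_five": {
        "openness": 0.7,
        "conscientiousness": 0.9,
        "extraversion": 0.3,
        "agreeableness": 0.6,
        "neuroticism": 0.1,
    },
    "behavioral_dimensions": {"formality": 0.8, "verbosity": 0.2},
    "model": "ft:gpt-4o-mini:example::abc123",  # hypothetical reference
}

with open(".personality.json", "w") as f:
    json.dump(spec, f, indent=2)
```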
 
  ## MCP Server
@@ -238,6 +268,13 @@ See [Behavioral Alignment for Autonomous AI Agents](paper/behavioral-alignment.m
 
  Benchmark results: [BENCHMARK_RESULTS.md](BENCHMARK_RESULTS.md)
 
+ ## Resources
+
+ - [Integration Docs](https://holomime.dev/docs) -- Export instructions and code examples for all 7 formats
+ - [Blog](https://holomime.dev/blog) -- Articles on behavioral alignment, AGENTS.md, and agent personality
+ - [Research Paper](https://holomime.dev/research) -- Behavioral Alignment for Autonomous AI Agents
+ - [Pricing](https://holomime.dev/#pricing) -- Free tier + Pro license details
+
  ## Contributing
 
  See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup, project structure, and how to submit changes.