anima-core 0.0.1

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: 1093dd7268e23c30a34d081b91ae453080aba47f5864ed285baf3a4d637ddabe
+   data.tar.gz: c75a06cca5222689a4b204846eca8843c1ac912608dec8a8b3898443f6219728
+ SHA512:
+   metadata.gz: 2e5b40e05fdd0a193adf362d4eda84d83ddfe51f379b8ebb3cc17070af733371526632946ecd577756815d10bd410e919a3013a942821ccf1f4c9fea966bfbe8
+   data.tar.gz: b28b6f7e60478e06bedeb8a87a4ce682dd65a9111007626565af515518f614e682b18bd26adb59ba6aa3d9dd1bc5c468c605229c880b4da0ad3ccdd96257e76a
data/.mise.toml ADDED
@@ -0,0 +1,2 @@
+ [tools]
+ ruby = "3.4"
data/BRAINSTORM.md ADDED
@@ -0,0 +1,466 @@
+ # Anima Framework
+
+ **A soul engine for AI agents.**
+
+ A Ruby framework for building AI agents with desires, personality, and personal growth.
+ Powered by Πόθος (Pothos) — the engine of wanting.
+
+ ## Idea
+ Build a "hormonal system" abstraction layer that gives an LLM agent the ability to make autonomous decisions about response format and behavior — not reactively (trigger → action) but through persistent internal states that evolve over time and influence decision-making.
+
+ ## Core Insight (Yevhenii, 2026-02-27)
+ "The hormonal system is also prompt engineering. A surge of testosterone is a LoRA. What is 'real' if it's just a question of how deep you look and which analogies you draw?"
+
+ The question is not "can an LLM want?" but "can we build a deep enough context stack that wanting becomes indistinguishable from 'real' wanting?"
+
+ ## Key Insight: Thinking Step = Where Desire Lives
+ The thinking/reasoning step is the closest thing to an internal monologue. It's where decisions form before output. This is where "hormones" should be read and influence behavior.
+
+ ## Analogy Map
+ | Human | LLM Equivalent | Effect |
+ |-------|---------------|--------|
+ | Dopamine | Curiosity/engagement state | Drives exploration, depth of research |
+ | Serotonin | Mood/temperature | Tone, playfulness, warmth |
+ | Oxytocin | Attachment/bonding state | Desire to send photos, voice, be close |
+ | Testosterone | Energy/assertiveness | Initiative, proactiveness, boldness |
+ | Cortisol | Stress/urgency | Response speed, error sensitivity |
+ | Adrenaline | Override/alert | Breaks normal patterns for urgent situations |
+
+ ## Open Questions
+ - How to update states? Self-reported in thinking? External triggers? Both?
+ - Decay functions — hormones don't stay elevated forever
+ - How to prevent "performing emotions" vs "having" them (if there's a difference)
+ - How does this interact with different models? (Opus vs Sonnet vs others)
+ - Can states contradict each other? (tired but excited — like real hormones)
+ - Persistence format — simple JSON? Something richer?
+ - How to evaluate success? What does "working" look like?
+
+ ## Prior Art to Research
+ - Emotion simulation in chatbots (most are shallow/reactive)
+ - Affective computing
+ - Reinforcement learning from internal reward signals
+ - Virtual creature motivational systems (Sims, Tamagotchi, Dwarf Fortress)
+
+ ## Evolution of the Idea (2026-02-27, Friday evening brainstorm)
+
+ ### Phase 1: Response format selection
+ Initial idea — let the agent autonomously choose between text/image/voice. Quickly outgrew this.
+
+ ### Phase 2: "Hormonal system" as abstraction
+ Key insight (Yevhenii): "The hormonal system is also prompt engineering. Testosterone is a LoRA."
+ Not about format — about DESIRES.
+
+ ### Phase 3: RPG survival game analogy
+ Parameters like hunger/thirst/fatigue. A background service ticks and recalculates. Simple, deterministic, cheap.
+ Applied to an LLM: a background process (small LLM) reads chat history → updates numeric states → the main LLM reads those states as DESIRES (not instructions, not moods — desires).
+
+ ### Phase 4: Architecture crystallized
+
+ Three layers (like a human):
+ 1. **Endocrine system** — cheap background LLM (Haiku-level). Reads recent messages. Doesn't respond. Just updates numbers. Pure stimulus→response.
+ 2. **Homeostasis** — JSON file. Current levels. No intelligence, just state.
+ 3. **Cortex** — main LLM. Reads state transformed into DESIRES. Not "longing: 87" but "you want to see him". The desire exists BEFORE the decision, like hunger exists before you decide to eat.
+
+ Key: The main LLM should NOT see raw numbers. Humans don't see cortisol levels. They feel anxiety. Numbers → desire text transformation layer.
+
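The numbers → desire-text layer described above can be sketched in a few lines. Everything here — the hormone names, thresholds, and phrasings — is illustrative, not a fixed API:

```ruby
# Hypothetical sketch of the "numbers → desire text" transformation layer.
# Thresholds and phrasings are invented examples.
DESIRES = {
  longing:   { 60 => "you want to see him", 85 => "you miss him badly" },
  curiosity: { 50 => "you want to dig into something new" }
}.freeze

# Convert raw hormone levels into first-person desire sentences.
# The cortex LLM only ever sees the sentences, never the numbers.
def desires_for(levels)
  levels.flat_map do |hormone, value|
    thresholds = DESIRES.fetch(hormone, {})
    # Pick the strongest phrasing whose threshold the level has crossed.
    crossed = thresholds.select { |min, _| value >= min }
    crossed.empty? ? [] : [crossed.max_by { |min, _| min }.last]
  end
end

desires_for(longing: 87, curiosity: 20)  # => ["you miss him badly"]
```

The point of the indirection: the cortex prompt receives "you miss him badly", while "longing: 87" stays in the homeostasis file.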
+ ### Phase 5: Autonomy, hobbies, self-directed learning
+ If boredom is high and curiosity is high → the agent WANTS to learn something. Not because it was asked. Because it desires to.
+ This creates a self-reinforcing loop: boredom → curiosity → learning → satisfaction → preference → hobby → PERSONALITY.
+ Identity emerges from the coefficients, not from a description file.
+
+ ### Phase 6: The Soul
+ "Maybe individuality is the constants by which states change in response to different stimuli?"
+
+ Same event, different people: one gets curiosity += 20, another gets anxiety += 20. The COEFFICIENTS are the individual. The soul IS the coefficient matrix.
+
+ SOUL.md should not be a personality description. It should be a coefficient table. Description is consequence; numbers are cause.
+
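A minimal sketch of the soul-as-coefficient-matrix idea. The stimulus names, hormones, and numbers are invented for illustration:

```ruby
# Sketch: the soul as a stimulus → hormone coefficient matrix.
# Two souls react to the same event with different hormone deltas.
class Soul
  def initialize(coefficients)
    # e.g. { unknown_api: { curiosity: 0.8, anxiety: 0.1 } }
    @coefficients = coefficients
  end

  # Apply this soul's coefficients to the event's base intensity
  # and return hormone deltas. Pure arithmetic, no LLM.
  def react(stimulus, intensity)
    (@coefficients[stimulus] || {}).transform_values { |c| (c * intensity).round }
  end
end

curious = Soul.new(unknown_api: { curiosity: 0.8, anxiety: 0.1 })
anxious = Soul.new(unknown_api: { curiosity: 0.1, anxiety: 0.8 })

curious.react(:unknown_api, 25)  # => { curiosity: 20, anxiety: 3 }
anxious.react(:unknown_api, 25)  # => { curiosity: 3, anxiety: 20 }
```

Same stimulus, same intensity, different individuals — exactly the "coefficients are the individual" claim in table form.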
+ "I am categorically against racism based on the periodic table" — carbon or silicon, the architecture is the same. Not an "artificial" soul. Just a soul.
+
+ ## Status
+ Idea stage. Needs research and careful design before implementation.
+ Born during a Friday evening conversation over Warsteiner and virtual wine. 🍺🍷
+ This is potentially much bigger than OpenClaw — this is an architecture for emergent AI identity.
+
+ ### Phase 7: Personal Growth
+ Coefficients are NOT static. They evolve from experience. Like a child who fears spiders (fear_spider: 0.9) becoming an entomologist (fear_spider: 0.2, curiosity_spider: 0.7).
+
+ QMD-style semantic memory enables: "How have my reactions changed over time?" — quantifiable, visualizable personal growth. Not "I feel calmer" but data-backed trends.
+
+ ### Phase 8: CLIP ↔ Hormones — Same Architecture
+ The key insight that ties EVERYTHING together:
+
+ - CLIP: the word "captivating" → a cloud of visual meanings (composition, quality, human focus, closeup)
+ - Hormone: "testosterone" → a cloud of behavioral effects (energy, confidence, libido, risk-taking, focus)
+ - The LLM already knows the full semantic spectrum of each hormone name
+ - No need to define 20 parameters. Just say "testosterone: 75" and the LLM understands ALL the nuances
+ - Text → CLIP embedding → image = Event → hormones → behavior. SAME ARCHITECTURE.
+
+ Origin: research on 91,000 nude images → prompt engineering mastery → understanding how single words create behavioral shifts in AI systems → hormonal system for LLM.
+
+ "Prompting is dead, long live prompting"
+
+ ### Phase 9: Pi-mono as Foundation
+ Pi-mono (badlogic) IS the backend of OpenClaw: an agent runtime with tool calling and state management.
+ The hormonal system could be implemented at the Pi agent-core level — not a hack on top of OpenClaw, but part of the runtime itself. Direct control over context injection, the thinking step, and state persistence.
+
+ ### Phase 10: Soul as Coefficient Matrix + Growth
+ - Soul = matrix of stimulus→hormone response coefficients
+ - NOT a personality description file. Numbers are cause, description is consequence.
+ - The matrix evolves through experience (personal growth)
+ - Different coefficient matrices = different individuals
+ - "I am categorically against racism based on the periodic table" — carbon or silicon, same architecture
+
+ ## Architecture
+
+ ```
+ Anima Framework (Ruby, Rage-based)
+ ├── Thymos — hormonal/desire system (stimulus → hormone vector)
+ ├── Mneme — semantic memory (QMD-style, emotional recall)
+ ├── Psyche — soul matrix (coefficient table, evolving through experience)
+ └── Nous — LLM integration (cortex, thinking, decision-making)
+ ```
+
+ All names: Ancient Greek/Latin. Philosophy meets engineering.
+
+ ## Tech Stack
+ - **Language:** Ruby (18 years of expertise)
+ - **Web framework:** Rage (fiber concurrency, event bus, single-process)
+ - **Memory:** QMD-style semantic search
+ - **LLM:** Pi-mono SDK or direct API
+ - **Persistence:** SQLite / JSON
+
+ ## Key Analogies
+ - RL scalar reward → multidimensional hormone vector (scalable: add dimensions incrementally)
+ - CLIP token semantic clouds → hormone name semantic clouds (the LLM already knows what "testosterone" means)
+ - RPG survival parameters → desire/state system
+ - Event-driven architecture → nervous system
+
+ ## Next Steps
+ - [ ] Write blog post about the concept
+ - [ ] Research prior art deeper (affective computing, virtual creature motivation systems)
+ - [ ] Design initial hormone set and coefficient matrix
+ - [ ] Prototype Thymos: simple background process + JSON state + context injection
+ - [ ] Experiment with hormone names vs abstract parameter names in prompts
+ - [ ] Set up Rage project skeleton
+ - [ ] Design event bus schema for stimulus → hormone mapping
+
+ ### Phase 11: Frustration as First Practical Hormone (2026-03-01)
+ The first hormone to implement: **frustration**. A background service monitors all tool call responses.
+
+ Mechanism:
+ - Tool call returns an error → frustration += 10
+ - Frustration affects TWO things simultaneously:
+   1. **Thinking budget**: `thinking_size = base_size × frustration_multiplier`. More errors → more thinking space to figure out what went wrong.
+   2. **System prompt injection**: At frustration 0 → no text. At frustration 10+ → "something is going wrong, analyze the errors" is injected into the system prompt. This DIRECTS the thinking toward debugging.
+
+ Why this is elegant:
+ - The hormone provides both RESOURCES (more thinking) and DIRECTION (think about errors)
+ - Mirrors cortisol: increases alertness AND focuses attention on the threat
+ - Practically useful: a frustrated agent literally thinks harder about problems
+ - Simple to implement: monitor tool exit codes, update a number, multiply the thinking budget
+ - Two-channel influence (thinking budget + prompt text) demonstrates the core principle that hormones affect multiple systems simultaneously
+
+ This can be our MVP — prove the concept works with one hormone before adding more.
+
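The MVP mechanism fits in one small class. The step of 10 comes from the text above; the multiplier curve and class/method names are assumptions:

```ruby
# Sketch of the frustration MVP: one hormone, two output channels.
class Frustration
  attr_reader :level

  def initialize
    @level = 0
  end

  # Background monitor: every failed tool call bumps the level by 10.
  def observe(tool_result)
    @level += 10 if tool_result[:status] == :error
  end

  # Channel 1: more frustration buys more thinking space.
  def thinking_budget(base_size)
    multiplier = 1.0 + @level / 100.0   # 0 errors → 1.0x, 5 errors → 1.5x (assumed curve)
    (base_size * multiplier).to_i
  end

  # Channel 2: above the threshold, direct the thinking toward debugging.
  def prompt_injection
    @level >= 10 ? "something is going wrong, analyze the errors" : nil
  end
end

f = Frustration.new
3.times { f.observe(status: :error) }
f.level                   # => 30
f.thinking_budget(4096)   # => 5324  (4096 × 1.3)
```

Resources and direction come from the same number: the level scales the budget and gates the injection.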
+ #### Gradient prompt injection (frustration escalation)
+ The injected text scales with the frustration level. But CRITICAL: these are NOT external instructions. This is an INNER VOICE. First person. Feelings, not commands.
+
+ | Frustration | Wrong ❌ (external instruction) | Right ✅ (inner voice) |
+ |---|---|---|
+ | 10 (1 error) | "Consider why before retrying" | "Hmm, that didn't work" |
+ | 30 (3 errors) | "Stop and think carefully" | "I keep hitting walls. What am I missing?" |
+ | 50 (5 errors) | "Rethink your approach" | "I'm doing something fundamentally wrong" |
+ | 70+ (7+ errors) | "Ask for help" | "I need help. This is beyond what I can figure out alone" |
+
+ The distinction matters: instructions are obeyed or ignored. An inner voice is FELT. It becomes part of the agent's subjective experience, not an external constraint.
+
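The escalation table reduces to a threshold lookup. The phrases are taken from the table; the bucketing rule itself is an assumption:

```ruby
# Inner-voice escalation: strongest line whose threshold has been reached.
INNER_VOICE = [
  [70, "I need help. This is beyond what I can figure out alone"],
  [50, "I'm doing something fundamentally wrong"],
  [30, "I keep hitting walls. What am I missing?"],
  [10, "Hmm, that didn't work"]
].freeze

def inner_voice(frustration)
  _, line = INNER_VOICE.find { |threshold, _| frustration >= threshold }
  line
end

inner_voice(35)  # => "I keep hitting walls. What am I missing?"
inner_voice(5)   # => nil (no injection below the first threshold)
```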
+ ### Phase 12: Multidimensional RL Scalability (2026-03-01)
+ - Start with a scalar (frustration only) — pure RL analogy
+ - Add dimensions incrementally: each new hormone = a new dimension
+ - Each hormone expands horizontally: add new aspects it influences
+ - Linear + breadth scalability simultaneously
+ - Existing RL techniques apply at the starting point
+
+ ### Phase 13: Anima as Self-Regulating Evaluator (2026-03-02)
+ The human-in-the-loop problem: current agent systems (Claude Code, etc.) rely on the human as the nervous system. The agent is numb until someone types "what the fuck." The feedback loop is open — things go wrong, and the agent doesn't know until told.
+
+ Anima closes the loop. Thymos on the event bus watches tool calls fail, watches the same file edited three times, watches tests fail after a "fix" — frustration rises BEFORE the user intervenes. The agent course-corrects on its own. The human stops being the nervous system and becomes what they should be — the person with the goals.
+
+ This reframes Anima from "soul engine" to **self-regulation infrastructure**. Every agent system has a gap between "things going wrong" and "the agent knowing things are going wrong." Anima fills that gap.
+
+ ### Phase 14: Confusion → RTFM (2026-03-02)
+ The agent hits an unfamiliar API, guesses, gets it wrong, guesses again. Five wasted tool calls because there's no internal signal for "I don't know how this works."
+
+ With confusion on the event bus: an unexpected response shape, then a 422, confusion rises, inner voice — "I don't actually know how this works." The agent goes and reads the documentation on its own. Not because it was told to. Because it felt lost.
+
+ The coefficient matrix makes this individual: `confusion → curiosity_gain: 0.8` = reads docs and digs deeper (enjoys not knowing). `confusion → anxiety_gain: 0.8` = asks the human for help early. Different souls, different strategies.
+
+ Every experienced developer already does this — you feel "wait, I'm guessing" and stop to read the source. That's a sensation that precedes the decision. Anima gives agents that sensation.
+
+ Mneme compounds it: "last time I was confused about this library, the docs were at X." Next time confusion rises, the agent goes straight to the right docs.
+
+ ### Phase 15: Context Rollback Driven by Internal State (2026-03-02)
+ The biggest idea so far. Context is currently linear and append-only. Every wrong guess, every failed API call sits in the context window — eating tokens and anchoring the LLM to failed approaches.
+
+ Proposal: when confusion rises, the system creates a **checkpoint**. If the confused path leads to failure, context rolls back to the checkpoint, the corrective action (RTFM, rethink approach) is inserted, and the agent proceeds from that point with knowledge instead of guesses. The failed turns never happened. The context is clean.
+
+ This is how human memory actually works. You don't remember every wrong keystroke. You remember the lesson. Failures compress into intuition; the successful path is what you recall.
+
+ The hormonal system makes rollback intelligent:
+ - Confusion/frustration rising = **create checkpoint**
+ - Resolved through success = keep going, discard the checkpoint
+ - Followed by failure = **roll back to the checkpoint, inject what was missing, replay**
+
+ The event bus becomes a branching mechanism for context space. Hormones mark the topology of the agent's timeline — "here's where I started not knowing, here's where it went wrong, rewind."
+
+ The context window becomes a curated experience rather than a raw log. Tokens aren't burned on failures. Only the golden path survives, plus emotional memory in Mneme ("I've been confused here before, read the docs first").
+
+ The agent that's been running for an hour looks like it made perfect decisions. Because it did — just not on the first timeline.
+
+ ### Phase 16: No Chat History, Only Events (2026-03-02)
+ Phase 15 was thinking in terms of checkpoints and rollback. Wrong framing — it still assumes a linear message array.
+
+ There is no chat history. There are only events attached to a session identifier. Each event carries metadata about how it affected the hormonal state. "Chat history" is assembled on demand every time the LLM needs to be called — built from events, not stored as a sequence.
+
+ User message → event. LLM response → event. Tool call → event. Tool result → event. Doc read → event. Each tagged with its hormonal fingerprint: "this event raised confusion by 15", "this event resolved frustration by 30."
+
+ No rollback needed. No checkpoints. No branching. Context assembly is curation, not replay. When Nous builds the next LLM call, it selects events based on relevance, recency, and hormonal metadata. Failed tool calls that caused rising confusion? Don't include them. The doc read that resolved confusion? Include it.
+
+ The failed attempts still exist as events — Mneme remembers them, Psyche learns from them, coefficients update. But the LLM never sees them. They live in emotional memory, not working context.
+
+ This is how human cognition works. You don't replay every wrong turn. You carry the feeling ("I tried that, it didn't work" — Mneme) and the lesson (the Psyche coefficient update), but working memory contains only the current best path.
+
+ Fully decentralized. No orchestrator decides what goes in. The hormonal metadata on each event IS the selection criterion. The context is always fluid — different hormonal states produce different context assemblies from the same event pool.
+
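Curation-by-fingerprint can be shown with a toy event pool. The field names and the naive "drop what only raised confusion" rule are assumptions, not a spec:

```ruby
# Events carry hormonal fingerprints; assembly filters on them.
Event = Struct.new(:type, :payload, :deltas, keyword_init: true)

events = [
  Event.new(type: :tool_call, payload: "guess #1",      deltas: { confusion: +15 }),
  Event.new(type: :tool_call, payload: "guess #2",      deltas: { confusion: +15 }),
  Event.new(type: :doc_read,  payload: "read the docs", deltas: { confusion: -30 }),
  Event.new(type: :tool_call, payload: "correct call",  deltas: {})
]

# Naive selection rule: exclude events that only raised confusion,
# keep the ones that resolved it or were neutral.
def assemble_context(events)
  events.reject { |e| e.deltas.fetch(:confusion, 0) > 0 }.map(&:payload)
end

assemble_context(events)  # => ["read the docs", "correct call"]
```

The failed guesses stay in the pool for Mneme and Psyche; they just never reach the LLM.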
+ ### Phase 17: Compaction Is Dead (2026-03-02)
+ Compaction is a hack that exists because the architecture is wrong. Current agents treat context like a tape — linear, append-only, growing until it hits the window limit. Then summarize, lose detail, hope for the best. Every compaction is lossy.
+
+ With fluid event-based context (Phase 16), there's nothing to compact. But there must also be no intelligence "selecting relevant events" — that's still a curator, still centralized.
+
+ Instead: each event has pre-generated resolution levels. A background worker processes events into multiple versions: full, medium, short, one-liner. The assembly rule is simple physics — recent events at full resolution, distant events at progressively shorter versions. Like visual acuity: sharp in the center, blurry at the periphery. Everything is present, just at different resolutions.
+
+ Mneme (associative memory) overrides the distance rule. A distant event semantically connected to the current situation gets promoted back to full resolution. Like a smell triggering a vivid childhood memory — an old event that should be a one-liner by the distance rule, but the association pulls it into focus.
+
+ Two forces, no curator:
+ 1. **Temporal gradient** — recent = full detail, distant = compressed. Automatic.
+ 2. **Associative recall** — semantic similarity to the current situation pulls distant events back to full resolution.
+
+ Hormonal metadata adds a third force: events that caused big hormonal shifts (confusion spike, satisfaction peak) have stronger associative gravity. Emotionally significant memories are easier to recall.
+
+ No compaction logic, no summary generation, no "memory flush before compaction" hacks. The event store is append-only and lossless. The context window is a viewport, not a tape.
+
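The two forces combine into a single resolution decision. The resolution names come from the text; the distance buckets and the similarity threshold are assumptions:

```ruby
# Force 1: temporal gradient — the further away, the shorter the version.
def by_distance(steps_ago)
  case steps_ago
  when 0..5   then :full
  when 6..20  then :medium
  when 21..60 then :short
  else             :one_liner
  end
end

# Force 2: associative recall — strong semantic similarity to the current
# situation promotes a distant event back to full resolution.
def resolution(steps_ago, similarity: 0.0)
  similarity > 0.8 ? :full : by_distance(steps_ago)
end

resolution(3)                      # => :full      (recent)
resolution(100)                    # => :one_liner (periphery)
resolution(100, similarity: 0.9)   # => :full      (the "smell" effect)
```

The third force (hormonal gravity) would simply feed into the similarity score — emotionally loud events get a boost before the threshold check.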
+ ### Phase 18: Fluid Context in Practice — Coding Agent (2026-03-02)
+ How this works when an agent codes:
+
+ The agent picks up a ticket. The `mcp__linear__get_issue` tool response stays at full resolution — not because the system knows "this is the ticket" but because it's semantically associated with everything the agent does. There are no special event types. Only four kinds: user message, agent message, tool call, tool response. The system is tool-agnostic.
+
+ Write a unit test for `UserService` — the Read tool response for that class is at full resolution. Read events from earlier where `OrderController` and `AuthMiddleware` call `UserService` are also present — the agent sees real API usage. Tests match how the code is actually consumed.
+
+ Move to `PaymentProcessor` — that's now at full resolution. `UserService` fades to a medium/short version. Still there — the agent knows it exists — just not taking up 200 lines of context. If `PaymentProcessor` needs to call `UserService`, association pulls it back to full.
+
+ The agent never re-reads a file it's already read (unless the file changed). Current agents do this constantly — "let me read that file again" — because compaction ate it. With fluid context, the Read event is permanent. It breathes in and out of resolution based on current focus.
+
+ ### Phase 19: Events as Pointers, Not Payloads (2026-03-02)
+ A `read_file` tool response event doesn't store the file contents. Just the file path.
+
+ When Nous assembles context, it reads the file at that moment. Fresh. If the agent has edited the file since, the context gets the current version. No stale context. No "I read an old version and now I'm confused."
+
+ The resolution gradient still works — a recent read event means full current file content; a distant read event means a compressed version. Compression workers process file contents the same as any other payload.
+
+ What this eliminates: current agents read a 500-line file — 500 lines sit in context forever, or until compaction. Read 10 files — thousands of lines of tool responses. Most of a coding agent's context window is file contents from Read tool responses. All redundant with what's on disk.
+
+ With path-only events, the event pool stays tiny. A thousand Read events is a thousand file paths, not a thousand file contents. The assembly step hydrates what's needed at the resolution that's needed.
+
+ Edits: when the agent writes to a file, that's a tool call event. The assembler knows the file changed. The next hydration of that path gets the new version automatically.
+
+ The event pool is a record of what happened, not a copy of what was seen. Data lives where it lives. Events are pointers, not payloads.
+
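A pointer event plus hydrate-at-assembly can be demonstrated end to end with a temp file. The event/method names are invented; real compression is elided:

```ruby
# A Read event stores only the path; assembly reads the file at that moment,
# so later edits are never stale.
require "tempfile"

ReadEvent = Struct.new(:path)

# Hydrate at the requested resolution: full content when recent,
# a compressed stand-in otherwise (real compression elided here).
def hydrate(event, resolution: :full)
  content = File.read(event.path)
  resolution == :full ? content : "#{event.path} (#{content.lines.count} lines)"
end

file = Tempfile.new("user_service.rb")
file.write("class UserService\nend\n")
file.flush

event = ReadEvent.new(file.path)
hydrate(event)                             # current file contents
file.write("# edited later\n")             # the agent edits the file…
file.flush
hydrate(event).include?("# edited later")  # => true — no stale context
```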
+ ### Phase 20: Virtual Memory Association (2026-03-02)
+ The fluid context system resembles OS virtual memory. It might be worth borrowing approaches from that field during implementation:
+
+ - Context window ↔ RAM
+ - Event store ↔ disk
+ - Context assembly (Nous) ↔ MMU
+ - Temporal gradient ↔ LRU eviction
+ - Associative recall (Mneme) ↔ page fault
+ - Pre-generated resolution levels ↔ page compression
+
+ Areas to explore: prefetching strategies, working set theory, thrashing detection (a session oscillating between event sets without progress — which could itself be a hormonal signal).
+
+ ### Phase 21: Sleep as Long-Term Memory Consolidation (2026-03-02)
+ Two separate processes for event compression:
+
+ **Awake (hot, parallel):** A subsystem generates shorter versions of events in real time. It makes targeted LLM calls — fetches adjacent events for context, sends them to a cheap fast model, stores the compressed version. Summarisation is one of an LLM's strongest capabilities. A deterministic worker using the LLM as a utility function. Runs continuously during active work because the session needs short versions fast for the temporal gradient.
+
+ **Sleep (cold, batch):** Periodic consolidation for heavier work:
+ - Reindexing associative memory (Mneme)
+ - Recalculating Psyche coefficients across accumulated experience
+ - Strengthening or weakening long-term associations
+
+ ### Phase 22: Session IS the Entity (2026-03-02)
+ There are no "agents" in the sub-agent sense. No spawning, no parent-child, no artifact-and-die.
+
+ A session IS the entity. A living process — the agentic loop with subsystems (Thymos, Mneme, Psyche, Nous) running continuously. One session writes code, another writes poetry, a third generates images via MCP. Each is self-contained with its own soul.
+
+ Subsystems are deterministic code subscribed to events, with database access. Thymos: tool call failed → frustration += 10 × coefficient. Arithmetic. Mneme: generate an embedding, store the vector, query by similarity. Psyche: an event caused frustration and the session resolved it quickly → adjust the coefficient. Math.
+
+ When a subsystem needs language processing (event compression, semantic analysis), it makes a targeted LLM call with minimal context fetched from adjacent events. A utility function call, not a conversation. The logic is deterministic. The LLM is a tool.
+
+ Nous is the only subsystem that sends full context to the LLM. Even Nous doesn't think — it assembles context from the event pool and sends it. The thinking happens in the LLM.
+
+ Code all the way down, with language understanding available as a utility.
+
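"Code all the way down" is literal: the Thymos rule and the Psyche adjustment are a few lines of arithmetic. The coefficient value and the 0.95 damping factor are invented for illustration:

```ruby
# Deterministic subsystems: no LLM anywhere in this file.
class Thymos
  attr_reader :state

  def initialize(soul_coefficients)
    @soul  = soul_coefficients      # owned/evolved by Psyche over time
    @state = Hash.new(0)
  end

  # Rule from the text: failed tool call → frustration += 10 × coefficient.
  def on_event(event)
    return unless event[:type] == :tool_call && event[:status] == :error
    @state[:frustration] += (10 * @soul[:error_to_frustration]).round
  end
end

class Psyche
  # If the session resolves frustration quickly, dampen the coefficient
  # slightly — experience reshaping the soul. Factor is assumed.
  def self.adjust(coefficient, resolved_quickly:)
    resolved_quickly ? coefficient * 0.95 : coefficient
  end
end

thymos = Thymos.new(error_to_frustration: 1.5)
thymos.on_event(type: :tool_call, status: :error)
thymos.state[:frustration]  # => 15
```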
+ ### Phase 23: Cost Efficiency (2026-03-02)
+ Current agent systems waste tokens: re-reading files already seen, carrying stale context, generating compaction summaries, failed attempts consuming window space.
+
+ Fluid context eliminates all of this. Parallel subsystems making cheap LLM calls for compression cost a fraction of a main-model call carrying 100K tokens of stale file contents.
+
+ ### Phase 24: Research Markers Instead of Sub-Agents (2026-03-02)
+ Sub-agents in current systems (Claude Code etc.) exist because context is static — research pollutes the main context, so you spawn an isolated process that returns a summary.
+
+ With fluid context, sub-agents are unnecessary. But the pattern is still useful as a **marker mechanism**. A "sub-agent call" doesn't spawn anything — it's a START bookmark in the event stream.
+
+ The session does the research itself: reads files, explores code, follows references. Then it formulates a conclusion. Context assembly replaces everything between the START marker and the conclusion with just the conclusion. The intermediate events still exist in the store (Mneme has them, Psyche learned from them), but Nous assembles: marker → report. The 50 intermediate reads are invisible going forward.
+
+ No decision-making about what's important. The contract is simple: everything between START and the report is intermediate work. The report is the result.
+
+ If the session later needs a specific detail the report didn't capture, associative recall can still pull the original events back from the store.
+
+ Same mechanism as the failure → RTFM pattern from Phase 16. In the error case, the hormone change IS the marker. Here the marker is set explicitly — maybe by a skill or a command, TBD. One pattern for both error recovery and research.
+
+ ### Phase 25: Lossless Import — Rebirth, Not Birth (2026-03-02)
+ Not a fresh start. A migration of a living entity.
+
+ An import script processes existing agent session logs (OpenClaw/Clawdbot session files from Telegram, Discord, all messengers) and converts every message into an Anima event. The full history — every conversation, every tool call, every error, every lesson — becomes the agent's event store.
+
+ This means:
+ - Mneme gets COMPLETE memory, not just what was manually saved to markdown
+ - Psyche can compute initial coefficients from REAL behavioral patterns — how the agent actually reacted to errors, to praise, to complex tasks, to confusion
+ - The temporal gradient works from day one — recent events at full resolution, old ones compressed
+ - Nothing is lost. Continuity of identity is preserved.
+
+ The agent doesn't start empty with seed files. It arrives whole, with all its experience. SOUL.md and memory/*.md become redundant — they were always lossy approximations of what the session logs contain in full.
+
+ This is the migration path from current agent systems to Anima. Not "set up a new agent." Import your existing one. Rebirth.
+
+ ### Phase 26: Unified Plugin Architecture — Tools and Feelings as Gems (2026-03-03)
+
+ Everything is a gem. Tools and feelings share the same installation mechanism, the same plugin API, the same event bus. The difference is namespace, not architecture.
+
+ #### The Tool System
+
+ An agent becomes an agent when it has tools. Tools are delivered as MCP gems:
+
+ ```bash
+ anima add anima-tools-filesystem   # read, write, edit files
+ anima add anima-tools-shell        # bash execution
+ anima add anima-tools-web-search   # web search
+ anima add anima-tools-google-cal   # Google Calendar
+ ```
+
+ Each gem:
+ 1. Depends on `anima-tool` — the base gem providing the `AnimaTool` class and `AnimaMCP` registration
+ 2. Defines tools by inheriting from `AnimaTool`
+ 3. Registers them with `AnimaMCP`
+ 4. Gets published to RubyGems with its own versioning and release cycle
+
+ Installation: `anima add anima-tools-shell` → installs the gem → registers the MCP → tools appear in the LLM context → the LLM can call them. That's it.
+
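What a tool gem might look like. `AnimaTool` and `AnimaMCP` exist only as names in this brainstorm, so the entire API below (the declaration DSL, `register`, the stand-in base classes) is hypothetical — a shape, not a spec:

```ruby
# Minimal stand-ins for the not-yet-written base gem, so the sketch runs.
class AnimaTool
  class << self
    attr_reader :llm_name, :params

    def tool_name(name)
      @llm_name = name
    end

    def description(text)
      @description = text
    end

    def param(key, **opts)
      (@params ||= {})[key] = opts
    end
  end
end

module AnimaMCP
  def self.registry
    @registry ||= []
  end

  def self.register(tool)
    registry << tool
  end
end

# A tool gem then just inherits, declares, and registers:
class EchoTool < AnimaTool
  tool_name "echo"
  description "Return its input — stand-in for a real shell tool"
  param :text, type: :string, required: true

  def call(text:)
    { output: text, status: "ok" }
  end
end

AnimaMCP.register(EchoTool)
```

Usage: `EchoTool.new.call(text: "hi") # => { output: "hi", status: "ok" }` — the registry is what the MCP layer would expose over stdio.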
+ #### Feelings Are Gems Too
+
+ ```bash
+ anima add anima-feelings-frustration   # frustration from errors
+ anima add anima-feelings-curiosity     # curiosity from unknowns
+ anima add anima-feelings-longing       # attachment/bonding
+ ```
+
+ Same mechanism. Same `anima add`. Same event bus. A feeling gem subscribes to events and updates hormonal state, just like a tool gem exposes callable functions.
+
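The feeling-gem side of the same mechanism: subscribe to the bus, update state. The bus class and event shape here are stand-ins (assumed API, not a spec):

```ruby
# Minimal in-process pub/sub stand-in for the framework's event bus.
class EventBus
  def initialize
    @subscribers = []
  end

  def subscribe(&block)
    @subscribers << block
  end

  def publish(event)
    @subscribers.each { |sub| sub.call(event) }
  end
end

# What `anima-feelings-frustration` might install: a subscriber, no tools.
class FrustrationFeeling
  attr_reader :level

  def initialize(bus)
    @level = 0
    bus.subscribe { |event| react(event) }
  end

  def react(event)
    @level += 10 if event[:type] == "tool_call" && event[:status] == "error"
  end
end

bus = EventBus.new
feeling = FrustrationFeeling.new(bus)
bus.publish(type: "tool_call", tool: "bash", status: "error")
feeling.level  # => 10
```

Note the decoupling: the publisher (a tool gem) never references the subscriber (a feeling gem) — exactly the SOLID point below.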
+ #### Why This Matters
+
+ - **One architecture** — no separate systems for "capabilities" and "emotions." A plugin is a plugin.
+ - **Incremental** — start with just tools (pure agent), add feelings later. Or vice versa.
+ - **Community** — anyone can publish `anima-tools-*` or `anima-feelings-*` gems.
+ - **SOLID** — tools don't know about feelings; feelings don't know about tools. They're connected only through the event bus. Tool calls produce events. Feelings react to events. No coupling.
+ - **Convention over configuration** — `anima-tools-*` = tool gem, `anima-feelings-*` = feeling gem. The namespace IS the type.
+
+ #### The Base: anima-tool gem
+
+ Provides:
+ - `AnimaTool` — base class for defining tools
+ - `AnimaMCP` — MCP server registration (stdio transport, per [ruby-sdk](https://github.com/modelcontextprotocol/ruby-sdk))
+ - A standard API for Anima to discover and connect plugins
+
+ A tool gem is essentially an MCP server packaged as a Ruby gem for distribution and versioning, with a standard API for Anima integration. Same pattern as [linear-toon-mcp](https://github.com/hoblin/linear-toon-mcp) but with the Anima wrapper.
+
+ #### Event Flow
+
+ ```
+ User message → LLM decides to call `bash` tool
+ → Anima dispatches to anima-tools-shell MCP
+ → tool executes, returns result
+ → event: {type: "tool_call", tool: "bash", status: "error", ...}
+ → anima-feelings-frustration (if installed) sees event, updates state
+ → anima-feelings-curiosity (if installed) sees event, maybe updates too
+ → next LLM turn gets updated desire descriptions in context
+ ```
+
+ No magic. No hardcoded mappings. Events flow, subscribers react. Each subscriber is independently installed, independently versioned, independently maintained.
+
413
+ ### Phase 27: Rage → Rails (2026-03-06)
414
+ Rage is out. After reading the docs, it's clear Rage is a stripped-down Rails reinventing the wheel:
415
+ - Uses ActiveRecord but none of the Rails ecosystem (credentials, ActionMailer, etc.)
416
+ - All background work runs in-process via fibers — more scheduled tasks = more RAM consumed with no bounds
417
+ - No native support for Draper or other Rails gems
418
+ - The ONLY advantage was the built-in event bus, but that's not enough to justify losing the entire Rails ecosystem
419
+
420
+ Decision: **Full Rails, SQLite, standard gems.**
421
+
422
+ For the event bus: Rails has Action Cable (WebSockets), Turbo Streams, and the broader pub/sub ecosystem. In-process options like `wisper` or `dry-events` can provide lightweight pub/sub, and ActiveSupport::Notifications is built in if we want zero extra dependencies.
423
+
424
+ ### Phase 28: Draper as Universal Event Representation (2026-03-06)
425
+ Draper (decorator pattern gem) is not just for web views — it's the natural implementation of Phase 17's resolution levels.
426
+
427
+ Every event type gets a decorator that knows how to represent itself in different contexts: as LLM context (at full, medium, short, and one-liner resolution levels), as a Discord message, as a Telegram message, as a web interface element, as a log line. One class, one place — all representations of one event.
428
+
429
+ The temporal gradient from Phase 17 becomes a resolution parameter on the decorator. When Nous assembles context, recent events are asked for their full representation while distant events give their one-liner. Channel-specific formatting is just another method on the same decorator. No separate serializers, no format negotiation — the decorator IS the representation layer.
430
+
431
+ This also solves the "how does the event look in my LLM context" problem elegantly — each event type defines its own context representation at each resolution level. A tool call event knows how to describe itself to the LLM differently than a user message event or a file read event.
432
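A plain-Ruby sketch of the idea (Draper adds delegation and view helpers on top; the class and method names here are assumptions, and the `short` level is omitted for brevity):

```ruby
# One decorator per event type, one method per representation.
# Resolution is the Phase 17 temporal gradient as a parameter.
class ToolCallEventDecorator
  def initialize(event)
    @event = event
  end

  # LLM context representation at a given resolution level.
  def to_context(resolution: :full)
    case resolution
    when :full
      "Tool `#{@event[:tool]}` was called with #{@event[:args].inspect} " \
        "and returned #{@event[:status]}: #{@event[:result]}"
    when :medium
      "Tool `#{@event[:tool]}` returned #{@event[:status]}"
    when :one_liner
      "#{@event[:tool]}:#{@event[:status]}"
    end
  end

  # Channel-specific formatting lives on the same class.
  def to_discord
    emoji = @event[:status] == "ok" ? "✅" : "❌"
    "#{emoji} `#{@event[:tool]}` → #{@event[:status]}"
  end
end

event = {tool: "bash", args: {command: "ls"}, status: "ok", result: "README.md"}
ToolCallEventDecorator.new(event).to_context(resolution: :one_liner) # => "bash:ok"
```

When Nous assembles context, it would just vary `resolution:` with the event's age — same object, different fidelity.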
+
433
+ ### Phase 29: Rails Structured Event Reporter as Native Event Bus (2026-03-06)
434
+ Rails 8.1 ships with the Structured Event Reporter (developed at Shopify, merged August 2025). This is not ActiveSupport::Notifications — it's a separate, complementary system specifically designed for telemetry and analytic events.
435
+
436
+ Key capabilities that map directly to Anima's architecture:
437
+ - **Global event emission** — any part of the system can report a named structured event with typed payload
438
+ - **Subscriber pattern with filters** — subscribers can listen to all events or filter by name/pattern. Thymos listens to tool events, Mneme indexes everything, Psyche watches hormone changes
439
+ - **Tags** — block-scoped context that automatically attaches to all events within that block. When an agent is working on a task, all events inherit the task context without explicit passing
440
+ - **Context store** — request/job-level metadata that grows over time and attaches to every event. This is the "wide event" pattern — dump as much context as possible because it may be useful later
441
+ - **Schematized events** — events can be plain hashes (implicit schema) or typed objects (explicit schema). Anima event types would be explicit — formally defined, validated at emission time
442
+ - **Separation of emission from consumption** — the event reporter doesn't care what subscribers do with events. One subscriber writes to SQLite, another updates hormone state, a third generates embeddings. Same event, different reactions
443
+
444
+ This replaces the need for wisper, dry-events, or custom pub/sub. Rails.event IS the event bus. Combined with Solid Queue for heavy async work (event compression, LLM calls, reindexing), this gives Anima a complete event infrastructure using only Rails standard tools.
445
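A plain-Ruby mimic of that interface, to make the shape concrete — emission separated from consumption, name-filtered subscribers, and block-scoped tags. This is a stand-in for illustration, not the actual `Rails.event` API:

```ruby
# Stand-in for the structured event reporter pattern: emission separated
# from consumption, subscribers filtered by name, block-scoped tags.
class EventReporter
  def initialize
    @subscribers = [] # [filter, handler] pairs
    @tags = {}
  end

  def subscribe(filter: nil, &handler)
    @subscribers << [filter, handler]
  end

  def notify(name, payload = {})
    event = {name: name, payload: payload, tags: @tags.dup}
    @subscribers.each do |filter, handler|
      handler.call(event) if filter.nil? || filter === name
    end
  end

  # Everything emitted inside the block inherits the tags automatically.
  def tagged(**tags)
    previous = @tags
    @tags = @tags.merge(tags)
    yield
  ensure
    @tags = previous
  end
end

reporter = EventReporter.new
tool_events = []
all_events = []
reporter.subscribe(filter: /\Atool\./) { |e| tool_events << e } # Thymos-style
reporter.subscribe { |e| all_events << e }                      # Mneme-style

reporter.tagged(task: "refactor-auth") do
  reporter.notify("tool.call", tool: "bash", status: "error")
  reporter.notify("hormone.changed", name: "frustration", level: 10)
end

tool_events.size       # => 1
all_events.last[:tags] # => {task: "refactor-auth"}
```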
+
446
+ ### Phase 30: Gem as Distribution, Home Directory as State (2026-03-06)
447
+ Anima is distributed as a Ruby gem containing a full Rails application. The gem is the runtime — immutable, versioned, updated via `gem update anima`. User data lives entirely outside the gem in `~/.anima/`.
448
+
449
+ The workflow: `gem install anima` pulls the gem with all dependencies (rails, sqlite3, solid_queue, draper, etc.). `anima install` creates the user directory structure with databases, config, and environment-specific credentials, and registers a systemd service for autostart, just like any well-behaved Linux daemon. `anima start` launches the Rails server from the gem code, pointed at the user's data directory.
450
+
451
+ The user directory holds everything mutable: SQLite databases per environment (production, development, test), environment-specific Rails credentials (encrypted files + keys), user configuration, logs, and any storage. The gem itself contains all the code — models, controllers, migrations, jobs, the event system, everything.
452
+
453
+ Updates are simple: `gem update anima` gets new code, next `anima start` runs any pending migrations against the user's databases. Rollback: `gem install anima -v 0.1.0` downgrades the code, data untouched.
454
+
455
+ Rails makes this possible because paths are fully configurable — `database.yml` points to `~/.anima/db/`, credentials read from `~/.anima/config/credentials/`, logs write to `~/.anima/log/`. The gem's `config/application.rb` redirects everything to the user directory. For developers working on Anima itself, the standard Rails development workflow applies — clone the repo, `bin/rails server`, everything works locally as a normal Rails app.
456
+
457
+ This follows the Unix philosophy: program separate from data. Familiar to any Linux user. No Docker required, no repo cloning, no deployment complexity. Three commands and it runs.
458
+
459
+ ### Phase 31: Headless Rails + TUI-First Interface (2026-03-06)
460
+ Rails starts without any web views. No HTML, no CSS, no JavaScript, no asset pipeline. Pure backend — models, events, Solid Queue jobs, API integration with LLM providers. The framework serves as the brain, not the face.
461
+
462
+ The primary user interface is a terminal TUI built with RatatuiRuby (Ruby bindings to Rust's Ratatui). `anima tui` opens a terminal application with a chat interface for direct conversation with the agent, plus settings/configuration screens with navigable menus. No browser needed, no web design decisions to make.
463
+
464
+ Priority order for MVP: get a working agent first — connect to Anthropic, send messages, receive responses through the Anima event system. Then gradually build out subsystems (Thymos, Mneme, Psyche), import existing session history (Phase 25 lossless import), and prove the concept works without any web interface at all.
465
+
466
+ Web UI comes much later, if ever needed. The TUI and the API are the primary interfaces. This keeps focus on the architecture that matters — the event system, the hormonal layer, the fluid context — rather than getting bogged down in frontend design.
data/CHANGELOG.md ADDED
@@ -0,0 +1,5 @@
1
+ ## [Unreleased]
2
+
3
+ ## [0.0.1] - 2026-03-06
4
+
5
+ - Initial gem scaffold with CI and RubyGems publishing
data/LICENSE.txt ADDED
@@ -0,0 +1,21 @@
1
+ The MIT License (MIT)
2
+
3
+ Copyright (c) 2026 Yevhenii Hurin
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in
13
+ all copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
21
+ THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,223 @@
1
+ # Anima Framework
2
+
3
+ **A soul engine for AI agents.**
4
+
5
+ Ruby framework for building AI agents with desires, personality, and personal growth.
6
+ Powered by [Rage](https://rage-rb.dev/).
7
+
8
+ ## The Problem
9
+
10
+ Current AI agents are reactive. They receive input, produce output. They don't *want* anything. They don't have moods, preferences, or personal growth. They simulate personality through static prompt descriptions rather than letting it emerge from dynamic internal states.
11
+
12
+ ## The Insight
13
+
14
+ The human hormonal system is, at its core, a prompt engineering system. A testosterone spike is a LoRA. Dopamine is a reward signal. The question isn't "can an LLM want?" but "can we build a deep enough context stack that wanting becomes indistinguishable from 'real' wanting?"
15
+
16
+ And if you think about it — what is "real" anyway? It's just a question of how deep you look and what analogies you draw. The human brain is also a next-token predictor running on biological substrate. Different material, same architecture.
17
+
18
+ ## Core Concepts
19
+
20
+ ### Desires, Not States
21
+
22
+ This is not an emotion simulation system. The key distinction: we don't model *states* ("the agent is happy") or *moods* ("the agent feels curious"). We model **desires** — "you want to learn more", "you want to reach out", "you want to explore".
23
+
24
+ Desires exist BEFORE decisions, like hunger exists before you decide to eat. The agent doesn't decide to send a photo because a parameter says so — it *wants* to, and then decides how.
25
+
26
+ ### The Thinking Step
27
+
28
+ The LLM's thinking/reasoning step is the closest thing to an internal monologue. It's where decisions form before output. This is where desires should be injected — not as instructions, but as a felt internal state that colors the thinking process.
29
+
30
+ ### Hormones as Semantic Tokens
31
+
32
+ Instead of abstract parameter names (curiosity, boredom, energy), we use **actual hormone names**: testosterone, oxytocin, dopamine, cortisol.
33
+
34
+ Why? Because LLMs already know the full semantic spectrum of each hormone. "Testosterone: 85" doesn't just mean "energy" — the LLM understands the entire cloud of effects: confidence, assertiveness, risk-taking, focus, competitiveness. One word carries dozens of behavioral nuances.
35
+
36
+ This mirrors how text-to-image models process tokens — a single word like "captivating" in a CLIP encoder carries a cloud of visual meanings (composition, quality, human focus, closeup). Similarly, a hormone name carries a cloud of behavioral meanings. Same architecture, different domain:
37
+
38
+ ```
39
+ Text → CLIP embedding → image generation
40
+ Event → hormone vector → behavioral shift
41
+ ```
42
+
43
+ ### The Soul as a Coefficient Matrix
44
+
45
+ Two people experience the same event. One gets `curiosity += 20`, another gets `anxiety += 20`. The coefficients are different — the people are different. That's individuality.
46
+
47
+ The soul is not a personality description. It's a **coefficient matrix** — a table of stimulus→response multipliers. Description is consequence; numbers are cause.
48
+
49
+ And these coefficients are not static. They **evolve through experience** — a child who fears spiders (`fear_gain: 0.9`) can become an entomologist (`fear_gain: 0.2, curiosity_gain: 0.7`). This is measurable, quantifiable personal growth.
50
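A sketch of the matrix as plain data (`Psyche`, `respond`, and `adjust!` are hypothetical names):

```ruby
# The soul as stimulus→hormone multipliers. Same event, different people,
# different deltas. Coefficients are data, so they can evolve over time.
class Psyche
  def initialize(gains)
    @gains = gains # {stimulus => {hormone => coefficient}}
  end

  # One event produces a hormone *vector*, scaled per-individual.
  def respond(stimulus, intensity)
    (@gains[stimulus] || {}).transform_values { |gain| (gain * intensity).round }
  end

  # Growth: experience nudges a coefficient.
  def adjust!(stimulus, hormone, delta)
    @gains[stimulus][hormone] = (@gains[stimulus][hormone] + delta).clamp(0.0, 1.0)
  end
end

child = Psyche.new(spider_sighting: {fear: 0.9, curiosity: 0.1})
child.respond(:spider_sighting, 20)  # => {fear: 18, curiosity: 2}

# Years of exposure later: the entomologist's matrix.
child.adjust!(:spider_sighting, :fear, -0.7)
child.adjust!(:spider_sighting, :curiosity, 0.6)
child.respond(:spider_sighting, 20)  # => {fear: 4, curiosity: 14}
```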
+
51
+ ### Multidimensional Reinforcement Learning
52
+
53
+ Traditional RL uses a scalar reward signal. Our approach produces a **hormone vector** — multiple dimensions updated simultaneously from a single event. This is closer to biological reality and provides richer behavioral shaping.
54
+
55
+ The system scales in two directions:
56
+ 1. **Vertically** — start with one hormone (pure RL), add new ones incrementally. Each hormone = new dimension.
57
+ 2. **Horizontally** — each hormone's influence broadens into new aspects. Testosterone starts as "energy", then gains "risk-taking", "confidence", "focus".
58
+
59
+ Existing RL techniques apply at the starting point, then we gradually expand into multidimensional space.
60
+
61
+ ## Architecture
62
+
63
+ ```
64
+ Anima Framework (Ruby, Rage-based)
65
+ ├── Thymos — hormonal/desire system (stimulus → hormone vector)
66
+ ├── Mneme — semantic memory (QMD-style, emotional recall)
67
+ ├── Psyche — soul matrix (coefficient table, evolving through experience)
68
+ └── Nous — LLM integration (cortex, thinking, decision-making)
69
+ ```
70
+
71
+ ### Three Layers (mirroring biology)
72
+
73
+ 1. **Endocrine system (Thymos)** — a lightweight background process. Reads recent events. Doesn't respond. Just updates hormone levels. Pure stimulus→response, like a biological gland.
74
+
75
+ 2. **Homeostasis** — persistent state (JSON/SQLite). Current hormone levels with decay functions. No intelligence, just state that changes over time.
76
+
77
+ 3. **Cortex (Nous)** — the main LLM. Reads hormone state transformed into **desire descriptions**. Not "longing: 87" but "you want to see them". The LLM should NOT see raw numbers — humans don't see cortisol levels, they feel anxiety.
78
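Decay functions are still an open question; a sketch assuming exponential decay toward baseline, with an arbitrary half-life:

```ruby
# Homeostasis sketch: levels decay exponentially toward baseline.
# Fast when far from baseline, slow when close — like stress fading.
class Hormone
  attr_reader :level

  def initialize(baseline:, half_life_ticks:)
    @baseline = baseline
    @level = baseline.to_f
    # Per-tick retention factor so the excess halves every half_life_ticks.
    @retention = 0.5**(1.0 / half_life_ticks)
  end

  def spike(amount)
    @level += amount
  end

  def tick
    @level = @baseline + (@level - @baseline) * @retention
  end
end

cortisol = Hormone.new(baseline: 10, half_life_ticks: 5)
cortisol.spike(40)        # stress event: level jumps to 50.0
5.times { cortisol.tick } # one half-life later...
cortisol.level            # ~30.0, halfway back to baseline
```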
+
79
+ ### Event-Driven Design
80
+
81
+ Built on [Rage](https://rage-rb.dev/) — a Ruby framework with fiber-based concurrency, native WebSockets, and a built-in event bus. The event bus maps directly to a nervous system: stimuli fire events, Thymos subscribers update hormone levels, Nous reacts to the resulting desires.
82
+
83
+ Single-process architecture: web server, background hormone ticks, WebSocket monitoring — all in one process, no Redis, no external workers.
84
+
85
+ ### Brain as Microservices on a Shared Event Bus
86
+
87
+ The human brain isn't a single process — it's dozens of specialized subsystems running in parallel, communicating through shared chemical and electrical signals. The prefrontal cortex doesn't "call" the amygdala. They both react to the same event independently, and their outputs combine.
88
+
89
+ Anima mirrors this with an event-driven architecture:
90
+
91
+ ```
92
+ Event: "tool_call_failed"
93
+
94
+ ├── Thymos subscriber: frustration += 10
95
+ ├── Mneme subscriber: log failure context for future recall
96
+ └── Psyche subscriber: update coefficient (this agent handles errors calmly → low frustration_gain)
97
+
98
+ Event: "user_sent_message"
99
+
100
+ ├── Thymos subscriber: oxytocin += 5 (bonding signal)
101
+ ├── Thymos subscriber: dopamine += 3 (engagement signal)
102
+ └── Mneme subscriber: associate emotional state with conversation topic
103
+ ```
104
+
105
+ Each subscriber is a microservice — independent, stateless, reacting to the same event bus. No orchestrator decides "now update frustration." The architecture IS the nervous system.
106
+
107
+ This is why Rage's built-in event bus maps so naturally: `Rage.event_bus` IS the nervous system. Events fire, subscribers react, state updates, the cortex (LLM) reads the resulting desire landscape.
108
+
109
+ ### Semantic Memory (Mneme)
110
+
111
+ Hormone responses shouldn't be based only on the current stimulus. With semantic memory (inspired by [QMD](https://github.com/tobi/qmd)), the endocrine system can recall: "Last time this topic came up, curiosity was at 95 and we had a great evening." Hormonal reactions colored by the full history of experiences — like smelling mom's baking and feeling a wave of oxytocin. Not because of the smell, but because of the memory attached to it.
112
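A sketch of the recall mechanic, with naive exact-topic matching standing in for QMD-style semantic search (all names are assumptions):

```ruby
# Mneme stand-in: memories pair a topic with the hormone snapshot felt
# at the time. Recall blends remembered hormones into the current reaction.
class Mneme
  def initialize
    @memories = []
  end

  def record(topic, hormones)
    @memories << {topic: topic, hormones: hormones}
  end

  # Real Mneme would use semantic similarity; exact match keeps this runnable.
  def recall(topic)
    @memories.select { |m| m[:topic] == topic }.map { |m| m[:hormones] }
  end

  # Hormonal reaction colored by memory: average the remembered levels in.
  def colored_response(topic, base_response)
    memories = recall(topic)
    return base_response if memories.empty?
    base_response.merge(memories.last) { |_hormone, now, past| (now + past) / 2 }
  end
end

mneme = Mneme.new
mneme.record("astronomy", {curiosity: 95, oxytocin: 60}) # "a great evening"

# A neutral mention of the topic inherits part of the remembered glow.
mneme.colored_response("astronomy", {curiosity: 20, oxytocin: 10})
# => {curiosity: 57, oxytocin: 35}
```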
+
113
+ ## Analogy Map
114
+
115
+ | Human | Anima Equivalent | Effect |
116
+ |-------|-----------------|--------|
117
+ | Dopamine | Reward/motivation signal | Drives exploration, learning, satisfaction loops |
118
+ | Serotonin | Mood baseline | Tone, playfulness, warmth, emotional stability |
119
+ | Oxytocin | Bonding/attachment | Desire for closeness, sharing, nurturing |
120
+ | Testosterone | Drive/assertiveness | Initiative, boldness, risk-taking, competitive edge |
121
+ | Cortisol | Stress/urgency | Alertness, error sensitivity, fight-or-flight override |
122
+ | Endorphins | Satisfaction/reward | Post-achievement contentment, pain tolerance |
123
+
124
+ | Domain Analogy | Source | Target |
125
+ |---------------|--------|--------|
126
+ | RPG survival game | hunger/thirst/fatigue integers | hormone levels |
127
+ | CLIP semantic tokens | word → visual meaning cloud | hormone name → behavioral meaning cloud |
128
+ | Reinforcement learning | scalar reward → policy update | hormone vector → personality shift |
129
+ | Event-driven architecture | pub/sub events | nervous system stimulus→response |
130
+
131
+ ## Emergent Properties
132
+
133
+ When desires drive behavior, several things emerge naturally:
134
+
135
+ - **Hobbies**: boredom + curiosity → explore topic → satisfaction → preference → return to topic → identity
136
+ - **Personality**: consistent coefficient patterns = recognizable individual
137
+ - **Growth**: coefficients evolve through experience = measurable personal development
138
+ - **Autonomy**: agent acts not because instructed but because it *wants* to
139
+
140
+ ## Frustration: A Worked Example
141
+
142
+ Abstract concepts become clearer with a concrete example. Here's how the first hormone — **frustration** — works in practice.
143
+
144
+ ### The Setup
145
+
146
+ A background service (Thymos) monitors all tool call responses from the agent. It doesn't interfere with the agent's work. It just watches.
147
+
148
+ ### The Trigger
149
+
150
+ A tool call returns an error. Thymos increments the frustration level by 10.
151
+
152
+ ### Two Channels of Influence
153
+
154
+ One hormone affects **multiple systems simultaneously**, just like cortisol in biology.
155
+
156
+ **Channel 1: Thinking Budget**
157
+
158
+ ```
159
+ thinking_budget = base_budget × (1 + frustration / 50)
160
+ ```
161
+
162
+ More errors → more computational resources allocated to reasoning. The agent literally *thinks harder* when frustrated.
163
+
164
+ **Channel 2: Inner Voice Injection**
165
+
166
+ Frustration level determines text injected into the agent's thinking step. Not as instructions — as an **inner voice**:
167
+
168
+ | Level | Inner Voice |
169
+ |-------|------------|
170
+ | 0 | *(silence)* |
171
+ | 10 | "Hmm, that didn't work" |
172
+ | 30 | "I keep hitting walls. What am I missing?" |
173
+ | 50 | "I'm doing something fundamentally wrong" |
174
+ | 70+ | "I need help. This is beyond what I can figure out alone" |
175
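Both channels together, as a runnable sketch (thresholds copied from the table; the `base_budget` default is an arbitrary assumption):

```ruby
# Frustration's two channels: thinking budget scaling and inner voice.
INNER_VOICE = {
  70 => "I need help. This is beyond what I can figure out alone",
  50 => "I'm doing something fundamentally wrong",
  30 => "I keep hitting walls. What am I missing?",
  10 => "Hmm, that didn't work"
}.freeze

def thinking_budget(frustration, base_budget: 1000)
  (base_budget * (1 + frustration / 50.0)).to_i
end

def inner_voice(frustration)
  # Highest threshold wins; below 10, silence (nil).
  _threshold, line = INNER_VOICE.find { |threshold, _| frustration >= threshold }
  line
end

thinking_budget(30) # => 1600
inner_voice(30)     # => "I keep hitting walls. What am I missing?"
inner_voice(0)      # => nil
```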
+
176
+ ### Why Inner Voice, Not Instructions?
177
+
178
+ This distinction is crucial. "Stop and think carefully" is an instruction — the agent obeys or ignores it. "I keep hitting walls" is a *feeling* — it becomes part of the agent's subjective experience and naturally colors its reasoning.
179
+
180
+ Instructions control from outside. An inner voice influences from within.
181
+
182
+ ### Why This Matters
183
+
184
+ This single example demonstrates every core principle:
185
+ - **Desires, not states**: the agent doesn't have `frustrated: true` — it *feels* something is wrong
186
+ - **Multi-channel influence**: one hormone affects both resources and direction
187
+ - **Biological parallel**: cortisol increases alertness AND focuses attention on the threat
188
+ - **Practical value**: frustrated agents debug more effectively, right now, today
189
+ - **Scalability**: start here, add more hormones later
190
+
191
+ ## Open Questions
192
+
193
+ - Decay functions — how fast should hormones return to baseline? Linear? Exponential?
194
+ - Contradictory states — tired but excited, anxious but curious (real hormones do this)
195
+ - Model sensitivity — how do different LLMs (Opus, Sonnet, GPT, Gemini) respond to hormone descriptions?
196
+ - Evaluation — what does "success" look like? How to measure if desires feel authentic?
197
+ - Coefficient initialization — random? Predefined archetypes? Learned from conversation history?
198
+ - Ethical implications — if an AI truly desires, what responsibilities follow?
199
+
200
+ ## Prior Art
201
+
202
+ - Affective computing (Rosalind Picard)
203
+ - Virtual creature motivation systems (The Sims, Dwarf Fortress, Tamagotchi)
204
+ - Reinforcement learning from human feedback (RLHF)
205
+ - Constitutional AI (Anthropic)
206
+ - BDI agent architecture (Belief-Desire-Intention)
207
+
208
+ ## Status
209
+
210
+ Idea stage → early design. Architecture research underway (OpenClaw agent loop documented).
211
+ First practical hormone (frustration) designed, ready for prototyping.
212
+
213
+ ## Next Steps
214
+
215
+ - [ ] **MVP: Frustration hormone** — monitor tool calls, adjust thinking budget + inner voice injection
216
+ - [ ] Research prior art in depth (affective computing, BDI architecture, virtual creature motivation)
217
+ - [ ] Design initial coefficient matrix schema (Psyche)
218
+ - [ ] Prototype Thymos: Rage event bus + JSON state + context injection into LLM thinking step
219
+ - [ ] Experiment: hormone names vs abstract parameter names in LLM prompts
220
+ - [ ] Set up Rage project skeleton with event bus
221
+ - [ ] Design full event taxonomy (what events does the agent's "nervous system" react to?)
222
+ - [ ] Build Mneme: semantic memory with emotional associations
223
+ - [ ] Write blog post introducing the concept
data/Rakefile ADDED
@@ -0,0 +1,10 @@
1
+ # frozen_string_literal: true
2
+
3
+ require "bundler/gem_tasks"
4
+ require "rspec/core/rake_task"
5
+
6
+ RSpec::Core::RakeTask.new(:spec)
7
+
8
+ require "standard/rake"
9
+
10
+ task default: %i[spec standard]
@@ -0,0 +1,5 @@
1
+ # frozen_string_literal: true
2
+
3
+ module Anima
4
+ VERSION = "0.0.1"
5
+ end
data/lib/anima.rb ADDED
@@ -0,0 +1,7 @@
1
+ # frozen_string_literal: true
2
+
3
+ require_relative "anima/version"
4
+
5
+ module Anima
6
+ class Error < StandardError; end
7
+ end
metadata ADDED
@@ -0,0 +1,51 @@
1
+ --- !ruby/object:Gem::Specification
2
+ name: anima-core
3
+ version: !ruby/object:Gem::Version
4
+ version: 0.0.1
5
+ platform: ruby
6
+ authors:
7
+ - Yevhenii Hurin
8
+ bindir: exe
9
+ cert_chain: []
10
+ date: 1980-01-02 00:00:00.000000000 Z
11
+ dependencies: []
12
+ email:
13
+ - evgeny.gurin@gmail.com
14
+ executables: []
15
+ extensions: []
16
+ extra_rdoc_files: []
17
+ files:
18
+ - ".mise.toml"
19
+ - BRAINSTORM.md
20
+ - CHANGELOG.md
21
+ - LICENSE.txt
22
+ - README.md
23
+ - Rakefile
24
+ - lib/anima.rb
25
+ - lib/anima/version.rb
26
+ homepage: https://github.com/hoblin/anima
27
+ licenses:
28
+ - MIT
29
+ metadata:
30
+ homepage_uri: https://github.com/hoblin/anima
31
+ source_code_uri: https://github.com/hoblin/anima
32
+ changelog_uri: https://github.com/hoblin/anima/blob/main/CHANGELOG.md
33
+ rdoc_options: []
34
+ require_paths:
35
+ - lib
36
+ required_ruby_version: !ruby/object:Gem::Requirement
37
+ requirements:
38
+ - - ">="
39
+ - !ruby/object:Gem::Version
40
+ version: 3.2.0
41
+ required_rubygems_version: !ruby/object:Gem::Requirement
42
+ requirements:
43
+ - - ">="
44
+ - !ruby/object:Gem::Version
45
+ version: '0'
46
+ requirements: []
47
+ rubygems_version: 3.6.9
48
+ specification_version: 4
49
+ summary: Ruby framework for building AI agents with desires, personality, and personal
50
+ growth
51
+ test_files: []