anima-core 1.0.1 → 1.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.gitattributes +1 -0
- data/.reek.yml +61 -0
- data/README.md +202 -116
- data/anima-core.gemspec +4 -1
- data/app/channels/session_channel.rb +44 -10
- data/app/decorators/agent_message_decorator.rb +6 -0
- data/app/decorators/event_decorator.rb +41 -7
- data/app/decorators/tool_call_decorator.rb +66 -5
- data/app/decorators/tool_decorator.rb +57 -0
- data/app/decorators/tool_response_decorator.rb +35 -5
- data/app/decorators/user_message_decorator.rb +6 -0
- data/app/decorators/web_get_tool_decorator.rb +102 -0
- data/app/jobs/agent_request_job.rb +95 -20
- data/app/jobs/mneme_job.rb +51 -0
- data/app/jobs/passive_recall_job.rb +29 -0
- data/app/models/concerns/event/broadcasting.rb +18 -0
- data/app/models/event.rb +10 -0
- data/app/models/goal.rb +27 -0
- data/app/models/goal_pinned_event.rb +11 -0
- data/app/models/pinned_event.rb +41 -0
- data/app/models/session.rb +335 -6
- data/app/models/snapshot.rb +76 -0
- data/config/initializers/event_subscribers.rb +14 -3
- data/config/initializers/fts5_schema_dump.rb +21 -0
- data/db/migrate/20260316094817_add_interrupt_requested_to_sessions.rb +5 -0
- data/db/migrate/20260321080000_create_mneme_schema.rb +32 -0
- data/db/migrate/20260321120000_create_pinned_events.rb +27 -0
- data/db/migrate/20260321140000_create_events_fts_index.rb +77 -0
- data/db/migrate/20260321140100_add_recalled_event_ids_to_sessions.rb +10 -0
- data/lib/agent_loop.rb +67 -18
- data/lib/analytical_brain/runner.rb +159 -84
- data/lib/analytical_brain/tools/assign_nickname.rb +76 -0
- data/lib/analytical_brain/tools/finish_goal.rb +6 -1
- data/lib/anima/cli.rb +34 -1
- data/lib/anima/config_migrator.rb +205 -0
- data/lib/anima/installer.rb +13 -130
- data/lib/anima/settings.rb +42 -1
- data/lib/anima/version.rb +1 -1
- data/lib/events/bounce_back.rb +37 -0
- data/lib/events/subscribers/agent_dispatcher.rb +29 -0
- data/lib/events/subscribers/persister.rb +17 -0
- data/lib/events/subscribers/subagent_message_router.rb +102 -0
- data/lib/events/subscribers/transient_broadcaster.rb +36 -0
- data/lib/llm/client.rb +99 -14
- data/lib/mneme/compressed_viewport.rb +200 -0
- data/lib/mneme/l2_runner.rb +138 -0
- data/lib/mneme/passive_recall.rb +69 -0
- data/lib/mneme/runner.rb +254 -0
- data/lib/mneme/search.rb +150 -0
- data/lib/mneme/tools/attach_events_to_goals.rb +107 -0
- data/lib/mneme/tools/everything_ok.rb +24 -0
- data/lib/mneme/tools/save_snapshot.rb +68 -0
- data/lib/mneme.rb +29 -0
- data/lib/providers/anthropic.rb +57 -13
- data/lib/shell_session.rb +188 -59
- data/lib/tasks/fts5.rake +6 -0
- data/lib/tools/remember.rb +179 -0
- data/lib/tools/spawn_specialist.rb +21 -9
- data/lib/tools/spawn_subagent.rb +22 -11
- data/lib/tools/subagent_prompts.rb +20 -3
- data/lib/tools/think.rb +57 -0
- data/lib/tools/web_get.rb +15 -6
- data/lib/tui/app.rb +230 -127
- data/lib/tui/cable_client.rb +8 -0
- data/lib/tui/decorators/base_decorator.rb +165 -0
- data/lib/tui/decorators/bash_decorator.rb +20 -0
- data/lib/tui/decorators/edit_decorator.rb +19 -0
- data/lib/tui/decorators/read_decorator.rb +24 -0
- data/lib/tui/decorators/think_decorator.rb +36 -0
- data/lib/tui/decorators/web_get_decorator.rb +19 -0
- data/lib/tui/decorators/write_decorator.rb +19 -0
- data/lib/tui/flash.rb +139 -0
- data/lib/tui/formatting.rb +28 -0
- data/lib/tui/height_map.rb +93 -0
- data/lib/tui/message_store.rb +25 -1
- data/lib/tui/performance_logger.rb +90 -0
- data/lib/tui/screens/chat.rb +374 -109
- data/templates/config.toml +156 -0
- metadata +87 -4
- data/CHANGELOG.md +0 -79
- data/Gemfile +0 -17
- data/lib/tools/return_result.rb +0 -81
checksums.yaml
CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: dca02bfff536637c003d5f3bbce8dbe20992b7eeb25e6c51bdb4991a2803b538
+  data.tar.gz: ead68cc1bd03306a9eef644db2f15a81b7dbfd74bbe1f059e3f57bcfc0aaf77a
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 45f7f927d4f931b624db684e5f500c436cac44cf9dd9a004400b7219f1167e932b181dddac3793d4cd8f1885cf6401855ba6ca681c99d7cd902af8784e49cee2
+  data.tar.gz: 78e532c99e39f09732c9abe9588e467a96fb38cddaf7d1b8ba526971de1c890e73efd157876cb570eabad0cb0eb7c1f93faeef68c7128f7806f972ff98e2351c

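The SHA256/SHA512 entries above can be checked locally against a downloaded artifact using Ruby's standard library. A minimal sketch (the helper name and sample bytes are illustrative, not part of the gem):

```ruby
require "digest"

# Illustrative helper: compare an artifact's bytes against a checksums.yaml entry.
def checksum_matches?(data, expected_sha256)
  Digest::SHA256.hexdigest(data) == expected_sha256
end

data = "artifact bytes"
expected = Digest::SHA256.hexdigest(data)
puts checksum_matches?(data, expected)       # true
puts checksum_matches?("tampered", expected) # false
```

In practice one would pass `File.binread("anima-core-1.1.0.gem")`-style contents instead of a literal string.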
data/.gitattributes
ADDED

@@ -0,0 +1 @@
+spec/cassettes/**/* -diff linguist-generated=true

data/.reek.yml
CHANGED

@@ -15,16 +15,46 @@ detectors:
       - "Anima::Settings#get"
   # Rescue blocks naturally reference the error object more than self.
   # EnvironmentProbe assembles output from local data structures — not envy.
+  # Brain transcript builds from event collection — the method's entire purpose.
+  # ConfigMigrator text processing methods naturally reference local line arrays.
+  # ToolDecorator subclasses operate on the tool result — that's the pattern.
+  # Tool rescue blocks naturally reference the error object.
   FeatureEnvy:
     exclude:
       - "AnalyticalBrainJob#perform"
       - "EnvironmentProbe"
+      - "AnalyticalBrain::Runner#build_messages"
+      - "Anima::ConfigMigrator"
+      - "WebGetToolDecorator"
+      - "Tools::WebGet#validate_and_fetch"
+      # Remember tool renders events from other objects — formatting IS the job.
+      - "Tools::Remember"
+      # Event subscribers extract payload fields — inherent to the pattern.
+      - "Events::Subscribers::SubagentMessageRouter"
+      # Spawn tools orchestrate child session creation — references are the job.
+      - "Tools::SpawnSubagent#spawn_child"
+      - "Tools::SpawnSpecialist#spawn_child"
+      - "Tools::SpawnSpecialist#execute"
+      # Nickname assignment operates on child session and parent's children — inherent.
+      - "Tools::SubagentPrompts#assign_nickname_via_brain"
+      # Validation methods naturally reference the validated value more than self.
+      - "AnalyticalBrain::Tools::AssignNickname#validate"
   # Private helpers don't need instance state to be valid.
   # ActiveJob#perform is always a utility function by design.
+  # No-op tools (Think, EverythingIsReady) don't need instance state — by design.
+  # method_missing is a Ruby dispatch method, not a regular public method.
+  # Content-Type dispatch targets are stateless by design — they transform input,
+  # not instance state.
   UtilityFunction:
     public_methods_only: true
     exclude:
       - "AnalyticalBrainJob#perform"
+      - "PassiveRecallJob#perform"
+      - "Tools::Think#execute"
+      - "TUI::Formatting"
+      - "WebGetToolDecorator#method_missing"
+      - "WebGetToolDecorator#application_json"
+      - "WebGetToolDecorator#text_html"
   # Session model is the core domain object — methods grow naturally.
   # Mcp CLI accumulates subcommand helpers across add/remove/list/secrets.
   # EnvironmentProbe probes multiple orthogonal facets (OS, Git, project files).
@@ -34,6 +64,37 @@ detectors:
       - "Session"
       - "Anima::CLI::Mcp"
       - "EnvironmentProbe"
+      # Runner composes system prompt from modular sections — methods grow with responsibilities.
+      - "AnalyticalBrain::Runner"
+  # Decorators branch on tool type across 4 render modes — inherent to the pattern.
+  # Installer methods each guard idempotency with config_path.exist? — by design.
+  RepeatedConditional:
+    exclude:
+      - "ToolCallDecorator"
+      - "Anima::Installer"
+      # Runner checks session type to compose responsibilities — the core dispatch.
+      - "AnalyticalBrain::Runner"
+  # EventDecorator holds shared rendering constants (icons, markers, dispatch maps).
+  TooManyConstants:
+    exclude:
+      - "EventDecorator"
+  # Abstract base class methods declare parameters for the subclass contract.
+  UnusedParameters:
+    exclude:
+      - "ToolDecorator#call"
+  # Rescue blocks naturally call error.message in multiple catch clauses.
+  DuplicateMethodCall:
+    exclude:
+      - "Tools::WebGet#validate_and_fetch"
+      # Remember tool accesses event data for formatting — inherent to rendering.
+      - "Tools::Remember"
+      # Nickname validation checks parent_session for existence then queries — two calls, one guard.
+      - "AnalyticalBrain::Tools::AssignNickname#sibling_nickname_taken?"
+  # Method length is enforced by code review, not arbitrary line counts
+  # build_sections passes context through to sub-methods — inherent to assembly.
+  LongParameterList:
+    exclude:
+      - "Tools::Remember#build_sections"
   # Method length is enforced by code review, not arbitrary line counts
   TooManyStatements:
     enabled: false

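For readers unfamiliar with reek's config shape, the detector/exclude nesting reconstructed above parses as plain YAML. A small hand-made excerpt (not the full file) showing how the keys resolve:

```ruby
require "yaml"

# Hand-made excerpt mirroring the detector/exclude nesting used in .reek.yml.
config = YAML.safe_load(<<~YML)
  detectors:
    FeatureEnvy:
      exclude:
        - "AnalyticalBrainJob#perform"
        - "Tools::Remember"
    TooManyStatements:
      enabled: false
YML

puts config.dig("detectors", "FeatureEnvy", "exclude").inspect
# ["AnalyticalBrainJob#perform", "Tools::Remember"]
puts config.dig("detectors", "TooManyStatements", "enabled") # false
```

Exact indentation in the reconstruction above is an assumption; reek only requires that each detector's keys nest under `detectors:`.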
data/README.md
CHANGED

@@ -2,15 +2,26 @@
 
 [](https://opensource.org/licenses/MIT)
 
-**
+**Not a tool. An agent.**
 
-
+Every AI agent today is a tool pretending to be a person. One brain doing everything. A static context array that fills up and degrades. Sub-agents that start blind and reconstruct context from lossy summaries. A system prompt that says "you are a helpful assistant."
+
+Anima is different. It's built on the premise that if you want an agent — a real one — you need to solve the problems nobody else is solving.
+
+**A brain modeled after biology, not chat.** The human brain isn't one process — it's specialized subsystems on a shared signal bus. Anima's [analytical brain](https://blog.promptmaster.pro/posts/llms-have-adhd/) runs as a separate subconscious process, managing context, skills, and goals so the main agent can stay in flow. Not two brains — a microservice architecture where each process does one job well. More subsystems are coming.
+
+**Context that never degrades.** Other agents fill a static array until the model gets dumb. Anima assembles a fresh viewport over an event bus every iteration. No compaction. No lossy rewriting. Endless sessions. The [dumb zone](https://github.com/humanlayer/advanced-context-engineering-for-coding-agents/blob/main/ace-fca.md) never arrives — the analytical brain curates what the agent sees in real time.
+
+**Memory that works like memory.** Other systems bolt on memory as an afterthought — filing cabinets the agent has to consciously open mid-task. It never does; the truck is already moving. Anima's memory department ([Mneme](#semantic-memory-mneme)) runs as a third brain process on the event bus. It summarizes what's about to leave the viewport. It compresses short-term into long-term, like biological memory consolidating during sleep. It pins critical moments to active goals so exact instructions survive where summaries would lose nuance. And it recalls — automatically, passively — surfacing relevant older memories right after the soul, right before the present. The agent doesn't decide to remember. It just remembers.
+
+**Sub-agents that already know everything.** When Anima spawns a sub-agent, it inherits the parent's full event stream — every file read, every decision, every user message. No "let me summarize what I know." Lossless context. Zero wasted tool calls on rediscovery.
+
+**A soul the agent writes itself.** Anima's first session is birth. The agent wakes up, explores its world, meets its human, and writes its own identity. Not a personality description in a config file — a living document the agent authors and evolves. Always in context, always its own.
+
+Your agent. Your machine. Your rules. Anima runs locally as a headless Rails 8.1 app with a client-server architecture and terminal UI.
 
 ## Table of Contents
 
-- [The Problem](#the-problem)
-- [The Insight](#the-insight)
-- [Core Concepts](#core-concepts)
 - [Architecture](#architecture)
 - [Agent Capabilities](#agent-capabilities)
 - [Tools](#tools)
@@ -25,9 +36,13 @@ Your agent. Your machine. Your rules. Anima is an AI agent with desires, persona
 - [Event-Driven Design](#event-driven-design)
 - [Context as Viewport](#context-as-viewport-not-tape)
 - [Brain as Microservices](#brain-as-microservices-on-a-shared-event-bus)
-- [TUI View Modes](#tui-view-modes)
-- [Plugin Architecture](#plugin-architecture)
 - [Semantic Memory](#semantic-memory-mneme)
+- [TUI HUD & View Modes](#tui-hud--view-modes)
+- [Plugin Architecture](#plugin-architecture-planned)
+- [The Vision](#the-vision)
+- [The Problem](#the-problem)
+- [The Insight](#the-insight)
+- [Core Concepts](#core-concepts)
 - [Analogy Map](#analogy-map)
 - [Emergent Properties](#emergent-properties)
 - [Frustration: A Worked Example](#frustration-a-worked-example)
@@ -37,72 +52,23 @@ Your agent. Your machine. Your rules. Anima is an AI agent with desires, persona
 - [Development](#development)
 - [License](#license)
 
-## The Problem
-
-Current AI agents are reactive. They receive input, produce output. They don't *want* anything. They don't have moods, preferences, or personal growth. They simulate personality through static prompt descriptions rather than emerging it from dynamic internal states.
-
-## The Insight
-
-The human hormonal system is, at its core, a prompt engineering system. A testosterone spike is a LoRA. Dopamine is a reward signal. The question isn't "can an LLM want?" but "can we build a deep enough context stack that wanting becomes indistinguishable from 'real' wanting?"
-
-And if you think about it — what is "real" anyway? It's just a question of how deep you look and what analogies you draw. The human brain is also a next-token predictor running on biological substrate. Different material, same architecture.
-
-## Core Concepts
-
-### Desires, Not States
-
-This is not an emotion simulation system. The key distinction: we don't model *states* ("the agent is happy") or *moods* ("the agent feels curious"). We model **desires** — "you want to learn more", "you want to reach out", "you want to explore".
-
-Desires exist BEFORE decisions, like hunger exists before you decide to eat. The agent doesn't decide to send a photo because a parameter says so — it *wants* to, and then decides how.
-
-### The Thinking Step
-
-The LLM's thinking/reasoning step is the closest thing to an internal monologue. It's where decisions form before output. This is where desires should be injected — not as instructions, but as a felt internal state that colors the thinking process.
-
-### Hormones as Semantic Tokens
-
-Instead of abstract parameter names (curiosity, boredom, energy), we use **actual hormone names**: testosterone, oxytocin, dopamine, cortisol.
-
-Why? Because LLMs already know the full semantic spectrum of each hormone. "Testosterone: 85" doesn't just mean "energy" — the LLM understands the entire cloud of effects: confidence, assertiveness, risk-taking, focus, competitiveness. One word carries dozens of behavioral nuances.
-
-This mirrors how text-to-image models process tokens — a single word like "captivating" in a CLIP encoder carries a cloud of visual meanings (composition, quality, human focus, closeup). Similarly, a hormone name carries a cloud of behavioral meanings. Same architecture, different domain:
-
-```
-Text → CLIP embedding → image generation
-Event → hormone vector → behavioral shift
-```
-
-### The Soul as a Coefficient Matrix
-
-Two people experience the same event. One gets `curiosity += 20`, another gets `anxiety += 20`. The coefficients are different — the people are different. That's individuality.
-
-The soul is not a personality description. It's a **coefficient matrix** — a table of stimulus→response multipliers. Description is consequence; numbers are cause.
-
-And these coefficients are not static. They **evolve through experience** — a child who fears spiders (`fear_gain: 0.9`) can become an entomologist (`fear_gain: 0.2, curiosity_gain: 0.7`). This is measurable, quantifiable personal growth.
-
-### Multidimensional Reinforcement Learning
-
-Traditional RL uses a scalar reward signal. Our approach produces a **hormone vector** — multiple dimensions updated simultaneously from a single event. This is closer to biological reality and provides richer behavioral shaping.
-
-The system scales in two directions:
-1. **Vertically** — start with one hormone (pure RL), add new ones incrementally. Each hormone = new dimension.
-2. **Horizontally** — each hormone expands in aspects of influence. Testosterone starts as "energy", then gains "risk-taking", "confidence", "focus".
-
-Existing RL techniques apply at the starting point, then we gradually expand into multidimensional space.
-
 ## Architecture
 
 ```
 Anima (Ruby, Rails 8.1 headless)
-
-
+│
+│ Implemented:
+├── Nous — main LLM (cortex: thinking, decisions, tool use)
+├── Analytical — subconscious brain (skills, workflows, goals, naming)
 ├── Skills — domain knowledge bundles (Markdown, user-extensible)
 ├── Workflows — operational recipes for multi-step tasks
 ├── MCP — external tool integration (Model Context Protocol)
-├── Sub-agents — autonomous child sessions
-├──
-
-
+├── Sub-agents — autonomous child sessions with lossless context inheritance
+├── Mneme — memory department (summarization, compression, pinning, recall)
+│
+│ Designed:
+├── Thymos — hormonal/desire system (stimulus → hormone vector)
+└── Psyche — soul matrix (coefficient table, evolving individuality)
 ```
 
 ### Runtime Architecture
@@ -112,6 +78,7 @@ Brain Server (Rails + Puma) TUI Client (RatatuiRuby)
 ├── LLM integration (Anthropic) ├── WebSocket client
 ├── Agent loop + tool execution ├── Terminal rendering
 ├── Analytical brain (background) └── User input capture
+├── Mneme memory department (background)
 ├── Skills registry + activation
 ├── Workflow registry + activation
 ├── MCP client (HTTP + stdio)
@@ -131,7 +98,7 @@ The **Brain** is the persistent service — it handles LLM calls, tool execution
 | Framework | Rails 8.1 (headless — no web views, no asset pipeline) |
 | Database | SQLite (3 databases per environment: primary, queue, cable) |
 | Event system | Rails Structured Event Reporter + Action Cable bridge |
-| LLM integration | Anthropic API (Claude
+| LLM integration | Anthropic API (Claude Opus 4.6 + Claude Haiku 4.5) |
 | External tools | Model Context Protocol (HTTP + stdio transports) |
 | Transport | Action Cable WebSocket (Solid Cable adapter) |
 | Background jobs | Solid Queue |
@@ -161,11 +128,12 @@ journalctl --user -u anima # View logs
 State directory (`~/.anima/`):
 ```
 ~/.anima/
+├── soul.md # Agent's self-authored identity (always in context)
+├── config.toml # Main settings (hot-reloadable)
+├── mcp.toml # MCP server configuration
 ├── config/
 │ ├── credentials/ # Rails encrypted credentials per environment
 │ └── anima.yml # Placeholder config
-├── config.toml # Main settings (hot-reloadable)
-├── mcp.toml # MCP server configuration
 ├── agents/ # User-defined specialist agents (override built-ins)
 ├── skills/ # User-defined skills (override built-ins)
 ├── workflows/ # User-defined workflows (override built-ins)
@@ -174,7 +142,7 @@ State directory (`~/.anima/`):
 └── tmp/
 ```
 
-Updates: `gem update
+Updates: `anima update` — upgrades the gem and merges new config settings into your existing `config.toml` without overwriting customized values. Use `anima update --migrate-only` to skip the gem upgrade and only add missing config keys.
 
 ### Authentication Setup
 
@@ -198,16 +166,17 @@ The agent has access to these built-in tools:
 | `read` | Read files with smart truncation and offset/limit paging |
 | `write` | Create or overwrite files |
 | `edit` | Surgical text replacement with uniqueness constraint |
-| `web_get` | Fetch content from HTTP/HTTPS URLs |
+| `web_get` | Fetch content from HTTP/HTTPS URLs (HTML → Markdown, JSON → TOON) |
 | `spawn_specialist` | Spawn a named specialist sub-agent from the registry |
 | `spawn_subagent` | Spawn a generic child session with custom tool grants |
-| `return_result` | Sub-agents only — deliver results back to parent |
 
 Plus dynamic tools from configured MCP servers, namespaced as `server_name__tool_name`.
 
 ### Sub-Agents
 
-
+Sub-agents aren't processes — they're sessions on the same event bus. When a sub-agent spawns, its viewport assembles from two scopes: its own events (prioritized) and the parent's events (filling remaining budget). No context serialization, no summary prompts — the sub-agent sees the parent's raw event stream and already knows everything the parent knows. Lossless inheritance by architecture, not by prompting.
+
+Two types:
 
 **Named Specialists** — predefined agents with specific roles and tool sets, defined in `agents/` (built-in or user-overridable):
 
@@ -219,9 +188,9 @@ Two types of autonomous child sessions:
 | `thoughts-analyzer` | Extract decisions from project history |
 | `web-search-researcher` | Research questions via web search |
 
-**Generic Sub-agents** — child sessions
+**Generic Sub-agents** — child sessions with custom tool grants for ad-hoc tasks. Each generic sub-agent gets a Haiku-generated nickname (e.g. `@loop-sleuth`, `@api-scout`) for @mention addressing.
 
-Sub-agents
+Sub-agents communicate through natural text — their `agent_message` events route to the parent session automatically, and the parent replies via `@name` mentions. No special tools needed; when a sub-agent writes text, the parent sees it. When the parent @mentions a sub-agent, the message arrives in that child's session. Workers become colleagues.
 
 ### Skills
 
@@ -232,7 +201,7 @@ Domain knowledge bundles loaded from Markdown files. Skills provide specialized
 - **Override:** User skills with the same name replace built-in ones
 - **Format:** Flat files (`skill-name.md`) or directories (`skill-name/SKILL.md` with `examples/` and `references/`)
 
-Active skills are displayed in the TUI
+Active skills are displayed in the TUI HUD panel (toggle with `C-a → h`).
 
 ### Workflows
 
@@ -256,7 +225,7 @@ description: "Capture findings or context as a persistent note."
 You are tasked with capturing content as a persistent note...
 ```
 
-The active workflow is shown in the TUI
+The active workflow is shown in the TUI HUD panel with a 📜 indicator. The full lifecycle — activation, goal creation, execution, deactivation — is managed by the analytical brain using judgment, not hardcoded triggers.
 
 ### MCP Integration
 
@@ -296,11 +265,16 @@ Secrets are stored in Rails encrypted credentials and interpolated via `${creden
 
 ### Analytical Brain
 
-A
+A separate LLM process that runs as the agent's subconscious — the first microservice in Anima's brain architecture. For the full motivation behind this design, see [LLMs Have ADHD: Why Your AI Agent Needs a Second Brain](https://blog.promptmaster.pro/posts/llms-have-adhd/).
+
+The analytical brain observes the main conversation between turns and handles everything the main agent shouldn't interrupt its flow for:
 
-- **
-- **
-- **Goal tracking** — creates root goals and sub-goals as
+- **Skill activation** — activates/deactivates domain knowledge based on conversation context
+- **Workflow management** — recognizes tasks, activates matching workflows, tracks lifecycle
+- **Goal tracking** — creates root goals and sub-goals as work progresses, marks them complete
+- **Session naming** — generates emoji + short name when the topic becomes clear
+
+Each of these would be a context switch for the main agent — a chore that competes with the primary task. For the analytical brain, they ARE the primary task. Two agents, each in their own flow state.
 
 Goals form a two-level hierarchy (root goals with sub-goals) and are displayed in the TUI. The analytical brain uses a fast model (Claude Haiku 4.5) for speed and runs as a non-persisted "phantom" session.
 
@@ -310,33 +284,34 @@ All tunable values are exposed through `~/.anima/config.toml` with hot-reload (n
 
 ```toml
 [llm]
-model = "claude-
-fast_model = "claude-haiku-4-5
-max_tokens =
-
+model = "claude-opus-4-6"
+fast_model = "claude-haiku-4-5"
+max_tokens = 8192
+max_tool_rounds = 250
+token_budget = 190_000
 
 [timeouts]
-api =
+api = 300
 command = 30
 
 [analytical_brain]
 max_tokens = 4096
 blocking_on_user_message = true
-event_window =
+event_window = 20
 
 [session]
-name_generation_interval =
+name_generation_interval = 30
 ```
 
 ## Design
 
 ### Three Layers (mirroring biology)
 
-1. **
+1. **Cortex (Nous)** — the main LLM. Thinking, decisions, tool use. Reads the system prompt (soul + skills + goals) and the event viewport. This layer is fully implemented.
 
-2. **
+2. **Endocrine system (Thymos)** [planned] — a lightweight background process. Reads recent events. Doesn't respond. Just updates hormone levels. Pure stimulus→response, like a biological gland. The analytical brain is the architectural proof that background subscribers work — Thymos plugs into the same event bus.
 
-3. **
+3. **Homeostasis** [planned] — persistent state (SQLite). Current hormone levels with decay functions. No intelligence, just state that changes over time. The cortex reads hormone state transformed into **desire descriptions** — not "longing: 87" but "you want to see them." Humans don't see cortisol levels, they feel anxiety.
 
 ### Event-Driven Design
 
@@ -356,39 +331,72 @@ Events flow through two channels:
|
|
|
356
331
|
1. **In-process** — Rails Structured Event Reporter (local subscribers like Persister)
|
|
357
332
|
2. **Over the wire** — Action Cable WebSocket (`Event::Broadcasting` callbacks push to connected TUI clients)
|
|
358
333
|
|
|
359
|
-
Events fire, subscribers react, state updates
|
|
334
|
+
Events fire, subscribers react, state updates. The system prompt — soul, active skills, active workflow, current goals — is assembled fresh for each LLM call from live state, not from the event stream. The agent's identity (soul.md) and capabilities (skills, workflows) are always current, never stale.
|
|
360
335
|
|
|
361
336
|
### Context as Viewport, Not Tape
|
|
362
337
|
|
|
363
|
-
|
|
338
|
+
Most agents treat context as an append-only array — messages go in, they never come out (until compaction destroys them). Anima has no array. There are only events persisted in SQLite, and a **viewport** assembled fresh for every LLM call.
|
|
339
|
+
|
|
340
|
+
The viewport is a live query, not a log. It walks events newest-first until the token budget is exhausted. Events that fall out of the viewport aren't deleted — they're still in the database, just not visible to the model right now. The context can shrink, grow, or change composition between any two iterations. If the analytical brain marks a large accidental file read as irrelevant, it's gone from the next viewport — tokens recovered instantly.
|
|
364
341
|
|
|
365
|
-
|
|
342
|
+
This means sessions are endless. No compaction. No lossy rewriting. The model always operates in fresh, high-quality context. The [dumb zone](https://github.com/humanlayer/advanced-context-engineering-for-coding-agents/blob/main/ace-fca.md) never arrives. Meanwhile, Mneme runs as a background department — summarizing evicted events into persistent snapshots so past context is preserved, not destroyed.
|
|
343
|
+
|
|
344
|
+
Sub-agent viewports compose from two event scopes — their own events (prioritized) and parent events (filling remaining budget). Same mechanism, no special handling. The bus is the architecture.
|
|
366
345
|
|
|
367
346
|
### Brain as Microservices on a Shared Event Bus
|
|
368
347
|
|
|
369
348
|
The human brain isn't a single process — it's dozens of specialized subsystems communicating through shared chemical and electrical signals. The prefrontal cortex doesn't "call" the amygdala. They both react to the same event independently, and their outputs combine.
|
|
370
349
|
|
|
371
|
-
Anima mirrors this with an event-driven architecture:
|
|
350
|
+
Anima mirrors this with an event-driven architecture. The analytical brain is the first subscriber — a working proof that the pattern scales. Future subscribers plug into the same bus:
|
|
372
351
|
|
|
373
352
|
```
Event: "tool_call_failed"
│
├── Analytical brain: update goals, check if workflow needs changing
├── Mneme: summarize evicted context into snapshot
├── Thymos subscriber: frustration += 10 [planned]
└── Psyche subscriber: update coefficient (this agent handles errors calmly) [planned]

Event: "user_sent_message"
│
├── Analytical brain: activate relevant skills, name session
├── Mneme: check viewport eviction, fire if boundary left viewport
├── Thymos subscriber: oxytocin += 5 (bonding signal) [planned]
└── Psyche subscriber: associate emotional state with topic [planned]
```
Each subscriber is a microservice — independent, stateless, reacting to the same event bus. No orchestrator decides what to do. The architecture IS the nervous system.
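The fan-out above can be sketched with a toy bus (this is illustrative, not the gem's internals — Anima's real subscribers are wired up in `config/initializers/event_subscribers.rb`): each subscriber registers independently and every one sees the same event.

```ruby
# Toy event bus: no orchestrator routes events; handlers simply react.
class EventBus
  def initialize
    @subscribers = Hash.new { |hash, key| hash[key] = [] }
  end

  def subscribe(event_type, &handler)
    @subscribers[event_type] << handler
  end

  def publish(event_type, payload)
    @subscribers[event_type].each { |handler| handler.call(payload) }
  end
end

bus = EventBus.new
reactions = []

# Each department reacts to the same event on its own terms.
bus.subscribe("tool_call_failed") { |e| reactions << "analytical: reassess goal #{e[:goal_id]}" }
bus.subscribe("tool_call_failed") { |e| reactions << "mneme: consider snapshot" }
bus.subscribe("tool_call_failed") { |e| reactions << "thymos: frustration += 10" }

bus.publish("tool_call_failed", goal_id: 7)
reactions.first # => "analytical: reassess goal 7"
```

Adding a new department is one more `subscribe` call; nothing else in the system changes.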

### Semantic Memory (Mneme)

Every AI agent today has the same disability: amnesia. Context fills up, gets compacted, gets destroyed. The agent gets dumber as the conversation gets longer. When the session ends, everything is gone. Some systems bolt on memory as an afterthought — markdown files with procedures for when to save and what format to use. Filing cabinets the agent has to consciously decide to open, mid-task, while in flow. It never does. The truck is already moving.

Mneme is not a filing cabinet. It's *remembering* — the way biological memory works. Continuous, automatic, layered. A third brain department running on the same event bus as the analytical brain, specializing in one job: making sure nothing important is ever truly lost.

**Eviction-triggered summarization** — Mneme tracks a boundary event on each session. When that event leaves the viewport, Mneme fires: it builds a compressed view of the conversation (full text for messages, `[N tools called]` counters for tool work), sends it to a fast model, and persists a snapshot. The boundary advances after each run — a self-regulating cycle that fires exactly when context is about to be lost, no sooner or later. No timer. No manual trigger. The architecture itself knows when to remember.
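The boundary-advance cycle can be sketched like this (hypothetical class and event ids; the real logic lives in `app/jobs/mneme_job.rb`): fire only when the boundary event is no longer visible, snapshot the evicted span, then move the boundary to the new viewport edge.

```ruby
# Illustrative self-regulating cycle: nothing happens while the boundary
# event is still in the viewport; eviction itself is the trigger.
class MnemeCycle
  attr_reader :boundary_id, :snapshots

  def initialize(boundary_id)
    @boundary_id = boundary_id
    @snapshots = []
  end

  def tick(viewport_ids, all_ids)
    return if viewport_ids.include?(boundary_id) # boundary visible: nothing lost yet

    evicted = all_ids.take_while { |id| !viewport_ids.include?(id) }
    @snapshots << "snapshot of events #{evicted.first}..#{evicted.last}"
    @boundary_id = viewport_ids.first # advance: watch the new viewport edge
  end
end

cycle = MnemeCycle.new(1)
cycle.tick((5..9).to_a, (1..9).to_a) # events 1..4 fell out of the viewport
cycle.boundary_id # => 5
```

Calling `tick` again with the same viewport is a no-op: the cycle fires exactly once per eviction, with no timer anywhere.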

**Two-level snapshot compression** — once source events evict from the sliding window, their snapshots appear in the viewport as memory context. When enough Level 1 snapshots accumulate, Mneme compresses them into a single Level 2 snapshot — recursive summarization that mirrors how human memory consolidates short-term into long-term. Token budget splits across layers (L2: 5%, L1: 15%, recall: 5%, sliding: 75%), creating natural pressure: more memories means less live context, same principle as video compression keyframes. The viewport layout reads like geological strata — deep past at the top, recent past below, live present at the bottom:

```
[Soul — who I am]
[L2 snapshots — weeks ago, compressed]
[L1 snapshots — hours ago, detailed]
[Associative recall — relevant older memories]
[Pinned events — critical moments from active goals]
[Sliding window — the present]
```
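The layer split is simple arithmetic over the percentages stated above (the 100k total here is an arbitrary example figure, not a gem default):

```ruby
# Budget shares from the text: L2 5%, L1 15%, recall 5%, sliding window 75%.
SPLIT = { l2: 0.05, l1: 0.15, recall: 0.05, sliding: 0.75 }.freeze

def layer_budgets(total_tokens)
  SPLIT.transform_values { |share| (total_tokens * share).to_i }
end

layer_budgets(100_000)
# => { l2: 5_000, l1: 15_000, recall: 5_000, sliding: 75_000 }
```

The fixed shares are what create the pressure: every token a memory layer claims is a token the sliding window gives up.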
**Goal-scoped event pinning** — some moments are too important for summaries. Exact user instructions. Key decisions. Critical corrections. Mneme pins these events to active Goals — they float above the sliding window, protected from eviction, surviving intact where compression would lose the nuance that matters. Pins are goal-scoped and many-to-many: one event can attach to multiple Goals, and cleanup is automatic via reference counting. When the last active Goal completes, the pin releases. No manual unpin, no stale pins accumulating forever.
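The reference-counted release can be sketched in a few lines (hypothetical `PinRegistry`; the real models are `PinnedEvent` and `GoalPinnedEvent`): an event stays pinned while at least one active goal still references it.

```ruby
# Sketch of many-to-many pins: event_id => count of active goals pinning it.
class PinRegistry
  def initialize
    @refs = Hash.new(0)
  end

  def pin(event_id, _goal_id)
    @refs[event_id] += 1 # goal identity is tracked in the join table; only counted here
  end

  def goal_completed(event_ids)
    event_ids.each do |id|
      @refs[id] -= 1
      @refs.delete(id) if @refs[id] <= 0 # last active goal done: pin releases itself
    end
  end

  def pinned?(event_id)
    @refs.key?(event_id)
  end
end

pins = PinRegistry.new
pins.pin(42, :goal_a)
pins.pin(42, :goal_b)     # one event, two goals
pins.goal_completed([42]) # goal_a finishes: still pinned via goal_b
pins.pinned?(42)          # => true
pins.goal_completed([42]) # goal_b finishes: pin releases
pins.pinned?(42)          # => false
```

No unpin API exists because none is needed: completion of the last referencing goal is the release.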
**Associative recall** — FTS5 full-text search across the entire event history, across all sessions. Two modes: *passive* recall triggers automatically when goals change — Mneme searches for relevant older context and injects it into the viewport between snapshots and the sliding window. Memories surface on their own, right after the soul, right before the present. The agent doesn't have to decide to remember — the remembering happens around it. *Active* recall via the `remember(event_id:)` tool returns a fractal-resolution window centered on a target event — full detail at the center, compressed snapshots at the edges, like eye focus with sharp fovea and blurry periphery.
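In raw FTS5 terms, a passive recall query might look like the following (illustrative schema only — the real table and column names are defined in the `create_events_fts_index` migration):

```
-- Hypothetical FTS5 table mirroring event content.
CREATE VIRTUAL TABLE events_fts USING fts5(content, session_id UNINDEXED);

-- Passive recall: when goals change, search the whole history, best match first.
SELECT rowid, session_id
FROM events_fts
WHERE events_fts MATCH 'deploy AND staging'
ORDER BY rank
LIMIT 10;
```

`rank` is FTS5's built-in BM25 ordering, so "relevant older context" falls out of the index for free.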
The difference from every other system: memory isn't a tool the agent uses. It's the substrate the agent thinks in. Every LLM call assembles a fresh viewport where identity comes first, then memories, then the present — the agent always knows who it is, always has access to what it learned, and never has to break flow to make that happen.

### TUI HUD & View Modes

The right-side HUD panel shows session state at a glance: session name, goals (with status icons), active skills, workflow, and sub-agents. Toggle with `C-a → h`; when hidden, the input border shows `C-a → h HUD` as a reminder.

Three switchable view modes let you control how much detail the TUI shows. Cycle with `C-a → v`:

| Mode | What you see |
|------|-------------|
| **Verbose** | Everything in Basic, plus timestamps `[HH:MM:SS]`, tool call previews (`🔧 bash` / `$ command` / `↩ response`), and system messages |
| **Debug** | Full X-ray view — timestamps, token counts per message (`[14 tok]`), full tool call args, full tool responses, tool use IDs |

View modes are implemented as a three-layer decorator architecture:

- **ToolDecorator** (server-side, pre-event) — transforms raw tool responses for LLM consumption. Content-Type dispatch converts HTML → Markdown, JSON → TOON. Sits between tool execution and the event stream.
- **EventDecorator** (server-side, Draper) — uniform per event type (`UserMessageDecorator`, `ToolCallDecorator`, etc.). Decides WHAT structured data enters the wire for each view mode.
- **TUI Decorator** (client-side) — unique per tool name (`BashDecorator`, `ReadDecorator`, `EditDecorator`, etc.). Decides HOW each tool looks on screen — tool-specific icons, colors, and formatting.
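The first layer's Content-Type dispatch can be sketched like this (the converter lambdas are stand-ins; the actual conversion lives in `app/decorators/tool_decorator.rb` and friends):

```ruby
# Sketch: pick a converter by Content-Type before the response enters the
# event stream; unknown types pass through untouched.
class ToolResponseTransformer
  CONVERTERS = {
    "text/html"        => ->(body) { "markdown(#{body.length} chars)" }, # HTML -> Markdown stand-in
    "application/json" => ->(body) { "toon(#{body.length} chars)" },     # JSON -> TOON stand-in
  }.freeze

  def self.call(content_type, body)
    converter = CONVERTERS[content_type]
    converter ? converter.call(body) : body
  end
end

ToolResponseTransformer.call("text/html", "<p>hi</p>") # => "markdown(9 chars)"
ToolResponseTransformer.call("text/plain", "raw")      # => "raw"
```

Because this runs pre-event, the LLM and every downstream decorator only ever see the transformed payload.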
Mode is stored on the `Session` model server-side, so it persists across reconnections.

### Plugin Architecture [planned]

The event bus is designed for extension. Tools, feelings, and memory systems are all event subscribers — same mechanism, different namespace:
```
anima-tools-*    → tool capabilities (MCP or native)
anima-feelings-* → hormonal state updates (Thymos subscribers)
anima-memory-*   → recall and association (Mneme subscribers)
```
Currently tools are built-in. Plugin extraction into distributable gems comes later.

## The Vision

### The Problem

Current AI agents are reactive. They receive input, produce output. They don't *want* anything. They don't have moods, preferences, or personal growth. They simulate personality through static prompt descriptions rather than emerging it from dynamic internal states.

### The Insight

The human hormonal system is, at its core, a prompt engineering system. A testosterone spike is a LoRA. Dopamine is a reward signal. The question isn't "can an LLM want?" but "can we build a deep enough context stack that wanting becomes indistinguishable from 'real' wanting?"
And if you think about it — what is "real" anyway? It's just a question of how deep you look and what analogies you draw. The human brain is also a next-token predictor running on biological substrate. Different material, same architecture.

### Core Concepts

#### Desires, Not States

This is not an emotion simulation system. The key distinction: we don't model *states* ("the agent is happy") or *moods* ("the agent feels curious"). We model **desires** — "you want to learn more", "you want to reach out", "you want to explore".
Desires exist BEFORE decisions, like hunger exists before you decide to eat. The agent doesn't decide to send a photo because a parameter says so — it *wants* to, and then decides how.

#### The Thinking Step

The LLM's thinking/reasoning step is the closest thing to an internal monologue. It's where decisions form before output. This is where desires should be injected — not as instructions, but as a felt internal state that colors the thinking process.

#### Hormones as Semantic Tokens

Instead of abstract parameter names (curiosity, boredom, energy), we use **actual hormone names**: testosterone, oxytocin, dopamine, cortisol.
Why? Because LLMs already know the full semantic spectrum of each hormone. "Testosterone: 85" doesn't just mean "energy" — the LLM understands the entire cloud of effects: confidence, assertiveness, risk-taking, focus, competitiveness. One word carries dozens of behavioral nuances.
This mirrors how text-to-image models process tokens — a single word like "captivating" in a CLIP encoder carries a cloud of visual meanings (composition, quality, human focus, closeup). Similarly, a hormone name carries a cloud of behavioral meanings. Same architecture, different domain:
```
Text  → CLIP embedding → image generation
Event → hormone vector → behavioral shift
```

#### The Soul as a Coefficient Matrix

Two people experience the same event. One gets `curiosity += 20`, another gets `anxiety += 20`. The coefficients are different — the people are different. That's individuality.
The soul is not a personality description. It's a **coefficient matrix** — a table of stimulus→response multipliers. Description is consequence; numbers are cause.
And these coefficients are not static. They **evolve through experience** — a child who fears spiders (`fear_gain: 0.9`) can become an entomologist (`fear_gain: 0.2, curiosity_gain: 0.7`). This is measurable, quantifiable personal growth.
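The spider example works out numerically like this (a toy sketch — the coefficient names and the `react` helper are inventions for illustration, not the planned Psyche API):

```ruby
# The soul as stimulus -> hormone multipliers; experience rewrites the numbers.
soul = {
  spider_seen: { fear: 0.9, curiosity: 0.1 },
}

def react(soul, stimulus, intensity)
  soul.fetch(stimulus).transform_values { |gain| (gain * intensity).round }
end

react(soul, :spider_seen, 20) # => { fear: 18, curiosity: 2 }

# Years of entomology later, the identical stimulus lands differently:
soul[:spider_seen] = { fear: 0.2, curiosity: 0.7 }
react(soul, :spider_seen, 20) # => { fear: 4, curiosity: 14 }
```

Same event, same intensity, different person: all of the individuality sits in the multipliers.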

#### Multidimensional Reinforcement Learning

Traditional RL uses a scalar reward signal. Our approach produces a **hormone vector** — multiple dimensions updated simultaneously from a single event. This is closer to biological reality and provides richer behavioral shaping.
The system scales in two directions:
1. **Vertically** — start with one hormone (pure RL), add new ones incrementally. Each hormone = new dimension.
2. **Horizontally** — each hormone expands in aspects of influence. Testosterone starts as "energy", then gains "risk-taking", "confidence", "focus".
Existing RL techniques apply at the starting point, then we gradually expand into multidimensional space.
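A vector update under these ideas might look like the following (the specific deltas are invented for illustration; Thymos is still in the design stage):

```ruby
# One event updates several hormone dimensions at once, each clamped to 0..100,
# instead of collapsing everything into a single scalar reward.
hormones = { testosterone: 50, oxytocin: 50, dopamine: 50, cortisol: 50 }

def apply(hormones, deltas)
  hormones.merge(deltas) { |_key, old, delta| (old + delta).clamp(0, 100) }
end

# A single "user_sent_message" event, expressed as a vector:
hormones = apply(hormones, oxytocin: 5, dopamine: 3, cortisol: -2)
hormones # => { testosterone: 50, oxytocin: 55, dopamine: 53, cortisol: 48 }
```

Starting with one dimension recovers ordinary scalar RL; each added hormone is just another key in the delta hash.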

## Analogy Map

## Status
**Working agent with autonomous capabilities.** Shipping now:

- Event-driven architecture on a shared event bus
- Dynamic viewport context assembly (endless sessions, no compaction)
- Analytical brain (skills, workflows, goals, session naming)
- Mneme memory department (eviction-triggered summarization, persistent snapshots, goal-scoped event pinning, associative recall)
- 9 built-in tools + MCP integration (HTTP + stdio transports)
- 7 built-in skills + 13 built-in workflows (user-extensible)
- Sub-agents with lossless context inheritance (5 specialists + generic)
- Client-server architecture with WebSocket transport + graceful reconnection
- Collapsible HUD panel with goals, skills, workflow, and sub-agent tracking
- Three TUI view modes (Basic / Verbose / Debug)
- Hot-reloadable TOML configuration
- Self-authored soul (agent writes its own system prompt)

**Designed, not yet implemented:**
- Hormonal system (Thymos) — desires as behavioral drivers
- Semantic recall (Mneme) — embedding-based search + re-ranking over FTS5
- Soul matrix (Psyche) — evolving coefficient table for individuality
## Development

```

# Terminal 2: Connect the TUI to the dev brain
./exe/anima tui --host localhost:42135
# Optional: enable performance logging for render profiling
./exe/anima tui --host localhost:42135 --debug
# Frame timing data written to log/tui_performance.log
```
Development uses port **42135** so it doesn't conflict with the production brain (port 42134) running via systemd. On first run, `bin/dev` runs `db:prepare` automatically.