ei-tui 0.1.3 → 0.1.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -20,7 +20,7 @@ There's no other usage, debugging, analytics, tracking, or history information s
20
20
 
21
21
  If there's a problem with the system, you need to tell me here on GitHub, or on Bluesky, or Discord, or whatever. There's no "report a bug" button, no "DONATE" link in the app.
22
22
 
23
- Don't get me wrong -I absolutely want to fix whatever problem you run into, or hear about the feature you want - but your Ei system, and the data you build with it, is yours.
23
+ Don't get me wrong - I absolutely want to fix whatever problem you run into, or hear about the feature you want - but your Ei system, and the data you build with it, is yours.
24
24
 
25
25
  That's what "Local First" means.
26
26
 
@@ -36,15 +36,14 @@ That I can't decrypt.
36
36
 
37
37
  Even if I wanted to (I definitely do not), I wouldn't be able to divulge your information because **You** are the only one that can generate the key. It's not a public/private keypair, it's not a "handshake".
38
38
 
39
- It's your data - I have no right to it, and neither does anyone else except you.
39
+ It's *your* data - I have no right to it, and neither does anyone else except you.
40
40
 
41
41
  ## What's a Persona?
42
42
 
43
43
  At the core of the technology, LLM "Agents" are made up of two or three components, depending on who you ask:
44
44
 
45
45
  1. System Prompt
46
- 2. User Prompt
47
- a. Which can be broken into "Messages", but they're still basically the User Prompt
46
+ 2. User Prompt (which can be broken into "Messages", but they're still basically the User Prompt)
48
47
 
49
48
  The "System Prompt" is the part where you usually say
50
49
 
@@ -55,7 +54,7 @@ The "User Prompt" is the part where you put your messages
55
54
  > user: "OMG ARE YOU REALLY A PIRATE?!"
56
55
  > assistant: "Yar."
57
56
 
58
- A "Persona" is the combination of these two pieces of data, plus some _personality_. The reason I didn't call it an "Agent" is because Personas aren't static<sup>1</sup> - they'll grow and adapt as you talk to them. See the [Core Readme](core/README.md) for more information!
57
+ A "Persona" is the combination of these two pieces of data, plus some _personality_. The reason I didn't call it an "Agent" is because Personas aren't static<sup>1</sup> - they'll grow and adapt as you talk to them. See the [Core Readme](src/README.md) for more information!
59
58
 
60
59
  > <sup>1</sup>: By default. You can make them static.
61
60
 
@@ -83,7 +82,7 @@ Optionally, users can opt into a server-side data sync. This is ideal for users
83
82
 
84
83
  ### Web
85
84
 
86
- When you access Ei via https://ei.flare576.com, your browser will download the assets and walk you through onboarding. If you're running a Local LLM on port :1234 it will auto-detect it, otherwise it will allowing you to enter it.
85
+ When you access Ei via https://ei.flare576.com, your browser will download the assets and walk you through onboarding. If you're running a Local LLM on port :1234 it will auto-detect it, otherwise it prompts you to enter one.
87
86
 
88
87
  Then you'll land on the chat interface. As you enter messages, they'll go to *YOUR* server. As Ei discovers information about you, summaries will be built with *YOUR* server, and data will be stored to *YOUR* LocalStorage in *YOUR* browser.
89
88
 
@@ -107,43 +106,28 @@ More information (including commands) can be found in the [TUI Readme](tui/READM
107
106
 
108
107
  ### Opencode
109
108
 
110
- Opencode saves all of its sessions locally, either in a JSON structure or, if you're running the latest version, in a SQLite DB. If you enable the integration, Ei will pull all of the conversational parts of those sessions and summarize them, pulling out details, quotes, and keeping the summaries up-to-date.
109
+ Ei gives OpenCode a persistent memory. Yes, this is a dynamic, perpetual RAG. I didn't plan it that way, but here we are.
111
110
 
112
- Then, Opencode can call into Ei and pull those details back out.
111
+ Opencode saves all of its sessions locally, either in a JSON structure or, if you're running the latest version, in a SQLite DB. If you enable the integration, Ei will pull all of the conversational parts of those sessions and summarize them, pulling out details, quotes, and keeping the summaries up-to-date.
113
112
 
114
- Yes, I did make a dynamic, perpetual RAG. No, I didn't do it on purpose; that's why you always have a side-project or two going. See [TUI Readme](tui/README.md)
113
+ Then, Opencode can call into Ei and pull those details back out. (That's why you always have a side-project or two going.) See the [TUI Readme](tui/README.md).
115
114
 
116
115
  ## Technical Details
117
116
 
118
117
  This project is separated into five (5) logical parts:
119
118
 
120
- 1. Ei Core
121
- a. Location: `/src`
122
- b. Purpose: Shared between TUI and Web, it's The event-driven core of Ei, housing:
123
- i. Business logic
124
- ii. Prompts
125
- iii. Integrations
126
- 2. Ei Online
127
- a. Location: `/web`
128
- b. Purpose: Provides a web interface for Ei.
129
- c. Deployed to: https://ei.flare576.com
130
- 3. Ei Terminal User Interface (TUI)
131
- a. Location: `/tui`
132
- b. Purpose: Provides a TUI interface for Ei
133
- c. Deployed to: NPM for you to install
134
- 4. Ei API
135
- a. Location: `/api`
136
- b. Purpose: Provides remote sync for Ei.
137
- c. Deployed to: https://ei.flare576.com/api
138
- 5. Ei Command Line Interface (CLI)
139
- a. Location: `/src/cli`
140
- b. Purpose: Provides a CLI interface for Opencode to use as a tool
141
- c. Technically, ships with the TUI
119
+ | Part | Location | Purpose | Deployed To |
120
+ |------|----------|---------|-------------|
121
+ | Ei Core | `/src` | Shared between TUI and Web. The event-driven core of Ei, housing business logic, prompts, and integrations. | (library) |
122
+ | Ei Online | `/web` | Web interface for Ei. | https://ei.flare576.com |
123
+ | Ei Terminal UI (TUI) | `/tui` | TUI interface for Ei. | NPM for you to install |
124
+ | Ei API | `/api` | Remote sync for Ei. | https://ei.flare576.com/api |
125
+ | Ei CLI | `/src/cli` | CLI interface for Opencode to use as a tool. | (ships with TUI) |
142
126
 
143
127
  ## Requirements
144
- [Bun](https://bun.sh) runtime (>=1.0.0)
145
- Local LLM provider (LM Studio, Ollama, etc.)
146
- * OR API access to a remote LLM host (Anthropic, OpenAI, Bedrock, your uncle's LLM farm, etc.)
128
+
129
+ - [Bun](https://bun.sh) runtime (>=1.0.0)
130
+ - A local LLM (LM Studio, Ollama, etc.) OR API access to a cloud provider (Anthropic, OpenAI, Bedrock, your uncle's LLM farm, etc.)
147
131
 
148
132
  ## LM Studio Setup
149
133
 
@@ -165,6 +149,19 @@ npm run build # Compile TypeScript
165
149
  npm run test # Run tests
166
150
  ```
167
151
 
152
+ ## Releases
153
+
154
+ Tag a version to publish automatically:
155
+
156
+ ```bash
157
+ # bump version in package.json
158
+ git commit -am "chore: bump to v0.1.4"
159
+ git tag v0.1.4
160
+ git push && git push --tags
161
+ ```
162
+
163
+ GitHub Actions picks up the tag and publishes to npm with provenance via OIDC. No stored secrets.
164
+
168
165
  ## Project Structure
169
166
 
170
167
  See `AGENTS.md` for detailed architecture and contribution guidelines.
package/package.json CHANGED
@@ -1,7 +1,11 @@
1
1
  {
2
2
  "name": "ei-tui",
3
- "version": "0.1.3",
3
+ "version": "0.1.4",
4
4
  "author": "Flare576",
5
+ "repository": {
6
+ "type": "git",
7
+ "url": "git+https://github.com/flare576/ei.git"
8
+ },
5
9
  "engines": {
6
10
  "bun": ">=1.0.0"
7
11
  },
package/src/README.md CHANGED
@@ -22,8 +22,12 @@ As the user uses the system, it tries to keep track of several data points for t
22
22
  + 0.0: The user never EVER wants to talk about or hear about the subject
23
23
  + 1.0: Every message to and from the user should be about this person, place, or thing
24
24
  * Current: How much the user has talked or heard about a subject, where:
25
- + 0.0: Obi-Wan Kenobi ...now thats a name Ive not heard in a long time
25
+ + 0.0: Obi-Wan Kenobi ...now that's a name I've not heard in a long time
26
26
  + 1.0: The user just spent 4 hours talking about Star Wars
27
+ - Strength: The system will try to gauge how strongly you exhibit a Trait
28
+ * 1.0 on "Visual Learner" would mean that you've said or shown that it is the absolute best way for you to learn
29
+ * 0.0 on "Public Speaker" would mean you've said or shown that you have no desire, aptitude, or willingness to present
30
+ - Validated: "Facts" have proven almost as hard to get right as Traits, so I added a way for Ei and you to mark the ones that are true as "Validated"
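Taken together, the scales above suggest a shape like the following. This is a hypothetical sketch only; the field and interface names are assumptions for illustration, not the project's actual types:

```typescript
// Hypothetical sketch of the tracked scales described above.
// All values live on a 0.0–1.0 scale; names are illustrative assumptions.
interface TrackedItem {
  name: string;
  frequency: number; // 0.0 = never mention it, 1.0 = every message should touch it
  current: number;   // 0.0 = long-unheard-of, 1.0 = just discussed at length
}

interface TrackedTrait extends TrackedItem {
  strength: number;  // 0.0 = no desire/aptitude shown, 1.0 = strongly exhibited
}

interface TrackedFact extends TrackedItem {
  validated: boolean; // marked true once Ei (or you) confirms the fact is accurate
}
```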
27
31
 
28
32
  Each of those types represents a piece of what the system "knows" about the person, and all but "Traits" are kept up-to-date as the person chats with Personas, though not necessarily on every message. On each message to a Persona, a check is made:
29
33
 
@@ -94,3 +98,83 @@ This is largely tracked by exposure, but expiration is dictated by an Agent.
94
98
 
95
99
  After we've removed irrelevant topics, this is the Agent's opportunity to add NEW topics that might be of interest to the Persona (and the user). Again, it's a prompt to an agent if the Persona doesn't have its full capacity of Topics.
96
100
 
101
+ # Opencode Importer
102
+
103
+ The current implementation of the importer is very, very simple. You could probably just read the code to get the idea, but, essentially:
104
+
105
+ 1. If there's any queue, skip
106
+ 2. Look at the `last_extraction_ts` and find the OpenCode Session with the last_updated time closest to it
107
+ 3. Check if we've already imported this session, and what the cutoff is for "already seen" messages
108
+ 4. Wipe the Persona's history and mark them as archived
109
+ 5. Write the messages to the Persona, marking old messages as `[p,r,o,f]` processed
110
+ 6. Queue the extract
111
+ 7. Bump the `last_extraction_ts` to that session's last_updated timestamp
112
+
113
+ That's it. We slowly process through OpenCode's backlog, chronologically, until we finally catch up. Once we do, for now, we just parse each message/exchange and extract. I think eventually I'll add a gate that says "if last_updated is less than 24h ago, skip this extraction" so that each session has a better chance of being "Complete" and offering the extraction process a whole look.
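The loop above might be sketched like this. It's a hedged sketch under assumptions: the `Session` shape, `importTick` name, and `state` fields are illustrative, and the dedupe/wipe/write steps (3–5) are elided:

```typescript
// Toy version of one importer tick, following the numbered steps above.
// Session/state shapes are assumptions, not the shipped code.
interface Session { id: string; lastUpdated: number; messages: string[] }
interface ImportState { lastExtractionTs: number; queue: unknown[] }

function importTick(state: ImportState, sessions: Session[]): Session | null {
  if (state.queue.length > 0) return null;          // 1. anything queued? skip this tick
  const next = sessions                             // 2. oldest session past the marker
    .filter((s) => s.lastUpdated > state.lastExtractionTs)
    .sort((a, b) => a.lastUpdated - b.lastUpdated)[0];
  if (!next) return null;                           // caught up with the backlog
  // (steps 3–5: dedupe check, wipe/archive, write messages — elided here)
  state.queue.push({ extract: next.id });           // 6. queue the extract
  state.lastExtractionTs = next.lastUpdated;        // 7. bump the marker
  return next;
}
```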
114
+
115
+ ## Failed Approaches
116
+
117
+ In the spirit of "Write down what you tried, so you don't try it again," here are the other ways we've attempted to pull in the data.
118
+
119
+ ### V1 - Greedy
120
+
121
+ The first mechanism used a three-phase process. First, we'd pull **every** message from OpenCode into Ei, assigning them to personas. This was before we had markers on each message to indicate if they'd been processed for `[Facts, tRaits, People, tOpics]` (the `[f,r,p,o]`, but I usually switch the order to `[p,r,o,f]` because I can't remember the other order). On subsequent runs, we'd just tack the latest messages onto the end of the conversation.
122
+
123
+ Second, we'd queue up Topic updates and quote retrieval starting with the most recent messages. The goal was to get the latest, hottest quotes out of the system as soon as possible so the user gets "immediate value."
124
+
125
+ Last, we'd start queuing up Fact, Trait, and People updates from the oldest records.
126
+
127
+ #### Why Didn't This Work?
128
+
129
+ You ever try to tell an LLM a story by starting at the end and working your way backward? Don't bother, it doesn't work. Trying to process the last messages first was an attempt to get visible value as soon as possible, but that value was a lie.
130
+
131
+ Oh, and that initial queue for quotes was about 450 items.
132
+
133
+ ### V2 - Clever
134
+
135
+ The second approach ended up being roughly 800 lines of "Clever" code. I call it that because Opus found at least 4 edge cases in the logic that we added dials and trackers around, so this quote applied:
136
+
137
+ > Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?
138
+ > -- Brian Kernighan, 1974
139
+
140
+ The approach broke the messages into a "recent" timeframe and the rest. We had just added message roll-off to the main personas with a rule set of "Always keep at least 200 messages, but after that roll off any message older than 14 days," so we used the same approach for Opencode Agent personas.
141
+
142
+ We loaded the last 14 days of messages... Which was tricky because we _also_ wanted to keep messages tied to their "sessions" for context, so if a user added a message to a session from last year, we'd need to pull old messages (that we already processed) and the new messages (that we hadn't) into a block that the system could parse correctly.
143
+
144
+ Oh, and we had to process the "older than 14 days" messages in, too, so we needed _another_ timestamp tracker for that...
145
+
146
+ And sometimes that process would _also_ be split in the same way as the recent messages...
147
+
148
+ Aaaand the most recent 14 days of messages, queued for all four data types, resulted in an initial queue of 300 **scans**, and once each of those scans found 10 Topics to talk about, the queue jumped to 3,000 items instantly.
149
+
150
+ # The Processing Loop
151
+
152
+ The processor runs a tight loop every 100ms. On each tick it checks: is the queue idle? Is there a request waiting? If yes, it dequeues the highest-priority item and hands it to the LLM. One at a time. Always.
153
+
154
+ This is intentional. Concurrent LLM calls sound appealing until you're watching a persona give contradictory answers because two extractions ran in parallel on the same data. Boring serialization beats exciting race conditions.
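A minimal sketch of that loop, assuming a simple priority field (the `Job` shape and `busy` flag are illustrative, not the shipped implementation):

```typescript
// Toy version of the serialized processing loop: one in-flight job, ever.
type Job = { priority: number; run: () => Promise<void> };

const queue: Job[] = [];
let busy = false;

const tick = setInterval(async () => {
  if (busy || queue.length === 0) return;     // idle check: skip if a job is running
  queue.sort((a, b) => b.priority - a.priority);
  const job = queue.shift()!;
  busy = true;                                // serialize: nothing else dequeues now
  try {
    await job.run();                          // hand exactly one item to the LLM
  } finally {
    busy = false;
  }
}, 100);
```

Because `busy` is checked and set on the same tick, two extractions can never run in parallel, which is exactly the race the prose warns about.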
155
+
156
+ # Context Windows
157
+
158
+ Personas don't send their entire message history to the LLM. By default, only messages from the last 8 hours are included (`context_window_hours`, configurable per persona). Older messages are still stored — they're just not in the prompt.
159
+
160
+ Message rolloff works differently: messages are kept until there are at least 200 of them _and_ any are older than 14 days. So a persona you chat with daily will roll off old messages gradually; one you chat with twice a year will keep everything.
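The rolloff rule can be sketched as a small predicate. This is an assumption-laden sketch (the `rollOff` name and message shape are mine), keeping the stated rule: at least 200 messages always survive, and only messages older than 14 days beyond those are dropped:

```typescript
// Sketch of the rolloff rule: keep the 200 newest unconditionally,
// then keep anything newer than 14 days; drop the rest.
const FOURTEEN_DAYS_MS = 14 * 24 * 60 * 60 * 1000;

function rollOff<T extends { ts: number }>(messages: T[], now: number, keepMin = 200): T[] {
  if (messages.length <= keepMin) return messages; // under the floor: keep everything
  const cutoff = now - FOURTEEN_DAYS_MS;
  const sorted = [...messages].sort((a, b) => b.ts - a.ts); // newest first
  return sorted.filter((m, i) => i < keepMin || m.ts >= cutoff);
}
```

A daily-chat persona hits the 200 floor quickly and sheds its stale tail; a twice-a-year persona never exceeds the floor and keeps everything, matching the prose above.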
161
+
162
+ # Embeddings
163
+
164
+ Every fact, trait, person, topic, and quote gets a vector embedding when it's created or updated. The model is `all-MiniLM-L6-v2` (384 dimensions) — small enough to run locally, accurate enough to be useful.
165
+
166
+ When building a system prompt for a response, the system doesn't just dump all your data into the context. It uses cosine similarity against the current message to find the most relevant items — up to 15 per type, with a 0.3 similarity threshold. Everything below the threshold gets left out.
167
+
168
+ The model runs via `fastembed` in Bun/Node and `@huggingface/transformers` in the browser. Both are loaded lazily so the bundler doesn't have a bad day.
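The retrieval step above reduces to cosine similarity plus a threshold and a cap. Here's a stand-in sketch (not the project's actual `findTopK` from the embedding service; the function name and item shape are assumptions):

```typescript
// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1); // guard zero vectors
}

// Keep the k most similar items above the threshold (defaults from the text:
// up to 15 per type, 0.3 similarity floor).
function topKBySimilarity<T extends { embedding: number[] }>(
  query: number[], items: T[], k = 15, threshold = 0.3,
): T[] {
  return items
    .map((item) => ({ item, score: cosine(query, item.embedding) }))
    .filter(({ score }) => score >= threshold) // below-threshold items never reach the prompt
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(({ item }) => item);
}
```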
169
+
170
+ # Encryption
171
+
172
+ The sync feature uses AES-GCM-256 with a key derived via PBKDF2 (310,000 iterations) from `username:passphrase`. The key never leaves your device. The server receives an encrypted blob it can't read.
173
+
174
+ There's a subtle trick for the user ID: to identify your data on the server without sending credentials, the system encrypts a fixed known plaintext (`"the_answer_is_42"`) with a fixed IV using your key. Same credentials always produce the same ciphertext, which becomes your server-side ID. No account, no lookup table — your identity _is_ your credentials.
175
+
176
+ # Heartbeats
177
+
178
+ Each persona has a heartbeat timer. If a persona hasn't had activity for 30 minutes (configurable via `heartbeat_delay_ms`), the system queues a check-in prompt. The persona "wakes up," considers what's been going on, and may have something to say.
179
+
180
+ Whether they do is up to them and the prompt. Some personas are chatty. Some are not.
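The heartbeat check itself is small. A hedged sketch, assuming per-persona `lastActivity` and `heartbeatDelayMs` fields (names are mine, not the shipped schema):

```typescript
// Which personas are overdue for a heartbeat check-in?
interface Persona { name: string; lastActivity: number; heartbeatDelayMs: number }

function duePersonas(personas: Persona[], now: number): Persona[] {
  // A persona is "due" once its quiet period exceeds its configured delay.
  return personas.filter((p) => now - p.lastActivity >= p.heartbeatDelayMs);
}
```

Each due persona would then get a check-in prompt queued; whether anything comes of it is still up to the persona and its prompt, as the text says.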
package/src/cli/README.md CHANGED
@@ -1,10 +1,7 @@
1
1
  # The CLI
2
-
3
- It's actually super straight-forward
4
-
5
2
  ```sh
6
3
  ei # Start the TUI
7
- ei "query string" # Return up to 10 "query string" across all types
4
+ ei "query string" # Return up to 10 results across all types
8
5
  ei -n 5 "query string" # Return up to 5 results
9
6
  ei facts -n 5 "query string" # Return up to 5 facts
10
7
  ei traits -n 5 "query string" # Return up to 5 traits
@@ -13,10 +10,12 @@ ei topics -n 5 "query string" # Return up to 5 topics
13
10
  ei quotes -n 5 "query string" # Return up to 5 quotes
14
11
  ei --id <id> # Look up a specific entity by ID
15
12
  echo <id> | ei --id # Look up entity by ID from stdin
13
+ ei --install # Install the Ei tool for OpenCode
16
14
  ```
17
15
 
18
- # An Agentic Tool
16
+ Type aliases: `fact`, `trait`, `person`, `topic`, `quote` all work (singular or plural).
19
17
 
18
+ # An Agentic Tool
20
19
 
21
20
  The `--id` flag is designed for piping. For example, search for a topic and then fetch the full entity:
22
21
 
@@ -24,24 +23,33 @@ The `--id` flag is designed for piping. For example, search for a topic and then
24
23
  ei "memory leak" | jq '.[0].id' | ei --id
25
24
  ```
26
25
 
27
- To register Ei as an explicit OpenCode tool (optional — agents can also just call `ei` via shell):
26
+ # OpenCode Integration
28
27
 
29
- ```bash
30
- mkdir -p ~/.config/opencode/tools
28
+ ## Quick Install
29
+
30
+ ```sh
31
+ ei --install
31
32
  ```
32
33
 
33
- Create `~/.config/opencode/tools/ei.ts`:
34
+ This writes `~/.config/opencode/tools/ei.ts` with a complete tool definition. Restart OpenCode to activate.
34
35
 
35
- ```typescript
36
- import { tool } from "@opencode-ai/plugin"
36
+ ## What the Tool Provides
37
37
 
38
- export default tool({
39
- description: "Search the user's Ei knowledge base for facts, people, topics, traits, and quotes",
40
- args: {
41
- query: tool.schema.string().describe("Search query"),
42
- },
43
- async execute(args) {
44
- return await Bun.$`ei ${args.query}`.text()
45
- },
46
- })
47
- ```
38
+ The installed tool gives OpenCode agents access to all five data types with proper Zod-validated args:
39
+
40
+ | Arg | Type | Description |
41
+ |-----|------|-------------|
42
+ | `query` | string (required) | Search text, or entity ID when `lookup=true` |
43
+ | `type` | enum (optional) | `facts` \| `traits` \| `people` \| `topics` \| `quotes` — omit for balanced results |
44
+ | `limit` | number (optional) | Max results, default 10 |
45
+ | `lookup` | boolean (optional) | If true, fetch single entity by ID |
46
+
47
+ ## Output Shapes
48
+
49
+ All search commands return arrays. Each result includes a `type` field.
50
+
51
+ **Fact / Trait / Person / Topic**: `{ type, id, name, description, sentiment, ...type-specific fields }`
52
+
53
+ **Quote**: `{ type, id, text, speaker, timestamp, linked_items[] }`
54
+
55
+ **ID lookup** (`lookup: true`): single object (not an array) with the same shape.
@@ -1,4 +1,5 @@
1
1
  import type { StorageState, Quote, Fact, Trait, Person, Topic } from "../core/types";
2
+ import { crossFind } from "../core/utils/index.ts";
2
3
  import { join } from "path";
3
4
  import { readFile } from "fs/promises";
4
5
  import { getEmbeddingService, findTopK } from "../core/embedding-service";
@@ -249,21 +250,8 @@ export async function lookupById(id: string): Promise<({ type: string } & Record
249
250
  return null;
250
251
  }
251
252
 
252
- const collections: Array<{ type: string; source: Array<{ id: string; [k: string]: unknown }> }> = [
253
- { type: "fact", source: state.human.facts },
254
- { type: "trait", source: state.human.traits },
255
- { type: "person", source: state.human.people },
256
- { type: "topic", source: state.human.topics },
257
- { type: "quote", source: state.human.quotes },
258
- ];
259
-
260
- for (const { type, source } of collections) {
261
- const entity = source.find(item => item.id === id);
262
- if (entity) {
263
- const { embedding, ...rest } = entity;
264
- return { type, ...rest };
265
- }
266
- }
267
-
268
- return null;
253
+ const found = crossFind(id, state.human);
254
+ if (!found) return null;
255
+ const { type, embedding, ...rest } = found;
256
+ return { type, ...rest };
269
257
  }
package/src/cli.ts CHANGED
@@ -12,6 +12,7 @@
12
12
  */
13
13
 
14
14
  import { parseArgs } from "util";
15
+ import { join } from "path";
15
16
  import { retrieveBalanced, lookupById } from "./cli/retrieval";
16
17
 
17
18
  const TYPE_ALIASES: Record<string, string> = {
@@ -50,6 +51,7 @@ Types:
50
51
  Options:
51
52
  --number, -n Maximum number of results (default: 10)
52
53
  --id Look up entity by ID (accepts value or stdin)
54
+ --install Write the Ei tool file to ~/.config/opencode/tools/
53
55
  --help, -h Show this help message
54
56
 
55
57
  Examples:
@@ -62,6 +64,68 @@ Examples:
62
64
  `);
63
65
  }
64
66
 
67
+ function buildOpenCodeToolContent(): string {
68
+ const lines = [
69
+ 'import { tool } from "@opencode-ai/plugin"',
70
+ '',
71
+ 'export default tool({',
72
+ ' description: [',
73
+ ' "Search the user\'s Ei knowledge base \u2014 a persistent memory store built from conversations.",',
74
+ ' "Returns facts, personality traits, people, topics of interest, and quotes.",',
75
+ ' "Use this to recall anything about the user: preferences, relationships, or past discussions.",',
76
+ ' "Results include entity IDs that can be passed back with lookup=true to get full detail.",',
77
+ ' ].join(" "),',
78
+ ' args: {',
79
+ ' query: tool.schema.string().describe(',
80
+ ' "Search text, or an entity ID when lookup=true. Supports natural language."',
81
+ ' ),',
82
+ ' type: tool.schema',
83
+ ' .enum(["facts", "traits", "people", "topics", "quotes"])',
84
+ ' .optional()',
85
+ ' .describe(',
86
+ ' "Filter to a specific data type. Omit to search all types (balanced across all 5)."',
87
+ ' ),',
88
+ ' limit: tool.schema',
89
+ ' .number()',
90
+ ' .int()',
91
+ ' .positive()',
92
+ ' .default(10)',
93
+ ' .optional()',
94
+ ' .describe("Maximum number of results to return. Default: 10."),',
95
+ ' lookup: tool.schema',
96
+ ' .boolean()',
97
+ ' .optional()',
98
+ ' .describe(',
99
+ ' "If true, treat query as an entity ID and return that single entity in full detail."',
100
+ ' ),',
101
+ ' },',
102
+ ' async execute(args) {',
103
+ ' const cmd: string[] = ["ei"];',
104
+ ' if (args.lookup) {',
105
+ ' cmd.push("--id", args.query);',
106
+ ' } else {',
107
+ ' if (args.type) cmd.push(args.type);',
108
+ ' if (args.limit && args.limit !== 10) cmd.push("-n", String(args.limit));',
109
+ ' cmd.push(args.query);',
110
+ ' }',
111
+ ' return Bun.$`${cmd}`.text();',
112
+ ' },',
113
+ '})',
114
+ '',
115
+ ];
116
+ return lines.join('\n');
117
+ }
118
+
119
+ async function installOpenCodeTool(): Promise<void> {
120
+ const toolsDir = join(process.env.HOME || "~", ".config", "opencode", "tools");
121
+ const toolPath = join(toolsDir, "ei.ts");
122
+
123
+ await Bun.$`mkdir -p ${toolsDir}`;
124
+ await Bun.write(toolPath, buildOpenCodeToolContent());
125
+ console.log(`✓ Installed Ei tool to ${toolPath}`);
126
+ console.log(` Restart OpenCode to activate.`);
127
+ }
128
+
65
129
  async function main(): Promise<void> {
66
130
  const args = process.argv.slice(2);
67
131
 
@@ -82,6 +146,11 @@ async function main(): Promise<void> {
82
146
  process.exit(0);
83
147
  }
84
148
 
149
+ if (args[0] === "--install") {
150
+ await installOpenCodeTool();
151
+ process.exit(0);
152
+ }
153
+
85
154
 
86
155
  // Handle --id flag: look up entity by ID
87
156
  const idFlagIndex = args.indexOf("--id");