@aleph-ai/tinyaleph 1.0.0
- package/LICENSE +21 -0
- package/README.md +278 -0
- package/backends/cryptographic/index.js +196 -0
- package/backends/index.js +15 -0
- package/backends/interface.js +89 -0
- package/backends/scientific/index.js +272 -0
- package/backends/semantic/index.js +527 -0
- package/backends/semantic/surface.js +393 -0
- package/backends/semantic/two-layer.js +375 -0
- package/core/fano.js +127 -0
- package/core/hilbert.js +564 -0
- package/core/hypercomplex.js +141 -0
- package/core/index.js +133 -0
- package/core/llm.js +132 -0
- package/core/prime.js +184 -0
- package/core/resonance.js +695 -0
- package/core/rformer-tf.js +1086 -0
- package/core/rformer.js +806 -0
- package/core/sieve.js +350 -0
- package/data.json +8163 -0
- package/docs/EXAMPLES_PLAN.md +293 -0
- package/docs/README.md +159 -0
- package/docs/design/ALEPH_CHAT_ARCHITECTURE.md +499 -0
- package/docs/guide/01-quickstart.md +298 -0
- package/docs/guide/02-semantic-computing.md +409 -0
- package/docs/guide/03-cryptographic.md +420 -0
- package/docs/guide/04-scientific.md +494 -0
- package/docs/guide/05-llm-integration.md +568 -0
- package/docs/guide/06-advanced.md +996 -0
- package/docs/guide/README.md +188 -0
- package/docs/reference/01-core.md +695 -0
- package/docs/reference/02-physics.md +601 -0
- package/docs/reference/03-backends.md +892 -0
- package/docs/reference/04-engine.md +632 -0
- package/docs/reference/README.md +252 -0
- package/docs/theory/01-prime-semantics.md +327 -0
- package/docs/theory/02-hypercomplex-algebra.md +421 -0
- package/docs/theory/03-phase-synchronization.md +364 -0
- package/docs/theory/04-entropy-reasoning.md +348 -0
- package/docs/theory/05-non-commutativity.md +402 -0
- package/docs/theory/06-two-layer-meaning.md +414 -0
- package/docs/theory/07-resonant-field-interface.md +419 -0
- package/docs/theory/08-semantic-sieve.md +520 -0
- package/docs/theory/09-temporal-emergence.md +298 -0
- package/docs/theory/10-quaternionic-memory.md +415 -0
- package/docs/theory/README.md +162 -0
- package/engine/aleph.js +418 -0
- package/engine/index.js +7 -0
- package/index.js +23 -0
- package/modular.js +254 -0
- package/package.json +99 -0
- package/physics/collapse.js +95 -0
- package/physics/entropy.js +88 -0
- package/physics/index.js +65 -0
- package/physics/kuramoto.js +91 -0
- package/physics/lyapunov.js +80 -0
- package/physics/oscillator.js +95 -0
- package/types/index.d.ts +575 -0
@@ -0,0 +1,499 @@

# AlephChat: Hybrid LLM/TinyAleph Chat Client Design

## Overview

AlephChat is a conversational AI system that combines the neural language generation of a local LLM (via LMStudio) with TinyAleph's deterministic semantic processing. The system transparently learns new vocabulary and adapts to the user's communication style while maintaining semantic coherence through hypercomplex embeddings.

## Architecture Diagram

```
┌─────────────────────────────────────────────────────────────────────────────┐
│                              AlephChat Client                               │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  ┌───────────────┐     ┌──────────────────┐     ┌───────────────────┐       │
│  │  User Input   │────▶│  PromptEnhancer  │────▶│  LMStudio Client  │       │
│  └───────────────┘     └────────┬─────────┘     └─────────┬─────────┘       │
│                                 │                         │                 │
│                                 ▼                         ▼                 │
│  ┌───────────────────────────────────────────────────────────────────┐      │
│  │                         AlephSemanticCore                         │      │
│  │  ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌──────────┐     │      │
│  │  │ Vocabulary  │ │    Style    │ │    Topic    │ │ Concept  │     │      │
│  │  │   Manager   │ │   Profiler  │ │   Tracker   │ │  Graph   │     │      │
│  │  └─────────────┘ └─────────────┘ └─────────────┘ └──────────┘     │      │
│  └───────────────────────────────────────────────────────────────────┘      │
│                                 │                         │                 │
│                                 ▼                         ▼                 │
│  ┌───────────────────────────────────────────────────────────────────┐      │
│  │                           Context Memory                          │      │
│  │  ┌─────────────┐ ┌─────────────┐ ┌─────────────────────────────┐  │      │
│  │  │  Immediate  │ │   Session   │ │         Persistent          │  │      │
│  │  │   Buffer    │ │   Memory    │ │           Memory            │  │      │
│  │  │  (5-10 msg) │ │  (current)  │ │      (JSON file store)      │  │      │
│  │  └─────────────┘ └─────────────┘ └─────────────────────────────┘  │      │
│  └───────────────────────────────────────────────────────────────────┘      │
│                                 │                                           │
│                                 ▼                                           │
│  ┌───────────────┐     ┌──────────────────┐     ┌───────────────────┐       │
│  │ LLM Response  │◀────│ResponseProcessor │◀────│   LLM Streaming   │       │
│  └───────────────┘     └──────────────────┘     └───────────────────┘       │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
```

## Core Components

### 1. AlephSemanticCore

The semantic heart of the system, built on TinyAleph's SemanticBackend:

```javascript
class AlephSemanticCore {
  constructor(options = {}) {
    this.backend = new SemanticBackend({ dimension: options.dimension ?? 16 });
    this.vocabulary = new VocabularyManager(this.backend);
    this.styleProfiler = new StyleProfiler(this.backend);
    this.topicTracker = new TopicTracker(this.backend);
    this.conceptGraph = new ConceptGraph(this.backend);
  }
}
```

**Sub-components:**

| Component | Purpose | Key Methods |
|-----------|---------|-------------|
| VocabularyManager | Tracks known words, learns new ones with prime encodings | `learn(word)`, `isKnown(word)`, `encode(word)` |
| StyleProfiler | Builds the user's communication-style embedding | `updateStyle(text)`, `getStyleVector()`, `matchStyle(response)` |
| TopicTracker | Tracks current conversation topics via hypercomplex states | `updateTopic(text)`, `getCurrentTopics()`, `getTopicRelevance(text)` |
| ConceptGraph | Maps relationships between concepts | `addRelation(a, rel, b)`, `query(concept)`, `findRelated(concept)` |

### 2. Context Memory

A multi-tiered memory system for conversation context:

```javascript
class ContextMemory {
  constructor(options = {}) {
    this.immediate = new ImmediateBuffer(options.bufferSize ?? 10);
    this.session = new SessionMemory();
    this.persistent = new PersistentMemory(options.storePath);
    this.semanticIndex = new SemanticIndex(options.backend);
  }
}
```

**Memory Tiers:**

```
┌─────────────────────────────────────────────────────────────┐
│                     Memory Architecture                     │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  IMMEDIATE (in-memory ring buffer)                          │
│  ├── Last 5-10 exchanges                                    │
│  ├── Full text + embeddings                                 │
│  └── Used for: Direct context injection                     │
│                                                             │
│  SESSION (in-memory map)                                    │
│  ├── All exchanges this session                             │
│  ├── Topic summaries                                        │
│  ├── Learned vocabulary this session                        │
│  └── Used for: Semantic retrieval, topic continuity         │
│                                                             │
│  PERSISTENT (JSON file)                                     │
│  ├── User style profile                                     │
│  ├── Learned vocabulary + primes                            │
│  ├── Concept graph                                          │
│  ├── Notable conversation snippets                          │
│  └── Used for: Long-term learning, cross-session memory     │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```
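
The immediate tier can be as simple as a fixed-size ring buffer. A minimal sketch of what `ImmediateBuffer` might look like (the class name comes from the design above; the method names are assumptions):

```javascript
// Minimal ring buffer for the IMMEDIATE tier: keeps the last N exchanges.
class ImmediateBuffer {
  constructor(size = 10) {
    this.size = size;
    this.items = [];
  }

  // Add one exchange; evict the oldest once the buffer is full.
  push(exchange) {
    this.items.push(exchange);
    if (this.items.length > this.size) this.items.shift();
  }

  // Most recent first, for direct context injection.
  recent(n = this.size) {
    return this.items.slice(-n).reverse();
  }
}
```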

### 3. LMStudio Client

Interface to the local LLM via LMStudio's OpenAI-compatible API:

```javascript
class LMStudioClient {
  constructor(options = {}) {
    this.baseUrl = options.baseUrl || 'http://localhost:1234/v1';
    this.model = options.model || 'local-model';
    this.temperature = options.temperature ?? 0.7;
    this.maxTokens = options.maxTokens ?? 2048;
  }

  async chat(messages, options) { /* ... */ }
  async *streamChat(messages, options) { /* ... */ }
  async listModels() { /* ... */ }
}
```
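
A sketch of how `chat()` might be filled in against LMStudio's OpenAI-compatible `/chat/completions` endpoint. The request-body shape follows the OpenAI chat API; `buildBody` is a hypothetical helper, error handling and streaming are elided, and global `fetch` (Node 18+) is assumed:

```javascript
// Merge per-call options over the client defaults into an
// OpenAI-style request body.
function buildBody(client, messages, options = {}) {
  return {
    model: options.model ?? client.model,
    messages,
    temperature: options.temperature ?? client.temperature,
    max_tokens: options.maxTokens ?? client.maxTokens,
    stream: false
  };
}

// Non-streaming chat call; returns the assistant message text.
async function chat(client, messages, options) {
  const res = await fetch(`${client.baseUrl}/chat/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildBody(client, messages, options))
  });
  if (!res.ok) throw new Error(`LMStudio request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Using `??` rather than `||` lets a caller legitimately pass `temperature: 0` without it falling back to the default.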

### 4. PromptEnhancer

Enhances user prompts with semantic context before sending them to the LLM:

```javascript
class PromptEnhancer {
  enhance(userInput, context) {
    return {
      systemPrompt: this.buildSystemPrompt(context),
      userPrompt: userInput,
      contextMessages: this.getRelevantContext(userInput, context),
      styleHints: this.getStyleHints(context)
    };
  }
}
```

**Enhancement Process:**

```
User Input: "Tell me more about neural networks"
        │
        ▼
┌─────────────────────────────────────────────────────────────┐
│                        PromptEnhancer                       │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  1. Semantic Analysis                                       │
│     ├── Encode input to hypercomplex state                  │
│     ├── Compute topic relevance scores                      │
│     └── Identify key concepts                               │
│                                                             │
│  2. Context Retrieval                                       │
│     ├── Immediate: Last N relevant exchanges                │
│     ├── Session: Semantically similar past discussions      │
│     └── Persistent: Related concepts from knowledge graph   │
│                                                             │
│  3. Style Adaptation                                        │
│     ├── Match response length preference                    │
│     ├── Technical level adjustment                          │
│     └── Tone alignment                                      │
│                                                             │
│  4. Prompt Construction                                     │
│     └── System + Context + User → Enhanced Messages         │
│                                                             │
└─────────────────────────────────────────────────────────────┘
        │
        ▼
Enhanced Messages: [
  { role: "system", content: "You are a helpful assistant..." },
  { role: "assistant", content: "Previously discussed: ML basics..." },
  { role: "user", content: "Tell me more about neural networks" }
]
```
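
Step 4 (prompt construction) amounts to flattening the enhancer's output into an OpenAI-style message array. A sketch, with the field names taken from the `enhance()` return shape above:

```javascript
// Flatten a PromptEnhancer result into a messages array for the LLM.
// Context snippets are injected as assistant-role messages so the
// model treats them as prior conversation rather than instructions.
function toMessages(enhanced) {
  const messages = [{ role: 'system', content: enhanced.systemPrompt }];
  for (const snippet of enhanced.contextMessages) {
    messages.push({ role: 'assistant', content: snippet });
  }
  messages.push({ role: 'user', content: enhanced.userPrompt });
  return messages;
}
```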

### 5. ResponseProcessor

Post-processes LLM responses to extract learning opportunities:

```javascript
class ResponseProcessor {
  process(response, userInput, context) {
    // Extract new vocabulary
    const newWords = this.extractNewVocabulary(response);

    // Verify semantic coherence
    const coherence = this.checkCoherence(response, userInput);

    // Extract concepts for graph
    const concepts = this.extractConcepts(response);

    return {
      response,
      newWords,
      coherence,
      concepts,
      shouldLearn: coherence.score > 0.6
    };
  }
}
```
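
`checkCoherence` is left abstract above; one plausible realization compares the embeddings of the response and the input by cosine similarity. A sketch over plain coefficient arrays (in the real system the backend would supply the hypercomplex states):

```javascript
// Cosine similarity between two embedding coefficient vectors;
// a stand-in for comparing two hypercomplex states.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  const denom = Math.sqrt(na) * Math.sqrt(nb);
  return denom === 0 ? 0 : dot / denom;
}

// Coherence result in the shape process() expects: { score }.
function checkCoherence(responseVec, inputVec) {
  return { score: cosine(responseVec, inputVec) };
}
```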

## Data Flow

### Conversation Turn Flow

```
┌─────────────────────────────────────────────────────────────────────────────┐
│                           Conversation Turn Flow                            │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  1. USER INPUT                                                              │
│        │                                                                    │
│        ▼                                                                    │
│  2. PRE-PROCESSING                                                          │
│     ├── Encode to hypercomplex state                                        │
│     ├── Extract keywords for vocab check                                    │
│     ├── Compute topic embedding                                             │
│     └── Check for new vocabulary                                            │
│        │                                                                    │
│        ▼                                                                    │
│  3. CONTEXT RETRIEVAL                                                       │
│     ├── Get immediate buffer                                                │
│     ├── Semantic search session memory                                      │
│     └── Query concept graph for relevant knowledge                          │
│        │                                                                    │
│        ▼                                                                    │
│  4. PROMPT ENHANCEMENT                                                      │
│     ├── Build system prompt with style hints                                │
│     ├── Inject relevant context                                             │
│     └── Add semantic grounding                                              │
│        │                                                                    │
│        ▼                                                                    │
│  5. LLM GENERATION                                                          │
│     ├── Send to LMStudio                                                    │
│     └── Stream response tokens                                              │
│        │                                                                    │
│        ▼                                                                    │
│  6. POST-PROCESSING                                                         │
│     ├── Extract new vocabulary                                              │
│     ├── Verify semantic coherence                                           │
│     ├── Extract concepts                                                    │
│     └── Update style profile                                                │
│        │                                                                    │
│        ▼                                                                    │
│  7. LEARNING                                                                │
│     ├── Add new words to vocabulary                                         │
│     ├── Update topic tracker                                                │
│     ├── Update concept graph                                                │
│     └── Refine style profile                                                │
│        │                                                                    │
│        ▼                                                                    │
│  8. MEMORY UPDATE                                                           │
│     ├── Add exchange to immediate buffer                                    │
│     ├── Index in session memory                                             │
│     └── Persist notable learnings                                           │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
```
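
The eight steps above can be wired together in a single async turn function. Every collaborator here is one of the components defined earlier; the exact method names (`analyze`, `retrieve`, `record`, `learn`) are assumptions:

```javascript
// One conversation turn, wiring the pipeline stages together.
// `deps` bundles the components from the architecture diagram.
async function runTurn(userInput, deps) {
  const { core, memory, enhancer, llm, processor } = deps;

  // 2. Pre-processing: semantic analysis of the input.
  const analysis = core.analyze(userInput);

  // 3-4. Context retrieval and prompt enhancement.
  const context = memory.retrieve(userInput, analysis);
  const enhanced = enhancer.enhance(userInput, context);

  // 5. LLM generation.
  const response = await llm.chat(enhanced.messages);

  // 6-7. Post-processing and learning.
  const result = processor.process(response, userInput, context);
  if (result.shouldLearn) core.learn(result);

  // 8. Memory update.
  memory.record(userInput, response, result);
  return response;
}
```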

## Transparent Learning

### Vocabulary Learning

The system automatically detects and learns new words:

```javascript
class VocabularyManager {
  learn(word) {
    if (this.isKnown(word)) return;

    // Generate prime encoding
    const primes = this.backend.encode(word);

    // Create hypercomplex embedding
    const embedding = this.backend.textToOrderedState(word);

    // Store with metadata
    this.vocabulary.set(word, {
      primes,
      embedding,
      firstSeen: Date.now(),
      frequency: 1,
      contexts: []
    });

    console.log(`📚 Learned new word: "${word}"`);
  }
}
```
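
The detection side can be a simple tokenize-and-filter pass over new text; a sketch, where the tokenization rule is an illustrative assumption (a real pass would handle stop words and stemming):

```javascript
// Naive new-word detection: lowercase word tokens not yet in the
// known-vocabulary set, deduplicated in order of first appearance.
function extractNewVocabulary(text, known) {
  const tokens = text.toLowerCase().match(/[a-z][a-z'-]*/g) ?? [];
  const fresh = new Set();
  for (const t of tokens) {
    if (!known.has(t)) fresh.add(t);
  }
  return [...fresh];
}
```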

### Style Profiling

The system builds a profile of the user's communication style:

```javascript
class StyleProfiler {
  updateStyle(userText) {
    const embedding = this.backend.textToOrderedState(userText);

    // Exponential moving average of the style vector
    const alpha = 0.1; // Learning rate
    for (let i = 0; i < this.styleVector.length; i++) {
      this.styleVector[i] = (1 - alpha) * this.styleVector[i] + alpha * embedding.c[i];
    }

    // Update style metrics
    this.metrics.avgLength = this.updateAvg(this.metrics.avgLength, userText.length);
    this.metrics.technicalLevel = this.estimateTechnicalLevel(userText);
    this.metrics.formalityScore = this.estimateFormality(userText);
  }
}
```
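
`updateAvg` and the estimators are left abstract above. Rough stand-ins might look like the following; the formality heuristic is an illustrative assumption, not a TinyAleph API:

```javascript
// Exponential moving average, same decay idea as the style vector.
function updateAvg(prev, value, alpha = 0.1) {
  return (1 - alpha) * prev + alpha * value;
}

// Crude formality heuristic: penalize contractions and slang,
// reward longer words. Clamped to [0, 1].
function estimateFormality(text) {
  const words = text.split(/\s+/).filter(Boolean);
  if (words.length === 0) return 0.5;
  const informal = words.filter(w => /('|\bgonna\b|\bwanna\b)/i.test(w)).length;
  const avgWordLen = words.reduce((s, w) => s + w.length, 0) / words.length;
  const score = 0.5 + 0.05 * (avgWordLen - 4) - informal / words.length;
  return Math.max(0, Math.min(1, score));
}
```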

### Concept Graph Updates

Extract and store concept relationships:

```javascript
class ConceptGraph {
  extractAndStore(text) {
    const concepts = this.extractConcepts(text);

    // Create embeddings for each concept
    for (const concept of concepts) {
      const embedding = this.backend.textToOrderedState(concept);
      this.nodes.set(concept, embedding);
    }

    // Infer relationships from proximity
    for (let i = 0; i < concepts.length - 1; i++) {
      this.addRelation(concepts[i], 'related_to', concepts[i + 1]);
    }
  }
}
```
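
`findRelated(concept)` can complement the explicit `related_to` edges with embedding similarity over the stored nodes. A sketch over plain coefficient vectors (the cosine helper and the `Map`-of-vectors storage shape are assumptions):

```javascript
// Rank other concepts by cosine similarity of their stored embeddings.
function findRelated(nodes, concept, k = 3) {
  const target = nodes.get(concept);
  if (!target) return [];
  const cos = (a, b) => {
    let d = 0, na = 0, nb = 0;
    for (let i = 0; i < a.length; i++) {
      d += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
    }
    const n = Math.sqrt(na) * Math.sqrt(nb);
    return n === 0 ? 0 : d / n;
  };
  return [...nodes]
    .filter(([name]) => name !== concept)
    .map(([name, vec]) => ({ name, score: cos(target, vec) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}
```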

## File Structure

```
aleph-chat/
├── index.js                 # Main entry point & CLI
├── lib/
│   ├── core.js              # AlephSemanticCore
│   ├── memory.js            # ContextMemory system
│   ├── lmstudio.js          # LMStudio API client
│   ├── enhancer.js          # PromptEnhancer
│   ├── processor.js         # ResponseProcessor
│   ├── vocabulary.js        # VocabularyManager
│   ├── style.js             # StyleProfiler
│   ├── topics.js            # TopicTracker
│   └── concepts.js          # ConceptGraph
├── data/
│   ├── vocabulary.json      # Learned vocabulary
│   ├── style-profile.json   # User style data
│   └── concepts.json        # Concept graph
└── README.md
```

## Example Usage Session

```
┌─────────────────────────────────────────────────────────────┐
│                      AlephChat Session                      │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  $ npm run chat                                             │
│                                                             │
│  ┌─────────────────────────────────────────────────────────┐│
│  │ 🌟 AlephChat v1.0                                       ││
│  │ Connected to LMStudio: mistral-7b-instruct              ││
│  │ Vocabulary: 1,247 words | Style: learning               ││
│  └─────────────────────────────────────────────────────────┘│
│                                                             │
│  You: What's the difference between ML and DL?              │
│  [📚 New term detected: "DL"]                               │
│  [🎯 Topic: machine learning, deep learning]                │
│                                                             │
│  Aleph: Machine Learning (ML) is the broader category...    │
│  [Coherence: 0.87 | Concepts: +3]                           │
│                                                             │
│  You: Can you explain backpropagation?                      │
│  [🔗 Context: previous ML/DL discussion]                    │
│  [📚 Learning: "backpropagation"]                           │
│                                                             │
│  Aleph: Building on our discussion of deep learning,        │
│  backpropagation is the algorithm that allows...            │
│  [Coherence: 0.92 | Topics: +neural networks]               │
│                                                             │
│  You: /status                                               │
│  ┌─────────────────────────────────────────────────────────┐│
│  │ Session Stats:                                          ││
│  │   Exchanges: 2                                          ││
│  │   New words learned: 2 (DL, backpropagation)            ││
│  │   Topics: ML, DL, neural networks                       ││
│  │   Style confidence: 43%                                 ││
│  │   Avg coherence: 0.895                                  ││
│  └─────────────────────────────────────────────────────────┘│
│                                                             │
│  You: /save                                                 │
│  💾 Session saved to data/                                  │
│                                                             │
│  You: /quit                                                 │
│  👋 Goodbye! Vocabulary updated with 2 new words.           │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```

## Special Commands

| Command | Description |
|---------|-------------|
| `/status` | Show session statistics |
| `/topics` | List current conversation topics |
| `/vocab` | Show recently learned vocabulary |
| `/style` | Display user style profile |
| `/concepts` | Explore concept graph |
| `/forget <word>` | Remove word from vocabulary |
| `/save` | Persist current session |
| `/load` | Load previous session |
| `/clear` | Clear immediate context |
| `/quit` | Exit and save |
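
Slash commands can be routed by a small dispatcher before input reaches the LLM; a sketch (the handler names are assumptions):

```javascript
// Route "/command arg..." inputs to handlers; return null for plain chat.
function dispatchCommand(input, handlers) {
  if (!input.startsWith('/')) return null;
  const [name, ...args] = input.slice(1).split(/\s+/);
  const handler = handlers[name];
  if (!handler) return { ok: false, error: `Unknown command: /${name}` };
  return { ok: true, result: handler(...args) };
}
```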

## Configuration

```javascript
// aleph-chat.config.js
module.exports = {
  lmstudio: {
    baseUrl: 'http://localhost:1234/v1',
    model: 'local-model',
    temperature: 0.7,
    maxTokens: 2048
  },
  aleph: {
    dimension: 16,
    learningRate: 0.1,
    coherenceThreshold: 0.6
  },
  memory: {
    immediateSize: 10,
    sessionLimit: 100,
    persistPath: './data'
  },
  ui: {
    showCoherence: true,
    showTopics: true,
    showLearning: true,
    colorOutput: true
  }
};
```
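
Loading a user config on top of these defaults can be a shallow per-section merge; a sketch:

```javascript
// Merge a user config over defaults, one level deep per section.
function mergeConfig(defaults, overrides = {}) {
  const merged = {};
  for (const section of Object.keys(defaults)) {
    merged[section] = { ...defaults[section], ...(overrides[section] ?? {}) };
  }
  return merged;
}
```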

## Key Design Principles

1. **Semantic Grounding** - All text is encoded into hypercomplex space via TinyAleph's prime-based encoding, providing deterministic semantic signatures

2. **Transparent Learning** - The system visibly learns new vocabulary and style preferences, building a persistent profile over time

3. **Multi-Tier Memory** - An immediate buffer for context injection, session memory for semantic search, and persistent storage for cross-session continuity

4. **Hybrid Architecture** - Combines TinyAleph's symbolic/mathematical processing with the LLM's neural generation for enhanced coherence

5. **Local-First** - Uses LMStudio for on-device inference, keeping conversations private and enabling offline operation

## Implementation Priorities

1. **Phase 1**: Core infrastructure
   - LMStudio client with streaming
   - Basic AlephSemanticCore
   - Immediate context buffer

2. **Phase 2**: Learning systems
   - VocabularyManager
   - StyleProfiler
   - Session memory

3. **Phase 3**: Advanced features
   - ConceptGraph
   - Persistent storage
   - Semantic search

4. **Phase 4**: Polish
   - Rich CLI interface
   - Commands and status
   - Error handling