cognitive-engine 0.2.0 → 0.2.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +166 -223
  2. package/package.json +4 -3
package/README.md CHANGED
@@ -1,10 +1,14 @@
  # cognitive-engine
 
+ [![npm version](https://img.shields.io/npm/v/cognitive-engine.svg)](https://www.npmjs.com/package/cognitive-engine)
+ [![license](https://img.shields.io/npm/l/cognitive-engine.svg)](https://github.com/medonomator/cognitive-engine/blob/main/LICENSE)
+ [![TypeScript](https://img.shields.io/badge/TypeScript-strict-blue.svg)](https://www.typescriptlang.org/)
+
  > Not just memory. A mind.
 
- Pure TypeScript framework for building AI agents with real cognitive capabilities — perception, episodic memory, BDI reasoning, Thompson Sampling, and adaptive personalization.
+ Pure TypeScript library for building AI agents with real cognitive capabilities — perception, memory, reasoning, emotions, social awareness, and adaptive learning.
 
- **Provider-agnostic**: works with any LLM (OpenAI, Anthropic, local models) and any storage backend.
+ **Provider-agnostic**: works with any LLM and any storage backend via simple interfaces.
 
  ## Install
 
@@ -12,315 +16,254 @@ Pure TypeScript framework for building AI agents with real cognitive capabilitie
  npm install cognitive-engine
  ```
 
- ## What It Does
+ Or use individual packages:
 
- Most AI frameworks just wrap API calls. Cognitive Engine gives your agent actual cognitive abilities:
+ ```bash
+ npm install @cognitive-engine/perception @cognitive-engine/bandit
+ ```
 
- - **Perception** — Understands user messages beyond keywords: emotions, urgency, implicit needs, conversation phase
- - **Episodic Memory** — Remembers past interactions with semantic search, importance scoring, and natural forgetting
- - **BDI Reasoning** — Beliefs-Desires-Intentions architecture for deciding *what* to do and *why*
- - **Adaptive Learning** — Thompson Sampling bandit that learns which response strategies work best per user
+ ## What It Does
+
+ Most AI libraries just wrap API calls. Cognitive Engine gives your agent actual cognitive abilities:
+
+ | Module | What it does |
+ |--------|-------------|
+ | **Perception** | Dual-mode message analysis — emotions, urgency, intent, entities |
+ | **Reasoning** | BDI (Beliefs-Desires-Intentions) with Bayesian belief updates |
+ | **Episodic Memory** | Store & recall interactions with semantic search and natural forgetting |
+ | **Semantic Memory** | Knowledge graph of facts with confidence tracking |
+ | **Emotional Model** | VAD (Valence-Arousal-Dominance) tracking, volatility detection |
+ | **Social Model** | Rapport, boundaries, communication preferences |
+ | **Mind** | Self-reflection, relationship tracking, open loops |
+ | **Temporal** | Behavior patterns, causal chains, predictions |
+ | **Planning** | Goal decomposition and plan tracking |
+ | **Metacognition** | Self-assessment, contradiction detection, strategy selection |
+ | **Bandit** | Thompson Sampling — learns what works per user |
+ | **Orchestrator** | Composes all modules into a single `process()` call |
 
  ## Quick Start
 
+ ### Full orchestrator (all modules)
+
  ```typescript
  import {
+ CognitiveOrchestrator,
  OpenAiLlmProvider,
  OpenAiEmbeddingProvider,
- PerceptionService,
- Reasoner,
- EpisodicMemory,
- EpisodeExtractor,
- ThompsonBandit,
- MemoryBanditStorage,
  MemoryStore,
  } from 'cognitive-engine'
 
- // 1. Set up providers
- const llm = new OpenAiLlmProvider({
- apiKey: process.env.OPENAI_API_KEY,
- model: 'gpt-4o-mini',
+ const engine = new CognitiveOrchestrator({
+ llm: new OpenAiLlmProvider({ apiKey: process.env.OPENAI_API_KEY, model: 'gpt-4o-mini' }),
+ embedding: new OpenAiEmbeddingProvider({ apiKey: process.env.OPENAI_API_KEY }),
+ store: new MemoryStore(),
  })
- const embedding = new OpenAiEmbeddingProvider({
- apiKey: process.env.OPENAI_API_KEY,
- })
- const store = new MemoryStore()
 
- // 2. Create cognitive modules
- const perception = new PerceptionService(llm)
- const reasoner = new Reasoner()
- const memory = new EpisodicMemory(store, embedding)
- const extractor = new EpisodeExtractor(llm, embedding)
- const bandit = new ThompsonBandit(new MemoryBanditStorage())
+ const result = await engine.process('user-123', 'I feel stuck on this project')
+
+ console.log(result.percept.emotionalTone) // 'frustrated'
+ console.log(result.reasoning.intentions[0].type) // 'empathize'
+ console.log(result.suggestedResponse) // AI-generated empathetic response
  ```
 
- ## Usage Examples
+ ### Selective modules
 
- ### Perception — Understand User Messages
+ ```typescript
+ const engine = new CognitiveOrchestrator({
+ llm, embedding, store,
+ modules: {
+ memory: true,
+ emotional: true,
+ // everything else disabled — zero overhead
+ },
+ })
+ ```
 
- Dual-mode analysis: fast regex for simple messages, deep LLM analysis for complex ones.
+ ### Individual modules (no orchestrator)
 
  ```typescript
- const { percept, beliefCandidates } = await perception.perceive(
- "I've been stressed about the project deadline, my manager keeps adding tasks"
- )
+ import { PerceptionService } from 'cognitive-engine'
 
- console.log(percept.emotionalTone) // 'anxious'
- console.log(percept.urgency) // 7
- console.log(percept.responseMode) // 'listening'
- console.log(percept.implicitNeeds) // ['emotional_support', 'validation']
- console.log(percept.entities) // [{ type: 'person', value: 'manager' }]
- console.log(percept.conversationPhase) // 'sharing'
+ const perception = new PerceptionService(llm)
+ const { percept } = await perception.perceive('Can you help me fix this bug?')
+ console.log(percept.requestType) // 'question'
+ console.log(percept.urgency) // 4
  ```
 
- Quick analysis (no LLM call, instant):
+ ## Module Examples
+
+ ### Perception — Understand Messages
 
  ```typescript
- import { quickAnalyze } from 'cognitive-engine'
+ const { percept, beliefCandidates } = await perception.perceive(
+ "I've been stressed about the deadline, my manager keeps adding tasks"
+ )
 
- const quick = quickAnalyze("Can you help me fix this bug?")
- console.log(quick.requestTypes) // ['question', 'help']
- console.log(quick.urgency) // 4
+ percept.emotionalTone // 'anxious'
+ percept.urgency // 7
+ percept.responseMode // 'listening'
+ percept.implicitNeeds // ['emotional_support', 'validation']
+ percept.entities // [{ type: 'person', value: 'manager' }]
  ```
 
  ### Reasoning — Decide What To Do
 
- BDI (Beliefs-Desires-Intentions) reasoning with Bayesian belief updates.
-
  ```typescript
- // Feed perception results into the world model
- for (const candidate of beliefCandidates) {
- reasoner.worldModel.addBelief(candidate, 'observed')
- }
-
- // Reason about the situation
  const result = reasoner.reason(percept)
 
- console.log(result.intentions)
- // [
- // { type: 'empathize', priority: 10, reason: 'User is stressed, listening mode' },
- // { type: 'explore', priority: 5, reason: 'Understand workload situation' }
- // ]
-
- console.log(result.state.beliefs)
+ result.intentions
  // [
- // { subject: 'user', predicate: 'feels', object: 'stressed', confidence: 0.85 },
- // { subject: 'user', predicate: 'deals_with', object: 'work_pressure', confidence: 0.7 }
+ // { type: 'empathize', priority: 10, reason: 'User is stressed' },
+ // { type: 'explore', priority: 5, reason: 'Understand workload' }
  // ]
  ```
 
- World model maintains beliefs with confidence that updates over time:
+ ### Memory — Remember and Recall
 
  ```typescript
- const { worldModel } = reasoner
+ // Store episodes
+ const episode = await extractor.extract('user-123', message)
+ await memory.storeEpisode(episode)
 
- // Explicit statement → high confidence
- worldModel.addBelief(
- { subject: 'user', predicate: 'works_as', object: 'engineer', confidence: 0.9 },
- 'explicit'
- )
-
- // Repeated evidence strengthens beliefs
- worldModel.confirmBelief(beliefId) // confidence: 0.85 → 0.95
+ // Semantic search
+ const results = await memory.search({ userId: 'user-123', query: 'team collaboration' })
 
- // Contradicting evidence weakens them
- worldModel.weakenBelief(beliefId) // confidence: 0.6 → 0.45
-
- // Inferred beliefs decay faster than explicit ones
- worldModel.applyDecay()
+ // Build context for response
+ const context = await memory.getContext('user-123', 'How is the project going?')
  ```
 
- ### Episodic Memory — Remember and Recall
-
- Store personal episodes with semantic search and natural forgetting.
+ ### Bandit — Learn What Works
 
  ```typescript
- // Extract episodes from user messages automatically
- const episode = await extractor.extract(
- 'user-123',
- 'Yesterday I had a great meeting with the team, we finally agreed on the architecture'
- )
-
- if (episode) {
- console.log(episode.summary) // 'Productive team meeting about architecture'
- console.log(episode.emotions) // ['satisfaction', 'relief']
- console.log(episode.importance) // 0.7
- console.log(episode.category) // 'work'
-
- await memory.storeEpisode(episode)
- }
-
- // Semantic search — find relevant memories
- const results = await memory.search({
- userId: 'user-123',
- query: 'team collaboration',
- limit: 5,
- })
+ const bandit = new ThompsonBandit(new MemoryBanditStorage())
 
- for (const result of results) {
- console.log(result.episode.summary)
- console.log(result.relevanceScore) // semantic similarity
- console.log(result.recencyScore) // time decay
- console.log(result.combinedScore) // weighted combination
- }
+ // Select best strategy for this context
+ const choice = await bandit.select(contextVector, ['empathetic', 'actionable', 'curious'])
+ // choice.action = 'empathetic', choice.expectedReward = 0.73
 
- // Build context for response generation
- const context = await memory.getContext('user-123', 'How is the project going?')
- console.log(context.recentEpisodes) // last 5 episodes
- console.log(context.relevantEpisodes) // semantically related
- console.log(context.emotionalPattern) // 'positive (satisfaction)'
-
- // Consolidation — forget old unimportant memories
- const consolidated = await memory.consolidate('user-123')
- console.log(consolidated.decayedCount) // importance reduced
- console.log(consolidated.deletedCount) // forgotten
- console.log(consolidated.remainingCount) // still remembered
+ // After user feedback, update
+ await bandit.update(choice.action, contextVector, 1.0)
+ // Over time: learns per-context preferences
  ```
 
- ### Adaptive Learning — Thompson Sampling Bandit
-
- Learn which response strategies work best for each user context.
+ ### Events — React to Cognitive Activity
 
  ```typescript
- // Initialize response strategies with context dimensions
- const contextDim = 3 // e.g., [urgency, emotionIntensity, messageLength]
- await bandit.initAction('empathetic', contextDim)
- await bandit.initAction('actionable', contextDim)
- await bandit.initAction('curious', contextDim)
-
- // Select best strategy based on current context
- const context = [0.8, 0.6, 0.3] // high urgency, medium emotion, short message
- const choice = await bandit.select(context, ['empathetic', 'actionable', 'curious'])
- console.log(choice.action) // 'empathetic'
- console.log(choice.expectedReward) // 0.73
-
- // After getting user feedback, update the model
- await bandit.update(choice.action, context, 1.0) // positive feedback
-
- // Over time, the bandit learns per-context preferences
- // High urgency + high emotion → empathetic works best
- // Low urgency + question → actionable works best
+ import { CognitiveEventEmitter, CognitiveOrchestrator } from 'cognitive-engine'
+
+ const events = new CognitiveEventEmitter()
+ events.on('perception:complete', (percept) => {
+ analytics.track('perception', { tone: percept.emotionalTone })
+ })
+ events.on('episode:created', (episode) => {
+ console.log('Remembered:', episode.summary)
+ })
+
+ const engine = new CognitiveOrchestrator({ llm, embedding, store, events })
  ```
 
- ### Custom Providers — Bring Your Own LLM/Storage
+ ## Custom Providers
 
- Implement the interfaces to use any LLM or storage backend:
+ Implement interfaces to use any LLM or storage:
 
  ```typescript
  import type { LlmProvider, Store, EmbeddingProvider } from 'cognitive-engine'
 
- // Custom LLM (e.g., Anthropic, Ollama, etc.)
+ // Your LLM (Anthropic, Ollama, Gemini, etc.)
  class MyLlmProvider implements LlmProvider {
  async complete(messages, options?) {
- // Call your LLM API
- return { content: '...', usage: { ... }, finishReason: 'stop' }
+ return { content: '...', usage: { promptTokens: 0, completionTokens: 0 }, finishReason: 'stop' }
  }
-
  async completeJson(messages, options?) {
- // Call your LLM API with JSON mode
  const response = await this.complete(messages, options)
  return { ...response, parsed: JSON.parse(response.content) }
  }
  }
 
- // Custom Store (e.g., PostgreSQL, Redis, MongoDB)
+ // Your Store (PostgreSQL, Redis, MongoDB, etc.)
  class PostgresStore implements Store {
  async get(collection, id) { /* SELECT ... */ }
  async set(collection, id, data) { /* INSERT/UPDATE ... */ }
  async delete(collection, id) { /* DELETE ... */ }
  async find(collection, filter) { /* SELECT ... WHERE ... */ }
  async upsert(collection, id, data) { /* INSERT ... ON CONFLICT ... */ }
- async vectorSearch(collection, vector, options) { /* pgvector search */ }
- }
-
- // Custom Embedding Provider
- class MyEmbeddingProvider implements EmbeddingProvider {
- async embed(text) { return [0.1, 0.2, ...] }
- async embedBatch(texts) { return texts.map(t => [0.1, ...]) }
+ // Optional: vector search with pgvector
+ async vectorSearch(collection, vector, options) { /* ORDER BY embedding <-> $1 */ }
  }
  ```
 
- ### Pipeline — Composable Processing
-
- Chain processing steps with type-safe pipelines:
-
- ```typescript
- import { Pipeline } from 'cognitive-engine'
-
- const pipeline = new Pipeline<string, string>()
- .pipe(async (input) => input.toLowerCase())
- .pipe(async (input) => input.trim())
- .pipe(async (input) => `processed: ${input}`)
-
- const result = await pipeline.execute(' Hello World ')
- // 'processed: hello world'
- ```
-
- ### Math Utilities
-
- Battle-tested math functions used internally, available for your own use:
-
- ```typescript
- import {
- cosineSimilarity,
- exponentialDecay,
- sampleDiagonalMVN,
- l2Normalize,
- } from 'cognitive-engine'
-
- // Vector similarity
- const sim = cosineSimilarity([1, 2, 3], [2, 4, 6]) // 1.0
-
- // Time-based decay (for memory, belief confidence)
- const weight = exponentialDecay(daysSinceEvent, decayRate) // 0.0–1.0
-
- // Thompson Sampling (diagonal MVN — O(n) per sample)
- const sample = sampleDiagonalMVN(mean, variance) // [0.3, 0.7, ...]
-
- // Normalize vectors for cosine similarity
- const normalized = l2Normalize([3, 4]) // [0.6, 0.8]
- ```
-
  ## Architecture
285
196
 
286
197
  ```
287
198
  User Message
288
199
 
289
200
 
290
- ┌─────────────┐
291
- │ Perception │ Dual-mode: regex (fast) + LLM (deep)
292
- │ → Percept │ Emotion, intent, entities, implicit needs
293
- └──────┬──────┘
201
+ ┌──────────────┐
202
+ │ Perception │ Dual-mode: regex (fast) + LLM (deep)
203
+ └──────┬───────┘
294
204
 
295
-
296
- ┌─────────────┐ ┌──────────────┐
297
- │ Reasoning │◄────│ World Model │ Bayesian belief updates
298
- Intentions (Beliefs) │ Confidence decay
299
- └──────┬──────┘ └──────────────┘
300
-
301
- ├──────────────────────────┐
302
- ▼ ▼
303
- ┌─────────────┐ ┌──────────────┐
304
- │ Memory │ │ Bandit │
305
- (Episodes) (Thompson) │
306
- Semantic │ Adaptive │
307
- │ search + │ │ O(n) diag │
308
- decay │ │ covariance │
309
- └─────────────┘ └──────────────┘
205
+ ┌────┴────┐
206
+ ▼ ▼
207
+ ┌────┐ ┌────────┐
208
+ Memory Reason Parallel execution
209
+ │ (episodic│ (BDI) │
210
+ +semantic│ │
211
+ └────┬─────┘────┬───┘
212
+ │ │
213
+
214
+ ┌─────────────────────────────────────┐
215
+ Mind / Emotional / Social / Plan Parallel
216
+ Temporal / Bandit
217
+ └──────────────┬──────────────────────┘
218
+
219
+
220
+ ┌──────────────────────┐
221
+ │ Metacognition │ Self-assessment
222
+ │ → Strategy selection│
223
+ └──────────┬───────────┘
224
+
225
+
226
+ ┌──────────────────────┐
227
+ │ Response Generation │ System prompt + LLM
228
+ └──────────────────────┘
310
229
  ```
311
230
 
312
- ## Key Design Decisions
313
-
314
- - **Pure TypeScript** no framework lock-in (NestJS, Express, etc.). Use anywhere.
315
- - **Provider-agnostic** — swap LLM, embedding, or storage via simple interfaces.
316
- - **Math-first** real algorithms (Thompson Sampling, Bayesian updates, cosine similarity), not just API wrappers.
317
- - **Strict types** — `strict: true`, `noUncheckedIndexedAccess`, zero `any` casts.
318
- - **168 tests**every module tested, including convergence tests for bandit algorithms.
231
+ ## Packages
232
+
233
+ All packages work standalone. Use only what you need.
234
+
235
+ | Package | Description |
236
+ |---------|-------------|
237
+ | `cognitive-engine` | Umbrellare-exports everything |
238
+ | `@cognitive-engine/core` | Types, interfaces, event system |
239
+ | `@cognitive-engine/math` | Vector ops, statistics, sampling |
240
+ | `@cognitive-engine/perception` | Message analysis |
241
+ | `@cognitive-engine/reasoning` | BDI inference engine |
242
+ | `@cognitive-engine/memory` | Episodic + semantic memory |
243
+ | `@cognitive-engine/emotional` | VAD emotional model |
244
+ | `@cognitive-engine/social` | Rapport, boundaries, preferences |
245
+ | `@cognitive-engine/mind` | Reflection, relationships, open loops |
246
+ | `@cognitive-engine/temporal` | Patterns, causal chains, predictions |
247
+ | `@cognitive-engine/planning` | Goal decomposition |
248
+ | `@cognitive-engine/metacognition` | Self-assessment |
249
+ | `@cognitive-engine/bandit` | Thompson Sampling |
250
+ | `@cognitive-engine/orchestrator` | Full cognitive pipeline |
251
+ | `@cognitive-engine/store-memory` | In-memory store (dev/test) |
252
+ | `@cognitive-engine/provider-openai` | OpenAI LLM + embeddings |
253
+
254
+ ## Design Principles
255
+
256
+ - **Library, not framework** — you call it, it doesn't call you. Compose freely.
257
+ - **Provider-agnostic** — swap LLM, embeddings, or storage via interfaces.
258
+ - **Each module works standalone** — no hidden coupling between packages.
259
+ - **Math-first** — real algorithms (Thompson Sampling, Bayesian updates, VAD model), not API wrappers.
260
+ - **Strict TypeScript** — `strict: true`, zero `any` casts, all interfaces extracted.
261
+ - **315+ tests** — every module tested, including convergence tests for bandit.
319
262
 
320
263
  ## Requirements
321
264
 
322
- - Node.js 20
323
- - TypeScript 5.0 (for consumers using TypeScript)
265
+ - Node.js >= 20
266
+ - TypeScript >= 5.0
324
267
 
325
268
  ## License
326
269
 
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "name": "cognitive-engine",
- "version": "0.2.0",
- "description": "TypeScript framework for building AI agents with cognitive capabilities — perception, memory, reasoning, and adaptive learning",
+ "version": "0.2.1",
+ "description": "TypeScript library for building AI agents with cognitive capabilities — perception, memory, reasoning, emotions, and adaptive learning",
  "type": "module",
  "main": "./dist/index.js",
  "types": "./dist/index.d.ts",
@@ -12,7 +12,8 @@
  }
  },
  "files": [
- "dist"
+ "dist",
+ "README.md"
  ],
  "scripts": {
  "build": "tsc",