cognitive-engine 0.1.0 → 0.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +327 -0
  2. package/package.json +1 -1
package/README.md ADDED
@@ -0,0 +1,327 @@
# cognitive-engine

> Not just memory. A mind.

Pure TypeScript framework for building AI agents with real cognitive capabilities — perception, episodic memory, BDI reasoning, Thompson Sampling, and adaptive personalization.

**Provider-agnostic**: works with any LLM (OpenAI, Anthropic, local models) and any storage backend.

## Install

```bash
npm install cognitive-engine
```

## What It Does

Most AI frameworks just wrap API calls. Cognitive Engine gives your agent actual cognitive abilities:

- **Perception** — Understands user messages beyond keywords: emotions, urgency, implicit needs, conversation phase
- **Episodic Memory** — Remembers past interactions with semantic search, importance scoring, and natural forgetting
- **BDI Reasoning** — Beliefs-Desires-Intentions architecture for deciding *what* to do and *why*
- **Adaptive Learning** — Thompson Sampling bandit that learns which response strategies work best per user

## Quick Start

```typescript
import {
  OpenAiLlmProvider,
  OpenAiEmbeddingProvider,
  PerceptionService,
  Reasoner,
  EpisodicMemory,
  EpisodeExtractor,
  ThompsonBandit,
  MemoryBanditStorage,
  MemoryStore,
} from 'cognitive-engine'

// 1. Set up providers
const llm = new OpenAiLlmProvider({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4o-mini',
})
const embedding = new OpenAiEmbeddingProvider({
  apiKey: process.env.OPENAI_API_KEY,
})
const store = new MemoryStore()

// 2. Create cognitive modules
const perception = new PerceptionService(llm)
const reasoner = new Reasoner()
const memory = new EpisodicMemory(store, embedding)
const extractor = new EpisodeExtractor(llm, embedding)
const bandit = new ThompsonBandit(new MemoryBanditStorage())
```

## Usage Examples

### Perception — Understand User Messages

Dual-mode analysis: fast regex for simple messages, deep LLM analysis for complex ones.

```typescript
const { percept, beliefCandidates } = await perception.perceive(
  "I've been stressed about the project deadline, my manager keeps adding tasks"
)

console.log(percept.emotionalTone)     // 'anxious'
console.log(percept.urgency)           // 7
console.log(percept.responseMode)      // 'listening'
console.log(percept.implicitNeeds)     // ['emotional_support', 'validation']
console.log(percept.entities)          // [{ type: 'person', value: 'manager' }]
console.log(percept.conversationPhase) // 'sharing'
```

Quick analysis (no LLM call, instant):

```typescript
import { quickAnalyze } from 'cognitive-engine'

const quick = quickAnalyze("Can you help me fix this bug?")
console.log(quick.requestTypes) // ['question', 'help']
console.log(quick.urgency)      // 4
```

### Reasoning — Decide What To Do

BDI (Beliefs-Desires-Intentions) reasoning with Bayesian belief updates.

```typescript
// Feed perception results into the world model
for (const candidate of beliefCandidates) {
  reasoner.worldModel.addBelief(candidate, 'observed')
}

// Reason about the situation
const result = reasoner.reason(percept)

console.log(result.intentions)
// [
//   { type: 'empathize', priority: 10, reason: 'User is stressed, listening mode' },
//   { type: 'explore', priority: 5, reason: 'Understand workload situation' }
// ]

console.log(result.state.beliefs)
// [
//   { subject: 'user', predicate: 'feels', object: 'stressed', confidence: 0.85 },
//   { subject: 'user', predicate: 'deals_with', object: 'work_pressure', confidence: 0.7 }
// ]
```

The world model maintains beliefs whose confidence updates over time:

```typescript
const { worldModel } = reasoner

// Explicit statement → high confidence
worldModel.addBelief(
  { subject: 'user', predicate: 'works_as', object: 'engineer', confidence: 0.9 },
  'explicit'
)

// Repeated evidence strengthens beliefs
worldModel.confirmBelief(beliefId) // confidence: 0.85 → 0.95

// Contradicting evidence weakens them
worldModel.weakenBelief(beliefId) // confidence: 0.6 → 0.45

// Inferred beliefs decay faster than explicit ones
worldModel.applyDecay()
```
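
The arrows in the comments above are illustrative. As a rough sketch of how such confidence dynamics can behave (the function names and formulas here are assumptions for illustration, not cognitive-engine's internal math):

```typescript
// Hypothetical sketch of belief-confidence dynamics; names and formulas are
// illustrative, not the library's actual implementation.
type BeliefSource = 'explicit' | 'observed' | 'inferred'

interface Belief {
  confidence: number // in [0, 1]
  source: BeliefSource
}

// Confirming evidence closes a fraction of the gap to full confidence.
function confirm(b: Belief, strength = 0.5): Belief {
  return { ...b, confidence: b.confidence + (1 - b.confidence) * strength }
}

// Contradicting evidence shrinks confidence multiplicatively.
function weaken(b: Belief, factor = 0.75): Belief {
  return { ...b, confidence: b.confidence * factor }
}

// Per-source decay rates: inferred beliefs fade faster than explicit ones.
const DECAY_PER_TICK: Record<BeliefSource, number> = {
  explicit: 0.99,
  observed: 0.97,
  inferred: 0.9,
}

function decay(b: Belief): Belief {
  return { ...b, confidence: b.confidence * DECAY_PER_TICK[b.source] }
}
```

Both update rules keep confidence inside [0, 1], so repeated confirmations converge toward certainty without overshooting.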

### Episodic Memory — Remember and Recall

Store personal episodes with semantic search and natural forgetting.

```typescript
// Extract episodes from user messages automatically
const episode = await extractor.extract(
  'user-123',
  'Yesterday I had a great meeting with the team, we finally agreed on the architecture'
)

if (episode) {
  console.log(episode.summary)    // 'Productive team meeting about architecture'
  console.log(episode.emotions)   // ['satisfaction', 'relief']
  console.log(episode.importance) // 0.7
  console.log(episode.category)   // 'work'

  await memory.storeEpisode(episode)
}

// Semantic search — find relevant memories
const results = await memory.search({
  userId: 'user-123',
  query: 'team collaboration',
  limit: 5,
})

for (const result of results) {
  console.log(result.episode.summary)
  console.log(result.relevanceScore) // semantic similarity
  console.log(result.recencyScore)   // time decay
  console.log(result.combinedScore)  // weighted combination
}

// Build context for response generation
const context = await memory.getContext('user-123', 'How is the project going?')
console.log(context.recentEpisodes)   // last 5 episodes
console.log(context.relevantEpisodes) // semantically related
console.log(context.emotionalPattern) // 'positive (satisfaction)'

// Consolidation — forget old unimportant memories
const consolidated = await memory.consolidate('user-123')
console.log(consolidated.decayedCount)   // importance reduced
console.log(consolidated.deletedCount)   // forgotten
console.log(consolidated.remainingCount) // still remembered
```
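
How relevance and recency combine into `combinedScore` is easiest to see with a small sketch (the half-life and weight values here are made-up illustration defaults, not the library's):

```typescript
// Illustrative ranking sketch: semantic relevance blended with a time-decay
// recency term. Parameter values are assumptions, not cognitive-engine's.
function recencyScore(daysSinceEpisode: number, halfLifeDays = 30): number {
  // 1.0 for an episode from today, 0.5 after one half-life, and so on.
  return Math.pow(0.5, daysSinceEpisode / halfLifeDays)
}

function combinedScore(
  relevance: number, // semantic similarity in [0, 1]
  recency: number,   // output of recencyScore
  relevanceWeight = 0.7,
): number {
  return relevanceWeight * relevance + (1 - relevanceWeight) * recency
}
```

With a weighting like this, a highly relevant but old episode can still outrank a fresh but off-topic one.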

### Adaptive Learning — Thompson Sampling Bandit

Learn which response strategies work best for each user context.

```typescript
// Initialize response strategies with context dimensions
const contextDim = 3 // e.g., [urgency, emotionIntensity, messageLength]
await bandit.initAction('empathetic', contextDim)
await bandit.initAction('actionable', contextDim)
await bandit.initAction('curious', contextDim)

// Select best strategy based on current context
const context = [0.8, 0.6, 0.3] // high urgency, medium emotion, short message
const choice = await bandit.select(context, ['empathetic', 'actionable', 'curious'])
console.log(choice.action)         // 'empathetic'
console.log(choice.expectedReward) // 0.73

// After getting user feedback, update the model
await bandit.update(choice.action, context, 1.0) // positive feedback

// Over time, the bandit learns per-context preferences:
// high urgency + high emotion → empathetic works best
// low urgency + question → actionable works best
```
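
The diagonal-covariance sampling behind this selection loop can be sketched in a few lines (an illustration of the technique, not the package's internal state layout):

```typescript
// Standard normal sample via the Box-Muller transform.
function gaussian(): number {
  const u = 1 - Math.random() // in (0, 1], so Math.log(u) is finite
  const v = Math.random()
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v)
}

// One draw from N(mean, diag(variance)): O(n), no matrix factorization needed.
function sampleDiagonalMvn(mean: number[], variance: number[]): number[] {
  return mean.map((m, i) => m + Math.sqrt(variance[i] ?? 0) * gaussian())
}

// Thompson selection: sample a weight vector per arm, score the context,
// and play the arm with the highest sampled score.
function thompsonSelect(
  context: number[],
  arms: Record<string, { mean: number[]; variance: number[] }>,
): string {
  let best = ''
  let bestScore = -Infinity
  for (const [name, arm] of Object.entries(arms)) {
    const w = sampleDiagonalMvn(arm.mean, arm.variance)
    const score = w.reduce((acc, wi, i) => acc + wi * (context[i] ?? 0), 0)
    if (score > bestScore) {
      bestScore = score
      best = name
    }
  }
  return best
}
```

Because each arm's posterior is sampled rather than only its mean, uncertain arms still get explored; as variance shrinks with feedback, selection converges to the best-performing strategy.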

### Custom Providers — Bring Your Own LLM/Storage

Implement the interfaces to use any LLM or storage backend:

```typescript
import type { LlmProvider, Store, EmbeddingProvider } from 'cognitive-engine'

// Custom LLM (e.g., Anthropic, Ollama, etc.)
class MyLlmProvider implements LlmProvider {
  async complete(messages, options?) {
    // Call your LLM API
    return { content: '...', usage: { ... }, finishReason: 'stop' }
  }

  async completeJson(messages, options?) {
    // Call your LLM API with JSON mode
    const response = await this.complete(messages, options)
    return { ...response, parsed: JSON.parse(response.content) }
  }
}

// Custom Store (e.g., PostgreSQL, Redis, MongoDB)
class PostgresStore implements Store {
  async get(collection, id) { /* SELECT ... */ }
  async set(collection, id, data) { /* INSERT/UPDATE ... */ }
  async delete(collection, id) { /* DELETE ... */ }
  async find(collection, filter) { /* SELECT ... WHERE ... */ }
  async upsert(collection, id, data) { /* INSERT ... ON CONFLICT ... */ }
  async vectorSearch(collection, vector, options) { /* pgvector search */ }
}

// Custom Embedding Provider
class MyEmbeddingProvider implements EmbeddingProvider {
  async embed(text) { return [0.1, 0.2, ...] }
  async embedBatch(texts) { return texts.map(t => [0.1, ...]) }
}
```

### Pipeline — Composable Processing

Chain processing steps with type-safe pipelines:

```typescript
import { Pipeline } from 'cognitive-engine'

const pipeline = new Pipeline<string, string>()
  .pipe(async (input) => input.toLowerCase())
  .pipe(async (input) => input.trim())
  .pipe(async (input) => `processed: ${input}`)

const result = await pipeline.execute(' Hello World ')
// 'processed: hello world'
```
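
A minimal version of this pattern fits in a handful of lines (a sketch of the idea; `Pipeline`'s real implementation may differ, so the class is named differently here):

```typescript
// Illustrative re-implementation of the composable-pipeline pattern.
type Step<I, O> = (input: I) => Promise<O>

class MiniPipeline<I, O> {
  // Steps are stored untyped internally; pipe() keeps the public types honest.
  constructor(private readonly steps: Array<Step<unknown, unknown>> = []) {}

  // Each pipe() returns a new pipeline whose output type is the step's output.
  pipe<N>(step: Step<O, N>): MiniPipeline<I, N> {
    return new MiniPipeline<I, N>([...this.steps, step as Step<unknown, unknown>])
  }

  // Run the steps in order, feeding each output into the next step's input.
  async execute(input: I): Promise<O> {
    let value: unknown = input
    for (const step of this.steps) value = await step(value)
    return value as O
  }
}
```

Returning a new pipeline from `pipe()` is what makes the chain type-safe: each step's output type becomes the next step's input type at compile time.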

### Math Utilities

Battle-tested math functions used internally, available for your own use:

```typescript
import {
  cosineSimilarity,
  exponentialDecay,
  sampleDiagonalMVN,
  l2Normalize,
} from 'cognitive-engine'

// Vector similarity
const sim = cosineSimilarity([1, 2, 3], [2, 4, 6]) // 1.0

// Time-based decay (for memory, belief confidence)
const weight = exponentialDecay(daysSinceEvent, decayRate) // 0.0–1.0

// Thompson Sampling (diagonal MVN — O(n) per sample)
const sample = sampleDiagonalMVN(mean, variance) // [0.3, 0.7, ...]

// Normalize vectors for cosine similarity
const normalized = l2Normalize([3, 4]) // [0.6, 0.8]
```
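
For intuition, here are reference-style sketches matching the documented outputs above (behavior inferred from the comments; the functions are named differently to avoid implying they are the package's actual exports):

```typescript
// Cosine similarity: dot product over the product of L2 norms.
function cosineSim(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    const x = a[i] ?? 0
    const y = b[i] ?? 0
    dot += x * y
    normA += x * x
    normB += y * y
  }
  const denom = Math.sqrt(normA) * Math.sqrt(normB)
  return denom === 0 ? 0 : dot / denom
}

// Scale a vector to unit L2 length.
function l2Norm(v: number[]): number[] {
  const norm = Math.sqrt(v.reduce((sum, x) => sum + x * x, 0))
  return norm === 0 ? v.map(() => 0) : v.map((x) => x / norm)
}

// Exponential decay: 1.0 at t = 0, falling toward 0 as t grows.
function expDecay(t: number, rate: number): number {
  return Math.exp(-rate * t)
}
```

Note that `cosineSim([1, 2, 3], [2, 4, 6])` is 1 because the vectors are parallel: cosine similarity measures direction, not magnitude.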

## Architecture

```
User Message
      │
      ▼
┌─────────────┐
│ Perception  │   Dual-mode: regex (fast) + LLM (deep)
│ → Percept   │   Emotion, intent, entities, implicit needs
└──────┬──────┘
       │
       ▼
┌─────────────┐     ┌──────────────┐
│ Reasoning   │◄────│ World Model  │   Bayesian belief updates
│ → Intentions│     │ (Beliefs)    │   Confidence decay
└──────┬──────┘     └──────────────┘
       │
       ├───────────────────┐
       ▼                   ▼
┌─────────────┐     ┌──────────────┐
│ Memory      │     │ Bandit       │
│ (Episodes)  │     │ (Thompson)   │
│ Semantic    │     │ Adaptive     │
│ search +    │     │ O(n) diag    │
│ decay       │     │ covariance   │
└─────────────┘     └──────────────┘
```

## Key Design Decisions

- **Pure TypeScript** — no framework lock-in (NestJS, Express, etc.). Use anywhere.
- **Provider-agnostic** — swap LLM, embedding, or storage via simple interfaces.
- **Math-first** — real algorithms (Thompson Sampling, Bayesian updates, cosine similarity), not just API wrappers.
- **Strict types** — `strict: true`, `noUncheckedIndexedAccess`, zero `any` casts.
- **168 tests** — every module tested, including convergence tests for bandit algorithms.

## Requirements

- Node.js ≥ 20
- TypeScript ≥ 5.0 (for consumers using TypeScript)

## License

[Apache-2.0](https://github.com/medonomator/cognitive-engine/blob/main/LICENSE) — Copyright 2026 Dmitry Zorin
package/package.json CHANGED

```diff
@@ -1,6 +1,6 @@
 {
   "name": "cognitive-engine",
-  "version": "0.1.0",
+  "version": "0.1.1",
   "description": "TypeScript framework for building AI agents with cognitive capabilities — perception, memory, reasoning, and adaptive learning",
   "type": "module",
   "main": "./dist/index.js",
```