@positronic/template-new-project 0.0.76 → 0.0.78

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -7,7 +7,7 @@ This guide explains how to test Positronic brains using the testing utilities in
  All test files should be placed in the `tests/` directory at the root of your project. This keeps tests separate from your brain implementations and prevents them from being deployed with your application.
 
  Test files should follow the naming convention `<brain-name>.test.ts`. For example:
- - Brain file: `brains/customer-support.ts`
+ - Brain file: `src/brains/customer-support.ts`
  - Test file: `tests/customer-support.test.ts`
 
  ## Testing Philosophy
@@ -25,7 +25,7 @@ Testing brains is about verifying they produce the correct outputs given specifi
 
  ```typescript
  import { createMockClient, runBrainTest } from '../tests/test-utils.js';
- import yourBrain from '../brains/your-brain.js';
+ import yourBrain from '../src/brains/your-brain.js';
 
  describe('your-brain', () => {
    it('should process user data and generate a report', async () => {
@@ -193,6 +193,66 @@ it('should use customer data to generate personalized content', async () => {
  });
  ```
 
+ ### Testing Brains with Plugins
+
+ Brains that use plugins (gmail, slack, ntfy, etc.) need mock plugins in tests. **Don't use the project brain wrapper** (`src/brain.ts`) in tests — it has real plugins that need API keys. Instead, use the core `brain()` directly and attach mock plugins:
+
+ ```typescript
+ import { brain, definePlugin } from '@positronic/core';
+ import { createMockClient, runBrainTest } from './test-utils.js';
+
+ // Create mock plugins that match what the brain expects
+ const mockGmail = definePlugin({
+   name: 'gmail',
+   create: () => ({
+     getAccounts: () => [{ name: 'test-account', refreshToken: 'test-token' }],
+     searchThreads: jest.fn().mockResolvedValue([
+       { threadId: 'thread-1' },
+     ]),
+     getThreadDetails: jest.fn().mockResolvedValue({
+       threadId: 'thread-1',
+       subject: 'Test Email',
+       from: 'sender@example.com',
+       body: 'Email body content',
+     }),
+     archiveMessages: jest.fn().mockResolvedValue(undefined),
+   }),
+ });
+
+ const mockNtfy = definePlugin({
+   name: 'ntfy',
+   create: () => ({
+     send: jest.fn().mockResolvedValue(undefined),
+   }),
+ });
+
+ // Build the brain directly with mock plugins
+ const testBrain = brain('email-processor')
+   .withPlugin(mockGmail)
+   .withPlugin(mockNtfy)
+   .step('Fetch emails', async ({ gmail }) => {
+     const accounts = gmail.getAccounts();
+     const threads = await gmail.searchThreads(accounts[0].refreshToken, 'label:inbox');
+     return { threads };
+   })
+   .step('Notify', async ({ state, ntfy }) => {
+     await ntfy.send(`Found ${state.threads.length} emails`);
+     return state;
+   });
+
+ it('should fetch emails and notify', async () => {
+   const mockClient = createMockClient();
+   const result = await runBrainTest(testBrain, { client: mockClient });
+
+   expect(result.completed).toBe(true);
+   expect(result.finalState.threads).toHaveLength(1);
+ });
+ ```
+
+ The `name` in `definePlugin` must match what the brain accesses on the step context — `gmail`, `slack`, `ntfy`, etc. The `create()` function returns an object with the same methods the real plugin provides, but backed by mocks.
+
+ **Testing brains defined in separate files:** If you're testing an existing brain from `src/brains/`, you can't easily swap its plugins because they come from the project's `createBrain()` call. Instead, re-define the brain's steps in the test with mock plugins (copy the step chain, not the imports). Or restructure so the brain logic is a function that accepts a brain builder.
+
  ## Best Practices
 
  1. **Test Behavior, Not Implementation**
@@ -236,7 +296,7 @@ Following testing best practices, avoid testing:
 
  ```typescript
  import { createMockClient, runBrainTest } from './test-utils.js';
- import analysisBrain from '../brains/analysis-brain.js';
+ import analysisBrain from '../src/brains/analysis-brain.js';
 
  describe('analysis-brain', () => {
    it('should analyze customer feedback and generate insights', async () => {
@@ -1,6 +1,6 @@
  # Memory Guide
 
- This guide covers the memory system in Positronic, which enables brains to store and retrieve long-term memories using [Mem0](https://mem0.ai) or other memory providers.
+ This guide covers the memory system in Positronic, which enables brains to store and retrieve long-term memories using [Mem0](https://mem0.ai) via the `mem0` plugin.
 
  ## Overview
 
@@ -8,7 +8,7 @@ The memory system provides:
  - **Long-term memory storage** - Persist facts, preferences, and context across brain runs
  - **Semantic search** - Retrieve relevant memories based on natural language queries
  - **Automatic conversation indexing** - Optionally store all conversations for later retrieval
- - **Tools for agents** - Built-in tools that let agents store and recall memories
+ - **Tools for prompt loops** - Built-in tools that let LLMs store and recall memories
  - **Automatic user scoping** - Memories are scoped to the current user via `currentUser`, no manual userId threading needed
 
  ## Quick Start
@@ -19,7 +19,7 @@ The memory system provides:
  npm install @positronic/mem0
  ```
 
- ### 2. Set up the provider
+ ### 2. Set up the API key
 
  Add your Mem0 API key to `.env`:
 
@@ -27,43 +27,53 @@ Add your Mem0 API key to `.env`:
  MEM0_API_KEY=your-api-key-here
  ```
 
- ### 3. Configure in brain.ts
+ ### 3. Add the plugin to your project brain
+
+ Configure the mem0 plugin in `src/brain.ts` so all brains get memory:
 
  ```typescript
- import { createBrain, defaultTools } from '@positronic/core';
- import { createMem0Provider, createMem0Tools } from '@positronic/mem0';
+ import { createBrain } from '@positronic/core';
+ import { mem0 } from '@positronic/mem0';
  import { components } from './components/index.js';
 
- const memory = createMem0Provider({
-   apiKey: process.env.MEM0_API_KEY!,
- });
-
  export const brain = createBrain({
+   plugins: [mem0.setup({ apiKey: process.env.MEM0_API_KEY! })],
    components,
-   defaultTools,
-   memory,
  });
  ```
 
- ### 4. Use memory tools in agents
+ Or add it to a single brain with `.withPlugin()`:
 
  ```typescript
  import { brain } from '../brain.js';
- import { createMem0Tools } from '@positronic/mem0';
- import { z } from 'zod';
+ import { mem0 } from '@positronic/mem0';
 
- const memoryTools = createMem0Tools();
+ export default brain('assistant')
+   .withPlugin(mem0.setup({ apiKey: process.env.MEM0_API_KEY! }))
+   .step('Load Context', async ({ mem0: m }) => {
+     const memories = await m.search('user preferences');
+     return { context: memories.map((mem) => mem.content).join('\n') };
+   });
+ ```
+
+ ### 4. Use memory tools in prompt loops
+
+ ```typescript
+ import { brain } from '../brain.js';
+ import { z } from 'zod';
 
  export default brain('assistant')
-   .brain('Help User', () => ({
-     system: 'You are helpful. Use rememberFact to store user preferences.',
-     prompt: 'The user said: I prefer dark mode',
-     tools: {
-       ...memoryTools,
-       done: {
-         description: 'Complete the task',
-         inputSchema: z.object({ result: z.string() }),
-         terminal: true,
+   .prompt('Help User', ({ mem0: m }) => ({
+     message: 'The user said: I prefer dark mode',
+     outputSchema: z.object({ result: z.string() }),
+     loop: {
+       tools: {
+         ...m.tools,
+         done: {
+           description: 'Complete the task',
+           inputSchema: z.object({ result: z.string() }),
+           terminal: true,
+         },
        },
      },
  }));
@@ -71,7 +81,7 @@ export default brain('assistant')
 
  ## Memory Tools
 
- The package provides two tools that agents can use:
+ The plugin provides two tools on `mem0.tools` that LLMs can call during prompt loops:
 
  ### rememberFact
 
@@ -80,7 +90,7 @@ Stores a fact in long-term memory.
  - **Input**: `{ fact: string }`
  - **Output**: `{ remembered: boolean, fact: string }`
 
- When the agent calls `rememberFact({ fact: "User prefers dark mode" })`, the fact is stored in Mem0 and can be retrieved later.
+ When the LLM calls `rememberFact({ fact: "User prefers dark mode" })`, the fact is stored in Mem0 and can be retrieved later.
 
  ### recallMemories
 
@@ -89,34 +99,26 @@ Searches for relevant memories.
  - **Input**: `{ query: string, limit?: number }`
  - **Output**: `{ found: number, memories: Array<{ content: string, relevance?: number }> }`
 
- When the agent calls `recallMemories({ query: "user preferences" })`, it receives matching memories with relevance scores.
+ When the LLM calls `recallMemories({ query: "user preferences" })`, it receives matching memories with relevance scores.
 
- ### Using Memory Tools in Agents
+ ### Using Memory Tools in Prompt Loops
 
  ```typescript
  import { brain } from '../brain.js';
- import { createMem0Tools } from '@positronic/mem0';
  import { z } from 'zod';
 
- const memoryTools = createMem0Tools();
-
  export default brain('personalized-assistant')
-   .brain('Chat', () => ({
-     system: `You are a personalized assistant.
-
- Use rememberFact to store important information about the user:
- - Preferences (theme, communication style, etc.)
- - Context (current projects, goals)
- - Any facts they want you to remember
-
- Use recallMemories before responding to check for relevant context.`,
-     prompt: userMessage,
-     tools: {
-       ...memoryTools,
-       done: {
-         description: 'Send final response',
-         inputSchema: z.object({ response: z.string() }),
-         terminal: true,
+   .prompt('Chat', ({ mem0: m }) => ({
+     message: userMessage,
+     outputSchema: z.object({ response: z.string() }),
+     loop: {
+       tools: {
+         ...m.tools,
+         done: {
+           description: 'Send final response',
+           inputSchema: z.object({ response: z.string() }),
+           terminal: true,
+         },
       },
     },
  }));
@@ -124,56 +126,34 @@ Use recallMemories before responding to check for relevant context.`,
 
  ## Automatic Conversation Indexing
 
- The Mem0 adapter automatically stores all agent conversations to memory. This builds up context over time without explicit tool calls.
-
- ### Setting Up the Adapter
-
- In your `runner.ts`:
-
- ```typescript
- import { BrainRunner } from '@positronic/core';
- import { createMem0Adapter, createMem0Provider } from '@positronic/mem0';
-
- const provider = createMem0Provider({
-   apiKey: process.env.MEM0_API_KEY!,
- });
-
- const adapter = createMem0Adapter({ provider });
-
- export const runner = new BrainRunner({
-   adapters: [adapter],
-   client: myClient,
- });
- ```
+ The mem0 plugin includes a built-in adapter that automatically indexes conversations to memory. When a brain completes, the adapter flushes buffered messages to Mem0. This builds up context over time without explicit tool calls.
 
  ### Adapter Behavior
 
- - **On agent start**: Buffers the initial prompt as a user message
- - **During execution**: Buffers all user and assistant messages
- - **On completion**: Flushes buffer to memory provider
+ - **On completion**: Flushes buffered messages to memory provider
  - **On error/cancel**: Discards buffer (doesn't store failed conversations)
 
- ### Including Tool Calls
+ ### Disabling Auto-Indexing
 
- By default, tool calls are not included in the indexed conversation. Enable this for full conversation history:
+ Auto-indexing is enabled by default. To disable it:
 
  ```typescript
- const adapter = createMem0Adapter({
-   provider,
-   includeToolCalls: true,
- });
+ mem0.setup({
+   apiKey: process.env.MEM0_API_KEY!,
+   autoIndex: false,
+ })
  ```
 
  ## Accessing Memory in Steps
 
- When memory is attached, you can access it directly in step functions:
+ When the mem0 plugin is attached, you can access it directly in step functions via `mem0` on the context. Destructure it as `mem0: m` to avoid shadowing the import:
 
  ### In Regular Steps
 
  ```typescript
  export default brain('my-brain')
-   .step('Load Context', async ({ memory }) => {
-     const memories = await memory.search('user preferences', {
+   .step('Load Context', async ({ mem0: m }) => {
+     const memories = await m.search('user preferences', {
        limit: 5,
      });
 
@@ -183,28 +163,28 @@ export default brain('my-brain')
  });
  ```
 
- ### In Agent Config Functions
+ ### In Prompt Config Functions
 
  ```typescript
  export default brain('my-brain')
-   .brain('Process', async ({ memory }) => {
-     const prefs = await memory.search('user preferences');
+   .prompt('Process', async ({ mem0: m }) => {
+     const prefs = await m.search('user preferences');
 
      const context = prefs.length > 0
        ? '\n\nUser preferences:\n' + prefs.map(p => '- ' + p.content).join('\n')
        : '';
 
      return {
+       message: 'Help the user with their request',
        system: 'You are helpful.' + context,
-       prompt: 'Help the user with their request',
-       tools: { /* ... */ },
+       outputSchema: z.object({ response: z.string() }),
      };
    });
  ```
 
  ## Helper Functions
 
- The package includes helper functions for common memory patterns:
+ The package includes helper functions for common memory patterns. These accept any object with `search` and `add` methods, so the `mem0` plugin injection works directly.
 
  ### formatMemories
 
@@ -213,7 +193,7 @@ Formats an array of memories into a readable string:
  ```typescript
  import { formatMemories } from '@positronic/mem0';
 
- const memories = await memory.search('preferences');
+ const memories = await m.search('preferences');
 
  const text = formatMemories(memories);
  // "1. User prefers dark mode\n2. User likes concise responses"
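For intuition, the documented output can be reproduced in a few lines. This is a hypothetical standalone sketch inferred from the output shown above, not the package's actual source:

```typescript
// Hypothetical re-implementation of formatMemories, inferred from the
// documented "1. ...\n2. ..." output; the real @positronic/mem0 export may differ.
type Memory = { content: string; relevance?: number };

function formatMemories(memories: Memory[]): string {
  // Number each memory starting at 1 and join with newlines
  return memories.map((m, i) => `${i + 1}. ${m.content}`).join('\n');
}

const text = formatMemories([
  { content: 'User prefers dark mode' },
  { content: 'User likes concise responses' },
]);
// "1. User prefers dark mode\n2. User likes concise responses"
```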
@@ -233,9 +213,9 @@ Creates a system prompt augmented with relevant memories:
  import { createMemorySystemPrompt } from '@positronic/mem0';
 
  export default brain('my-brain')
-   .brain('Chat', async ({ memory }) => {
+   .prompt('Chat', async ({ mem0: m }) => {
      const system = await createMemorySystemPrompt(
-       memory,
+       m,
        'You are a helpful assistant.',
        'user context and preferences',
        {
@@ -244,7 +224,11 @@ export default brain('my-brain')
        }
      );
 
-     return { system, prompt: userMessage, tools: { /* ... */ } };
+     return {
+       message: userMessage,
+       system,
+       outputSchema: z.object({ response: z.string() }),
+     };
    });
  ```
 
@@ -255,38 +239,70 @@ Gets just the memory context block for manual prompt construction:
  ```typescript
  import { getMemoryContext } from '@positronic/mem0';
 
- const context = await getMemoryContext(memory, 'user preferences', {
+ const context = await getMemoryContext(m, 'user preferences', {
    limit: 5,
  });
 
  const system = 'You are helpful.\n\n' + (context ? '## User Context\n' + context : '');
  ```
 
+ ## Plugin Configuration
+
+ ### Required Options
+
+ - `apiKey` — your Mem0 API key
+
+ ### Optional Options
+
+ - `scope` — memory scoping mode (see Memory Scoping below)
+   - `'user'` — memories are shared across all brains for each user
+   - `'brain'` — memories are shared across all users for each brain
+   - Default: per-brain-per-user (memories are isolated by both brain and user)
+ - `autoIndex` — whether to auto-index conversations on brain completion (default: `true`)
+ - `baseUrl` — custom Mem0 API base URL
+ - `orgId` — Mem0 organization ID
+ - `projectId` — Mem0 project ID
+
+ ```typescript
+ mem0.setup({
+   apiKey: process.env.MEM0_API_KEY!,
+   scope: 'user',
+   autoIndex: false,
+ })
+ ```
+
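The effect of the `scope` option on the two identifiers (described under Memory Scoping below) can be sketched as a pure function. This illustrates only the documented semantics; `resolveScope` and its return shape are invented for the example and are not part of the package:

```typescript
// Hypothetical illustration of the documented scope modes; not a real
// @positronic/mem0 export.
type Scope = 'user' | 'brain' | undefined;

function resolveScope(scope: Scope, brainTitle: string, userName: string) {
  switch (scope) {
    case 'user':
      // Shared across brains: agentId cleared, userId kept
      return { userId: userName };
    case 'brain':
      // Shared across users: userId cleared, agentId kept
      return { agentId: brainTitle };
    default:
      // Default: isolated per brain AND per user
      return { agentId: brainTitle, userId: userName };
  }
}
```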
  ## Memory Scoping
 
- Memories are scoped by two identifiers:
+ Memories are scoped by two identifiers that are set automatically:
 
  ### agentId
 
- Automatically set to the brain/step title. Memories are isolated per agent:
+ Automatically set to the brain title. Memories are isolated per brain by default:
 
  ```typescript
- brain('support-agent').withMemory(memory) // agentId = 'support-agent'
- brain('sales-agent').withMemory(memory) // agentId = 'sales-agent'
+ brain('support-agent') // agentId = 'support-agent'
+   .withPlugin(mem0.setup({ apiKey: '...' }))
+
+ brain('sales-agent') // agentId = 'sales-agent'
+   .withPlugin(mem0.setup({ apiKey: '...' }))
  ```
 
+ With `scope: 'user'`, the agentId is cleared so memories are shared across brains for each user.
+
  ### userId
 
  Automatically set from `currentUser.name` when the brain runs. All memory operations are automatically scoped to the current user — no need to pass userId manually:
 
  ```typescript
- // userId is auto-bound from currentUser — just use memory directly
- await memory.search('preferences');
- await memory.add(messages);
+ // userId is auto-bound from currentUser — just use mem0 directly
+ await m.search('preferences');
+ await m.add([{ role: 'user', content: 'test' }]);
 
- // In tools — the agent just passes the fact/query, userId is automatic
+ // In tools — the LLM just passes the fact/query, userId is automatic
  rememberFact({ fact: 'Prefers dark mode' })
  recallMemories({ query: 'preferences' })
  ```
 
+ With `scope: 'brain'`, the userId is cleared so memories are shared across users for each brain.
+
  See the [currentUser section in positronic-guide.md](positronic-guide.md#currentuser) for how to set the current user when running brains.
@@ -0,0 +1,218 @@
+ # Creating Plugins
+
+ Plugins let you add services, tools, and event handlers to brains. A plugin bundles everything related to an integration into a single unit.
+
+ ## Quick Start
+
+ ```typescript
+ // src/plugins/weather.ts
+ import { definePlugin } from '@positronic/core';
+ import { z } from 'zod';
+
+ export const weather = definePlugin({
+   name: 'weather',
+   create: () => ({
+     async forecast(city: string) {
+       const res = await fetch(`https://api.weather.com/v1/${city}`);
+       return res.json();
+     },
+   }),
+ });
+ ```
+
+ Use it in a brain:
+
+ ```typescript
+ import { brain } from '../brain.js';
+ import { weather } from '../plugins/weather.js';
+
+ export default brain('daily-report')
+   .withPlugin(weather)
+   .step('Get Weather', async ({ weather: w }) => {
+     const forecast = await w.forecast('Seattle');
+     return { forecast };
+   });
+ ```
+
+ ## Plugin Anatomy
+
+ A plugin has three parts:
+
+ - **`name`** — identifies the plugin. This is the key on StepContext (e.g., `ctx.weather`).
+ - **`setup`** — (optional) defines a config shape. Returns a configured plugin when called.
+ - **`create`** — called once per brain run. Returns the plugin's public API.
+
+ ### Without config
+
+ ```typescript
+ export const myPlugin = definePlugin({
+   name: 'myPlugin',
+   create: () => ({
+     doStuff: () => 'done',
+   }),
+ });
+
+ // Usage: brain('x').withPlugin(myPlugin)
+ // Access: ({ myPlugin }) => myPlugin.doStuff()
+ ```
+
+ ### With config
+
+ ```typescript
+ export const slack = definePlugin({
+   name: 'slack',
+   setup: (config: { defaultChannel: string; token: string }) => config,
+   create: ({ config }) => ({
+     async post(channel: string, message: string) {
+       // config.token is available here
+     },
+   }),
+ });
+
+ // Usage: brain('x').withPlugin(slack.setup({ defaultChannel: '#general', token: '...' }))
+ // Access: ({ slack }) => slack.post('#alerts', 'hello')
+ ```
+
+ ## What `create` Receives
+
+ ```typescript
+ create: ({ config, brainTitle, currentUser, brainRunId }) => {
+   // config — whatever setup() returned, or undefined
+   // brainTitle — the brain's title string
+   // currentUser — { name: string } of the user running the brain
+   // brainRunId — unique ID for this brain run
+ }
+ ```
+
+ Use these to scope your plugin's behavior per brain and user.
+
+ ## Adding Tools
+
+ Tools are functions the LLM can call during prompt loops. Return them under a `tools` key:
+
+ ```typescript
+ export const notes = definePlugin({
+   name: 'notes',
+   create: () => {
+     const saved: string[] = [];
+
+     return {
+       // Service methods — direct access in steps
+       getAll: () => [...saved],
+
+       // Tools — for LLM tool-calling in prompt loops
+       tools: {
+         saveNote: {
+           description: 'Save a note for later',
+           inputSchema: z.object({
+             note: z.string().describe('The note to save'),
+           }),
+           async execute(input: { note: string }) {
+             saved.push(input.note);
+             return { saved: true };
+           },
+         },
+       },
+     };
+   },
+ });
+ ```
+
+ Using tools in a prompt loop:
+
+ ```typescript
+ brain('note-taker')
+   .withPlugin(notes)
+   .prompt('Take Notes', ({ notes: n }) => ({
+     message: 'Listen to the user and save important notes',
+     outputSchema: z.object({ summary: z.string() }),
+     loop: {
+       tools: { ...n.tools },
+     },
+   }))
+ ```
+
+ ## Adding an Adapter
+
+ An adapter receives brain events (START, STEP_COMPLETE, COMPLETE, ERROR, etc.). Use it for logging, indexing, or side effects:
+
+ ```typescript
+ export const analytics = definePlugin({
+   name: 'analytics',
+   setup: (config: { endpoint: string }) => config,
+   create: ({ config, brainTitle }) => ({
+     adapter: {
+       dispatch(event: any) {
+         if (event.type === 'COMPLETE') {
+           fetch(config!.endpoint, {
+             method: 'POST',
+             body: JSON.stringify({ brain: brainTitle, event: 'completed' }),
+           });
+         }
+       },
+     },
+   }),
+ });
+ ```
+
+ The adapter is intercepted by the framework — it does NOT appear on StepContext.
+
+ ## Multiple Plugins
+
+ Declare multiple plugins upfront in the `brain()` call:
+
+ ```typescript
+ brain({ title: 'my-brain', plugins: { slack, mem0, analytics } })
+   .step('Go', ({ slack, mem0 }) => {
+     // Both available, fully typed
+   });
+ ```
+
+ Or chain `.withPlugin()` calls:
+
+ ```typescript
+ brain('my-brain')
+   .withPlugin(slack.setup({ token: '...' }))
+   .withPlugin(mem0.setup({ apiKey: '...' }))
+ ```
+
+ ## Project-Wide Plugins
+
+ Configure plugins once in `src/brain.ts` so all brains get them:
+
+ ```typescript
+ import { createBrain } from '@positronic/core';
+ import { components } from './components/index.js';
+ import { mem0 } from '@positronic/mem0';
+
+ export const brain = createBrain({
+   plugins: [mem0.setup({ apiKey: process.env.MEM0_API_KEY! })],
+   components,
+ });
+ ```
+
+ Individual brains can add more plugins with `.withPlugin()`. If a brain calls `.withPlugin()` with a plugin that shares a name with a project-level one, the per-brain config wins.
+
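The "per-brain config wins" rule behaves like a merge keyed by plugin `name`. A standalone sketch of that idea (the framework's actual resolution code is not shown in this template; `mergePlugins` is invented for illustration):

```typescript
// Standalone illustration of name-keyed plugin override; not framework code.
type Plugin = { name: string; [key: string]: unknown };

function mergePlugins(projectLevel: Plugin[], perBrain: Plugin[]): Plugin[] {
  const byName = new Map<string, Plugin>();
  for (const p of projectLevel) byName.set(p.name, p);
  // Per-brain plugins registered later overwrite same-name entries
  for (const p of perBrain) byName.set(p.name, p);
  return [...byName.values()];
}

const merged = mergePlugins(
  [{ name: 'mem0', source: 'project' }],
  [{ name: 'mem0', source: 'brain' }, { name: 'slack', source: 'brain' }],
);
// mem0 entry comes from the per-brain list; slack is added alongside it
```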
+ ## Testing Plugins
+
+ In tests, create a plugin with mock behavior:
+
+ ```typescript
+ const mockSlack = definePlugin({
+   name: 'slack',
+   create: () => ({
+     post: jest.fn(async () => {}),
+   }),
+ });
+
+ const testBrain = brain('test')
+   .withPlugin(mockSlack)
+   .step('Notify', async ({ slack }) => {
+     await slack.post('#general', 'hello');
+     return { notified: true };
+   });
+ ```
+
+ ## Plugin Scoping
+
+ `create()` is called **per brain run** — each run gets a fresh instance. For nested brains (`.brain()` steps), inner brains get their own `create()` call with the inner brain's title and context. This means plugins are automatically scoped per brain.
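The per-run lifecycle can be seen with a bare factory object. A standalone sketch with no framework imports (the `counter` plugin-like object is invented for illustration; it only mirrors the "fresh instance per run" semantics described above):

```typescript
// Standalone sketch: each "run" calls create() and gets fresh private state,
// mirroring the per-brain-run plugin lifecycle.
const counter = {
  name: 'counter',
  create: () => {
    let n = 0; // private per-run state, not shared between runs
    return { bump: () => ++n, value: () => n };
  },
};

const runA = counter.create();
const runB = counter.create();
runA.bump();
runA.bump();
runB.bump();
// runA.value() === 2, runB.value() === 1 — runs don't share state
```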