@powerhousedao/academy 3.3.0-dev.15 → 3.3.0-dev.16

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,3 +1,7 @@
+ ## 3.3.0-dev.16 (2025-07-22)
+
+ This was a version bump only for @powerhousedao/academy to align it with other projects; there were no code changes.
+
  ## 3.3.0-dev.15 (2025-07-17)
 
  ### 🩹 Fixes
@@ -1,28 +1,62 @@
- # Build a Todo-List processor
+ # Build a Todo-List Processor
 
- 1. Generate the processor
- 2. Define your database schema
- 3. Customize the processor to your needs
- 4. Test your processor
- 5. Use the relational database in Frontend and Subgraph
+ ## What You'll Learn
 
+ In this tutorial, you'll learn how to build a **relational database processor** that listens to changes in Powerhouse TodoList documents and automatically maintains a synchronized relational database. This is useful for creating queryable data stores, generating reports, or integrating with existing database-driven applications.
 
- ## Generate the Processor
+ ## What is a Processor?
+
+ A **processor** in Powerhouse is a background service that automatically responds to document changes. Think of it as a "listener" that watches for specific document operations (like creating, updating, or deleting todos) and then performs custom logic - in this case, updating a relational database.
+
+ **Key Benefits:**
+ - **Real-time synchronization**: Your database stays automatically up-to-date with document changes
+ - **Query performance**: Relational databases excel at complex queries and joins
+
+ ## Tutorial Steps
+
+ 1. **Generate the processor** - Create the basic processor structure
+ 2. **Define your database schema** - Design the tables to store your data
+ 3. **Generate TypeScript types** - Get type safety for database operations
+ 4. **Configure the filter** - Specify which documents to listen to
+ 5. **Customize the processor logic** - Implement how document changes update the database
+ 6. **Use the data via Subgraph** - Query your processed data through GraphQL
+
+ ---
+
+ ## Step 1: Generate the Processor
+
+ First, we'll create the processor using the Powerhouse CLI. This command scaffolds all the necessary files and configuration.
 
- In order to generate the processor you need to run the following command:
  ```bash
  ph generate --processor todo-processor --processor-type relational-db --document-types powerhouse/todolist
  ```
 
- With that command you create a processor named todo-processor which is of type relational db and listens on changes from documents of type powerhouse/todolist.
+ **Breaking down this command:**
+ - `--processor todo-processor`: Names your processor "todo-processor"
+ - `--processor-type relational-db`: Creates a processor that works with SQL databases
+ - `--document-types powerhouse/todolist`: Tells the processor to listen for changes in TodoList documents
+
+ **What gets created:**
+ - `processors/todo-processor/` directory with all necessary files
+ - Migration files for database schema management
+ - Factory function for processor instantiation
+ - Base processor class ready for customization
+
+ ---
 
- ## Define your database schema
+ ## Step 2: Define Your Database Schema
 
- As next step we need to define the db schema in the `processors/todo-processor/migration.ts` file.
+ Next, we need to define what our database tables will look like. This happens in the **migration file**, which contains instructions for creating (and optionally destroying) database tables.
 
- The migration file has a up and a down function which gets called when either the processor was added or when the processor was removed.
+ **File location:** `processors/todo-processor/migration.ts`
 
- Below you can find the example of a todo table.
+ ### Understanding Migrations
+
+ Migrations are scripts that modify your database structure. They have two functions:
+ - **`up()`**: Runs when the processor is added - creates tables and indexes
+ - **`down()`**: Runs when the processor is removed - cleans up by dropping tables
+
+ Here's our TodoList migration:
 
  ```ts
  import { type IBaseRelationalDb } from "document-drive/processors/types"
@@ -30,41 +64,61 @@ import { type IBaseRelationalDb } from "document-drive/processors/types"
  export async function up(db: IBaseRelationalDb): Promise<void> {
    // Create table
    await db.schema
-     .createTable("todo")
-     .addColumn("name", "varchar(255)")
-     .addColumn("completed", "boolean")
-     .addPrimaryKeyConstraint("todo_pkey", ["name"])
-     .ifNotExists()
-     .execute();
-
+     .createTable("todo")                            // Table name: "todo"
+     .addColumn("name", "varchar(255)")              // Todo item text (up to 255 characters)
+     .addColumn("completed", "boolean")              // Completion status (true/false)
+     .addPrimaryKeyConstraint("todo_pkey", ["name"]) // Primary key on 'name' column
+     .ifNotExists()                                  // Only create if table doesn't exist
+     .execute();                                     // Execute the SQL command
+
+   // Optional: Log all tables for debugging
    const tables = await db.introspection.getTables();
    console.log(tables);
  }
 
  export async function down(db: IBaseRelationalDb): Promise<void> {
-   // drop table
+   // Clean up: drop the table when processor is removed
    await db.schema.dropTable("todo").execute();
  }
  ```
 
- ## Generate Types
+ **Design decisions explained:**
+ - **`name` as primary key**: Assumes todo names are unique (you might want to use an auto-incrementing ID instead)
+ - **Simple boolean for completion**: Easy to query for completed vs. incomplete todos
+ - **`ifNotExists()`**: Prevents errors if the processor restarts
+
+ ---
 
- After defining your db schema its important to generate the types for typescript. This allows to create type safety queries and make use of code completion in your IDE when writing database queries.
+ ## Step 3: Generate TypeScript Types
 
- Simply execute the following command.
+ After defining your database schema, generate TypeScript types for type-safe database operations. This provides IDE autocomplete and catches errors at compile time.
 
  ```bash
  ph generate --migration-file processors/todo-processor/migration.ts --schema-file processors/todo-processor/schema.ts
  ```
 
- Afterwards check your `processors/todo-processor/schema.ts` file.
- It will contain the types of your database.
+ **What this does:**
+ - Analyzes your migration file
+ - Generates TypeScript interfaces matching your database tables
+ - Creates a `schema.ts` file with type definitions
+
+ **Result:** You'll get types like:
+ ```ts
+ interface Todo {
+   name: string;
+   completed: boolean;
+ }
+ ```
 
- ## Define the Filter
+ These types will be available in `processors/todo-processor/schema.ts` and ensure your database queries are type-safe.
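To see what the generated types buy you in practice, here is a small, self-contained sketch. The `Todo` interface below mirrors the assumed shape of the generated `schema.ts`, and the helper function is hypothetical:

```typescript
// Mirrors the interface assumed to be generated into schema.ts
interface Todo {
  name: string;
  completed: boolean;
}

// With typed rows, the compiler rejects typos like `todo.completd`
function pendingTodos(rows: Todo[]): string[] {
  return rows.filter((todo) => !todo.completed).map((todo) => todo.name);
}

console.log(
  pendingTodos([
    { name: "Buy groceries", completed: false },
    { name: "Write tutorial", completed: true },
  ]),
); // [ 'Buy groceries' ]
```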
 
- Checkout the `processors/todo-processor/factory.ts`.
+ ---
 
- Here you can define how the processor is being instantiated. In this case it listens on powerhouse/todo-list document changes in the main branch and the global scope.
+ ## Step 4: Configure the Filter
+
+ The **filter** determines which document changes your processor should respond to. This is configured in the factory function.
+
+ **File location:** `processors/todo-processor/factory.ts`
 
  ```ts
  export const todoProcessorProcessorFactory =
@@ -75,10 +129,10 @@ export const todoProcessorProcessorFactory =
 
    // Create a filter for the processor
    const filter: RelationalDbProcessorFilter = {
-     branch: ["main"],
-     documentId: ["*"],
-     documentType: ["powerhouse/todo-list"],
-     scope: ["global"],
+     branch: ["main"],                       // Only listen to main branch changes
+     documentId: ["*"],                      // Listen to ALL documents (wildcard)
+     documentType: ["powerhouse/todo-list"], // Only TodoList document types
+     scope: ["global"],                      // Global scope (vs. user-specific)
    };
 
    // Create a namespaced store for the processor
@@ -96,14 +150,23 @@ export const todoProcessorProcessorFactory =
      },
    ];
  };
-
  ```
 
- ## Customize the logic of the processor
+ **Filter options explained:**
+ - **`branch`**: Which document branches to monitor (usually "main" for production data)
+ - **`documentId`**: Specific document IDs or "*" for all documents
+ - **`documentType`**: The document model type - must match exactly
+ - **`scope`**: "global" for shared data, or specific scopes for user/organization data
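Conceptually, this filter acts as a predicate over incoming updates. The sketch below is illustrative only (the real matching logic lives inside `document-drive`), but it shows how the wildcard `"*"` and exact-match fields combine:

```typescript
// Illustrative filter shapes; the real types come from document-drive
interface Filter {
  branch: string[];
  documentId: string[];
  documentType: string[];
  scope: string[];
}

interface Update {
  branch: string;
  documentId: string;
  documentType: string;
  scope: string;
}

// A field matches when the filter lists "*" or the exact value
const matchesField = (allowed: string[], value: string): boolean =>
  allowed.includes("*") || allowed.includes(value);

function matchesFilter(filter: Filter, update: Update): boolean {
  return (
    matchesField(filter.branch, update.branch) &&
    matchesField(filter.documentId, update.documentId) &&
    matchesField(filter.documentType, update.documentType) &&
    matchesField(filter.scope, update.scope)
  );
}

const filter: Filter = {
  branch: ["main"],
  documentId: ["*"],
  documentType: ["powerhouse/todo-list"],
  scope: ["global"],
};

console.log(
  matchesFilter(filter, {
    branch: "main",
    documentId: "doc-1",
    documentType: "powerhouse/todo-list",
    scope: "global",
  }),
); // true
```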
+
+ **Namespace concept**: Each processor gets its own database namespace to avoid conflicts when multiple processors or drives exist.
 
- When you defined your db schema and the filter when your processor should receive processed operations its time to implement the actual logic.
+ ---
 
- In the following you'll find an example where we store all the created and udpated todos in a table.
+ ## Step 5: Implement the Processor Logic
+
+ Now for the core functionality - how your processor responds to document changes. This is where you define what happens when TodoList documents are created, updated, or deleted.
+
+ **File location:** `processors/todo-processor/index.ts`
 
  ```ts
  type DocumentType = ToDoListDocument;
@@ -112,88 +175,209 @@ export class TodoIndexerProcessor extends RelationalDbProcessor<DB> {
 
    static override getNamespace(driveId: string): string {
      // Default namespace: `${this.name}_${driveId.replaceAll("-", "_")}`
+     // Each drive gets its own database tables to prevent data mixing
      return super.getNamespace(driveId);
    }
 
    override async initAndUpgrade(): Promise<void> {
+     // Run database migrations when processor starts
+     // This creates your tables if they don't exist
      await up(this.relationalDb as IBaseRelationalDb);
    }
 
    override async onStrands(
      strands: InternalTransmitterUpdate<DocumentType>[],
    ): Promise<void> {
+     // Early exit if no data to process
      if (strands.length === 0) {
        return;
      }
 
+     // Process each strand (a strand represents changes to one document)
      for (const strand of strands) {
        if (strand.operations.length === 0) {
          continue;
        }
 
+       // Process each operation in the strand
        for (const operation of strand.operations) {
+         // Simple example: Insert a new todo for every operation
+         // In a real implementation, you'd check the operation type and data
          await this.relationalDb
            .insertInto("todo")
            .values({
-             task: strand.documentId,
-             status: true,
+             name: strand.documentId, // Use the document ID as the todo name
+             completed: true,         // Default to completed
            })
            .execute();
        }
      }
    }
 
-   async onDisconnect() {}
+   async onDisconnect() {
+     // Cleanup logic when processor shuts down
+     // Could include closing connections, saving state, etc.
+   }
  }
-
  ```
 
- ## Fetch Data through a Subgraph
+ ### Understanding Strands and Operations
 
- ### Generate Subgraph
+ **Strands** represent a sequence of changes to a single document. Each strand contains:
+ - `documentId`: Which document changed
+ - `operations`: Array of operations (add todo, complete todo, etc.)
+ - `state`: The current document state
 
- Simply generate a new subgraph with:
- ```bash
- ph generate --subgraph <subgraph-name>
+ **Operations** are the actual changes made to the document:
+ - `ADD_TODO`: New todo item created
+ - `TOGGLE_TODO`: Todo completion status changed
+ - `DELETE_TODO`: Todo item removed
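To make these operation types concrete, here is an in-memory sketch of how each one could map onto table changes. The operation names come from the list above, but the `input` payload shape is an assumption for illustration:

```typescript
interface TodoRow {
  name: string;
  completed: boolean;
}

// Hypothetical operation shape; real Powerhouse operations carry richer payloads
type TodoOperation =
  | { type: "ADD_TODO"; input: { name: string } }
  | { type: "TOGGLE_TODO"; input: { name: string } }
  | { type: "DELETE_TODO"; input: { name: string } };

function applyOperation(table: TodoRow[], op: TodoOperation): TodoRow[] {
  switch (op.type) {
    case "ADD_TODO":
      // New todos start out incomplete
      return [...table, { name: op.input.name, completed: false }];
    case "TOGGLE_TODO":
      // Flip the completion flag of the matching row
      return table.map((row) =>
        row.name === op.input.name ? { ...row, completed: !row.completed } : row,
      );
    case "DELETE_TODO":
      // Remove the matching row
      return table.filter((row) => row.name !== op.input.name);
  }
}

let table: TodoRow[] = [];
table = applyOperation(table, { type: "ADD_TODO", input: { name: "Buy groceries" } });
table = applyOperation(table, { type: "TOGGLE_TODO", input: { name: "Buy groceries" } });
console.log(table);
// [ { name: 'Buy groceries', completed: true } ]
```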
+
+ ### Improving the Example
+
+ The provided example is simplified. In production, you'd want to:
+
+ 1. **Parse operation types:**
+ ```ts
+ switch (operation.type) {
+   case 'ADD_TODO':
+     // Insert new todo
+     break;
+   case 'TOGGLE_TODO':
+     // Update completion status
+     break;
+   case 'DELETE_TODO':
+     // Remove todo from database
+     break;
+ }
+ ```
+
+ 2. **Handle errors gracefully:**
+ ```ts
+ try {
+   await this.relationalDb.insertInto("todo").values(values).execute();
+ } catch (error) {
+   console.error('Failed to insert todo:', error);
+   // Could implement retry logic, dead letter queue, etc.
+ }
  ```
 
- ### Fetch Data from Processor
+ 3. **Use transactions for consistency:**
+ ```ts
+ await this.relationalDb.transaction().execute(async (trx) => {
+   // Multiple operations that should all succeed or all fail
+ });
+ ```
 
- open ```./subgraphs/<subgraph-name>/index.ts```
+ ---
 
+ ## Step 6: Query Data Through a Subgraph
 
+ Once your processor is storing data in the database, you can expose it via GraphQL using a **subgraph**. This creates a clean API for frontend applications to query the processed data.
 
- define the following:
+ ### Generate a Subgraph
 
+ Create a new GraphQL subgraph that can query your processor's database:
+
+ ```bash
+ ph generate --subgraph <subgraph-name>
+ ```
+
+ **What this creates:**
+ - GraphQL schema definitions
+ - Resolver functions that fetch data
+ - Integration with your processor's database
+
+ ### Configure the Subgraph
+
+ **File location:** `./subgraphs/<subgraph-name>/index.ts`
 
  ```ts
  resolvers = {
    Query: {
      todoList: {
        resolve: async (parent, args, context, info) => {
+         // Query the processor's database using the generated types
          const todoList = await TodoProcessor.query(
-           args.driveId ?? "powerhouse",
-           this.relationalDb
+           args.driveId ?? "powerhouse", // Default drive if none specified
+           this.relationalDb             // Database connection from processor
          )
-           .selectFrom("todo")
-           .selectAll()
-           .execute();
+           .selectFrom("todo") // FROM todo table
+           .selectAll()        // SELECT * (all columns)
+           .execute();         // Execute and return results
          return todoList;
        },
      },
    },
  };
 
+ // GraphQL schema definition
  typeDefs = gql`
    type Todo {
-     name: String!
-     completed: Boolean!
+     name: String!       # Todo text (required)
+     completed: Boolean! # Completion status (required)
    }
 
    type Query {
-     todoList(driveId: String): [Todo!]!
+     todoList(driveId: String): [Todo!]! # Query to get all todos for a drive
    }
  `;
  ```
+
+ ### Understanding the GraphQL Integration
+
+ **Resolvers** are functions that fetch data for each GraphQL field:
+ - `parent`: Data from the parent resolver (unused here)
+ - `args`: Arguments passed to the query (like `driveId`)
+ - `context`: Shared context (database connections, user info, etc.)
+ - `info`: Metadata about the GraphQL query
+
+ **Type Definitions** describe your GraphQL schema:
+ - `type Todo`: Defines the structure of a todo item
+ - `todoList(driveId: String): [Todo!]!`: A query that returns an array of todos
+ - `!` means the field is required/non-null
+
+ ### Querying Your Data
+
+ Once deployed, frontend applications can query your data like this:
+
+ ```graphql
+ query GetTodos($driveId: String) {
+   todoList(driveId: $driveId) {
+     name
+     completed
+   }
+ }
+ ```
+
+ This would return:
+ ```json
+ {
+   "data": {
+     "todoList": [
+       { "name": "Buy groceries", "completed": false },
+       { "name": "Write tutorial", "completed": true }
+     ]
+   }
+ }
+ ```
+
+ ---
+
+ ## Next Steps and Best Practices
+
+ ### Testing Your Processor
+
+ 1. **Unit tests**: Test individual functions with mock data
+ 2. **Integration tests**: Test the full processor with real document operations
+
+ ### Production Considerations
+
+ 1. **Error handling**: Implement robust error handling and logging
+ 2. **Monitoring**: Add metrics to track processor performance
+ 3. **Scaling**: Consider database indexing and query optimization
+ 4. **Security**: Validate input data and implement proper access controls
+
+ This processor tutorial demonstrates the power of Powerhouse's event-driven architecture, where document changes automatically flow through to specialized data stores optimized for different use cases.
 
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@powerhousedao/academy",
-   "version": "3.3.0-dev.15",
+   "version": "3.3.0-dev.16",
    "homepage": "https://powerhouse.academy",
    "repository": {
      "type": "git",