@powerhousedao/academy 3.3.0-dev.15 → 3.3.0-dev.17

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (19)
  1. package/CHANGELOG.md +15 -0
  2. package/docs/academy/02-MasteryTrack/04-WorkWithData/03-UsingSubgraphs.md +16 -31
  3. package/docs/academy/02-MasteryTrack/04-WorkWithData/05-RelationalDbProcessor.md +663 -0
  4. package/docs/academy/02-MasteryTrack/04-WorkWithData/07-drive-analytics.md +1 -1
  5. package/docs/academy/04-APIReferences/04-RelationalDatabase.md +13 -13
  6. package/package.json +1 -1
  7. package/docs/academy/02-MasteryTrack/04-WorkWithData/07-OperationalDbProcessorTutorial/01-TodoList-example.md +0 -199
  8. package/docs/academy/02-MasteryTrack/04-WorkWithData/07-OperationalDbProcessorTutorial/_category_.json +0 -8
  9. /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/_01-SetupBuilderEnvironment.md +0 -0
  10. /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/_02-CreateNewPowerhouseProject.md +0 -0
  11. /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/_03-GenerateAnAnalyticsProcessor.md +0 -0
  12. /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/_04-UpdateAnalyticsProcessor.md +0 -0
  13. /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/_category_.json +0 -0
  14. /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/images/Create-SPV.gif +0 -0
  15. /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/images/Create-a-new-asset.png +0 -0
  16. /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/images/Create-a-transaction.gif +0 -0
  17. /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/images/Transaction-table.png +0 -0
  18. /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/images/create-a-new-RWA-document.gif +0 -0
  19. /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/images/granularity.png +0 -0
package/CHANGELOG.md CHANGED
@@ -1,3 +1,18 @@
+ ## 3.3.0-dev.17 (2025-07-23)
+
+ ### 🩹 Fixes
+
+ - update release notes ([f1b6a8e71](https://github.com/powerhouse-inc/powerhouse/commit/f1b6a8e71))
+ - add release notes on correct branch ([a2d60a537](https://github.com/powerhouse-inc/powerhouse/commit/a2d60a537))
+
+ ### ❤️ Thank You
+
+ - Callme-T
+
+ ## 3.3.0-dev.16 (2025-07-22)
+
+ This was a version bump only for @powerhousedao/academy to align it with other projects, there were no code changes.
+
  ## 3.3.0-dev.15 (2025-07-17)
 
  ### 🩹 Fixes
package/docs/academy/02-MasteryTrack/04-WorkWithData/03-UsingSubgraphs.md CHANGED
@@ -1,4 +1,4 @@
- # Using Subgraphs
+ # Using subgraphs
 
  This tutorial will demonstrate how to create and customize a subgraph using our To-do List project as an example.
  Let's start with the basics and gradually add more complex features and functionality.
@@ -79,11 +79,11 @@ Initializing Subgraph Manager...
  ➜ Reactor: http://localhost:4001/d/powerhouse
  ```
 
- ## 2. Building a To-do List Subgraph
+ ## 2. Building a to-do list subgraph
 
  Now that we've generated our subgraph, let's build a complete To-do List subgraph that extends the functionality of our To-do List document model. This subgraph will provide additional querying capabilities and demonstrate how subgraphs work with document models.
 
- ### 2.1 Understanding the To-do List Document Model
+ ### 2.1 Understanding the to-do list document model
 
  Before building our subgraph, let's recall the structure of our To-do List document model from the [DocumentModelCreation tutorial](/academy/MasteryTrack/DocumentModelCreation/SpecifyTheStateSchema):
 
@@ -111,11 +111,11 @@ The document model has these operations:
  - `UPDATE_TODO_ITEM`: Updates an existing to-do item
  - `DELETE_TODO_ITEM`: Deletes a to-do item
 
- ### 2.2 Define the Subgraph Schema
+ ### 2.2 Define the subgraph schema
 
  Now let's create a subgraph that provides enhanced querying capabilities for our To-do List documents.
 
- **Step 1: Define the schema in `subgraphs/to-do-list/schema.ts`:**
+ **Step 1: Define the schema in `subgraphs/to-do-list/schema.ts` by creating the file:**
 
  ```typescript
  export const typeDefs = `
@@ -148,19 +148,19 @@ export const typeDefs = `
  text: String! # The task description
  checked: Boolean! # Completion status
  }
- `;
+ }`
  ```
 
 
-
- #### Understanding Resolvers
+ <details>
+ <summary> #### Understanding resolvers </summary>
 
  Before diving into the technical implementation, let's understand why these three different query types matter for your product.
  Think of resolvers as custom API endpoints that are automatically created based on what your users actually need to know about your data.
 
  When someone asks your system a question through GraphQL, the resolver:
 
- 1. **Understands the request** - "The customer wants unchecked items"
+ 1. **Understands the request** - "The user wants unchecked items"
  2. **Knows where to get the data** - "I need to check the todo_items database table"
  3. **Applies the right filters** - "Only get items where checked = false"
  4. **Returns the answer** - "Here are the 5 unchecked items"
@@ -187,6 +187,8 @@ Think of resolvers as custom API endpoints that are automatically created based
  - **User Experience**: Different resolvers serve different user needs efficiently
  - **Flexibility**: Users can ask for exactly what they need, nothing more, nothing less
 
+ </details>
+
  **Step 2: Create resolvers in `subgraphs/to-do-list/resolvers.ts`:**
 
  ```typescript
@@ -329,7 +331,7 @@ export default class ToDoListSubgraph {
  }
  ```
 
- ### 2.3 Understanding the Implementation
+ ### 2.3 Understanding the implementation
 
  **What this multi-file approach provides:**
 
@@ -344,7 +346,7 @@ export default class ToDoListSubgraph {
  - Resolvers that fetch and filter todo items from the operational store
  - Event processing to keep the subgraph data synchronized with document model changes
 
- ### 2.4 Understanding the Document Model Event Integration
+ ### 2.4 Understanding the document model event integration
 
  Notice that our `index.ts` file already includes a `process` method - this is the **processor integration** that keeps our subgraph synchronized with To-do List document model events. When users interact with To-do List documents through Connect, this method automatically handles the updates.
 
@@ -394,7 +396,7 @@ if (event.type === "DELETE_TODO_ITEM") {
  4. **Subgraph response**: Your `process` method updates the operational store
  5. **Query availability**: Users can now query the updated data via GraphQL
 
- ### 2.5 Summary of What We've Built
+ ### 2.5 Summary of what we've built
 
  Our complete To-do List subgraph includes:
 
@@ -411,7 +413,7 @@ Our complete To-do List subgraph includes:
  - **Real-time synchronization**: Changes in Connect immediately appear in subgraph queries
  - **Complete statistics**: The `todoList` query returns total, checked, and unchecked counts
 
- ## 3. Testing the To-do List Subgraph
+ ## 3. Testing the to-do list subgraph
 
  ### 3.1. Start the reactor
  To activate the subgraph, run:
@@ -433,7 +435,7 @@ You should see the subgraph being registered in the console output:
  ### 3.2. Create some test data
  Before testing queries, let's create some To-do List documents with test data:
 
- 1. Open Connect at `http://localhost:3001`
+ 1. Open Connect at `http://localhost:3001` in another terminal
  2. Add the 'remote' drive that is running locally via the (+) 'Add Drive' button. Add 'http://localhost:4001/d/powerhouse'
  3. Create a new To-do List document
  4. Add some test items:
@@ -654,25 +656,12 @@ This demonstrates how the supergraph provides a unified interface to both your d
 
  Congratulations! You've successfully built a complete To-do List subgraph that demonstrates the power of extending document models with custom GraphQL functionality. Let's recap what you've accomplished:
 
- ### What you built:
- - **A custom GraphQL schema** that provides enhanced querying capabilities for To-do List documents
- - **An operational data store** that efficiently stores and retrieves to-do items
- - **Real-time event processing** that keeps your subgraph synchronized with document model changes
- - **Advanced query capabilities** including filtering and counting operations
- - **Integration with the supergraph** for unified API access
-
  ### Key concepts learned:
  - **Subgraphs extend document models** with additional querying and data processing capabilities
  - **Operational data stores** provide efficient storage for subgraph data
  - **Event processing** enables real-time synchronization between document models and subgraphs
  - **The supergraph** unifies multiple subgraphs into a single GraphQL endpoint
 
- ### Next steps:
- - Explore adding **mutations** to your subgraph for more complex operations
- - Implement **data aggregation** for analytics and reporting
- - Connect to **external APIs** for enhanced functionality
- - Build **processors** that automate workflows between different document models
-
  This tutorial has provided you with a solid foundation for building sophisticated data processing and querying capabilities in the Powerhouse ecosystem.
 
  ## Subgraphs are particularly useful for
@@ -691,10 +680,6 @@ This tutorial has provided you with a solid foundation for building sophisticate
  - Add automated task assignments
  - Create custom reporting functionality
 
- ### Prebuilt subgraphs
-
- Some subgraphs (e.g., System Subgraph, Drive Subgraph) already exist.
- To integrate with them, register them via the Reactor API.
 
  ### Future enhancements
package/docs/academy/02-MasteryTrack/04-WorkWithData/05-RelationalDbProcessor.md ADDED
@@ -0,0 +1,663 @@
+ # Relational database processor
+
+ In this chapter, we will implement a **Todo-List** relational database processor. This processor receives processed operations from the reactor and can use the `prevState`, `resultingState`, or data from the operations themselves to populate a database.
+
+ **What is a Relational Database Processor?**
+
+ A relational database processor is a specialized component that listens to document changes in your Powerhouse application and transforms that data into a traditional relational database format (like PostgreSQL, MySQL, or SQLite). This is incredibly useful for:
+
+ - **Analytics and Reporting**: Running complex SQL queries on your document data
+ - **Integration**: Connecting with existing business intelligence tools
+
+ ## Generate the Processor
+
+ To generate a relational database processor, run the following command:
+
+ ```bash
+ ph generate --processor todo-indexer --processor-type relationalDb --document-types powerhouse/todolist
+ ```
+
+ **Breaking down this command:**
+ - `--processor todo-indexer`: Creates a processor with the name "todo-indexer"
+ - `--processor-type relationalDb`: Specifies we want a relational database processor (vs other types like analytics or webhook processors)
+ - `--document-types powerhouse/todolist`: Tells the processor to only listen for changes to documents of type "powerhouse/todolist"
+
+ This command creates a processor named `todo-indexer` of type `relational database` that listens for changes from documents of type `powerhouse/todolist`.
+
+ **What gets generated:**
+ - A processor class file (`processors/todo-indexer/index.ts`)
+ - A database migration file (`processors/todo-indexer/migrations.ts`)
+ - A factory file for configuration (`processors/todo-indexer/factory.ts`)
+ - A schema file for TypeScript types (`processors/todo-indexer/schema.ts`)
+
+ ## Define Your Database Schema
+
+ Next, define your database schema in the `processors/todo-indexer/migrations.ts` file.
+
+ **Understanding Database Migrations**
+
+ Migrations are version-controlled database changes that ensure your database schema evolves safely over time. They contain:
+ - **`up()` function**: Creates or modifies database structures when the processor starts
+ - **`down()` function**: Safely removes changes when the processor is removed
+
+ This approach ensures your database schema stays in sync across different environments (development, staging, production).
+
+ The migration file contains `up` and `down` functions that are called when the processor is added or removed, respectively.
+
+ In the `migrations.ts` file you'll find an example of the todo table's default schema:
+
+ ```ts
+ import { type IRelationalDb } from "document-drive/processors/types"
+
+ export async function up(db: IRelationalDb<any>): Promise<void> {
+   // Create table - this runs when the processor starts
+   await db.schema
+     .createTable("todo") // Creates a new table named "todo"
+     .addColumn("task", "varchar(255)") // Text column for the task description (max 255 characters)
+     .addColumn("status", "boolean") // Boolean column for completion status (true/false)
+     .addPrimaryKeyConstraint("todo_pkey", ["task"]) // Makes "task" the primary key (unique identifier)
+     .ifNotExists() // Only create if table doesn't already exist
+     .execute(); // Execute the SQL command
+ }
+
+ export async function down(db: IRelationalDb<any>): Promise<void> {
+   // Drop table - this runs when the processor is removed
+   await db.schema.dropTable("todo").execute();
+ }
+ ```
+
+ **Design Considerations:**
+ - We're using `task` as the primary key, which means each task description must be unique
+ - The `varchar(255)` limit ensures reasonable memory usage
+ - The `boolean` status makes it easy to filter completed vs. incomplete tasks
+ - Consider adding timestamps (`created_at`, `updated_at`) for audit trails in production applications
+
+ ## Generate Database Types
+
+ After defining your database schema, generate TypeScript types for type-safe queries and better IDE support:
+
+ ```bash
+ ph generate --migration-file processors/todo-indexer/migrations.ts
+ ```
+
+ **Why Generate Types?**
+
+ TypeScript types provide several benefits:
+ - **Type Safety**: Catch errors at compile time instead of runtime
+ - **IDE Support**: Get autocomplete and IntelliSense for your database queries
+ - **Documentation**: Types serve as living documentation of your database structure
+ - **Refactoring**: Safe renaming and restructuring of database fields
+
+ Check your `processors/todo-indexer/schema.ts` file after generation - it will contain the TypeScript types for your database schema.
+
+ **Example of generated types:**
+ ```ts
+ // This is what gets auto-generated based on your migration
+ export interface Database {
+   todo: {
+     task: string;
+     status: boolean;
+   };
+ }
+ ```
+
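With types like these, ordinary TypeScript code over todo rows is checked by the compiler. A minimal sketch (the row interface is restated here so the example is self-contained; it mirrors, but is not, the generated file):

```typescript
// Restated row shape, mirroring the generated Database["todo"] entry above
interface TodoRow {
  task: string;
  status: boolean;
}

// The compiler verifies every property access against the row shape
function pendingTasks(rows: TodoRow[]): string[] {
  return rows.filter((row) => !row.status).map((row) => row.task);
}

const rows: TodoRow[] = [
  { task: "write docs", status: true },
  { task: "review PR", status: false },
];

console.log(pendingTasks(rows)); // [ 'review PR' ]
```

Misspelling `status` or treating `task` as a number here fails at compile time rather than at query time.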
+ ## Configure the Processor Filter
+
+ Now configure the processor filter in `processors/todo-indexer/factory.ts`:
+
+ **Understanding Processor Filters**
+
+ Filters determine which document changes your processor will respond to. This is crucial for performance and functionality:
+ - **Performance**: Only process relevant changes to avoid unnecessary work
+ - **Isolation**: Different processors can handle different document types
+ - **Scalability**: Distribute processing load across multiple processors
+
+ ```ts
+ import {
+   type ProcessorRecord,
+   type IProcessorHostModule,
+ } from "document-drive/processors/types";
+ import { type RelationalDbProcessorFilter } from "document-drive/processors/relational";
+ import { TodoIndexerProcessor } from "./index.js";
+
+ export const todoIndexerProcessorFactory =
+   (module: IProcessorHostModule) =>
+   async (driveId: string): Promise<ProcessorRecord[]> => {
+     // Create a namespace for the processor and the provided drive id
+     // Namespaces prevent data collisions between different drives
+     const namespace = TodoIndexerProcessor.getNamespace(driveId);
+
+     // Create a namespaced db for the processor
+     // This ensures each drive gets its own isolated database tables
+     const store =
+       await module.relationalDb.createNamespace<TodoIndexerProcessor>(
+         namespace,
+       );
+
+     // Create a filter for the processor
+     // This determines which document changes trigger the processor
+     const filter: RelationalDbProcessorFilter = {
+       branch: ["main"], // Only process changes from the "main" branch
+       documentId: ["*"], // Process changes from any document ID (* = wildcard)
+       documentType: ["powerhouse/todolist"], // Only process todolist documents
+       scope: ["global"], // Process global changes (not user-specific)
+     };
+
+     // Create the processor instance
+     const processor = new TodoIndexerProcessor(namespace, filter, store);
+     return [
+       {
+         processor,
+         filter,
+       },
+     ];
+   };
+ ```
+
+ **Filter Options Explained:**
+ - **`branch`**: Which document branches to monitor (usually "main" for production data)
+ - **`documentId`**: Specific document IDs to watch ("*" means all documents)
+ - **`documentType`**: Document types to process (ensures type safety)
+ - **`scope`**: Whether to process global changes or user-specific ones
+
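Conceptually, each field of the filter is a list of allowed values, with `"*"` acting as a wildcard. The sketch below is illustrative only (the real matching happens inside the reactor, not in your code), but it captures the selection semantics described above:

```typescript
// Illustrative filter-matching sketch; not the reactor's implementation.
interface OperationMeta {
  branch: string;
  documentId: string;
  documentType: string;
  scope: string;
}

interface Filter {
  branch: string[];
  documentId: string[];
  documentType: string[];
  scope: string[];
}

// A field matches when the filter lists "*" or the exact value
const fieldMatches = (allowed: string[], value: string): boolean =>
  allowed.includes("*") || allowed.includes(value);

function matchesFilter(filter: Filter, op: OperationMeta): boolean {
  return (
    fieldMatches(filter.branch, op.branch) &&
    fieldMatches(filter.documentId, op.documentId) &&
    fieldMatches(filter.documentType, op.documentType) &&
    fieldMatches(filter.scope, op.scope)
  );
}

const filter: Filter = {
  branch: ["main"],
  documentId: ["*"],
  documentType: ["powerhouse/todolist"],
  scope: ["global"],
};

console.log(
  matchesFilter(filter, {
    branch: "main",
    documentId: "doc-1",
    documentType: "powerhouse/todolist",
    scope: "global",
  }),
); // true
```

An operation on a `powerhouse/invoice` document, or on a branch other than `main`, would not match and would never reach the processor.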
+ ## Implement the Processor Logic
+
+ Now implement the actual processor logic in `processors/todo-indexer/index.ts` by copying the code below:
+
+ **Understanding the Processor Lifecycle**
+
+ The processor has several key methods:
+ - **`initAndUpgrade()`**: Runs once when the processor starts (perfect for running migrations)
+ - **`onStrands()`**: Runs every time relevant document changes occur (this is where the main logic goes)
+ - **`onDisconnect()`**: Cleanup when the processor shuts down
+
+ **What are "Strands"?**
+ Strands represent a batch of operations that happened to documents. Each strand contains:
+ - Document ID and metadata
+ - Array of operations (create, update, delete, etc.)
+ - Previous and resulting document states
+
+ ```ts
+ import { type IRelationalDb } from "document-drive/processors/types";
+ import { RelationalDbProcessor } from "document-drive/processors/relational";
+ import { type InternalTransmitterUpdate } from "document-drive/server/listener/transmitter/internal";
+ import type { ToDoListDocument } from "../../document-models/to-do-list/index.js";
+
+ import { up } from "./migrations.js";
+ import { type DB } from "./schema.js";
+
+ // Define the document type this processor handles
+ type DocumentType = ToDoListDocument;
+
+ export class TodoIndexerProcessor extends RelationalDbProcessor<DB> {
+   // Generate a unique namespace for this processor based on the drive ID
+   // This prevents data conflicts between different drives
+   static override getNamespace(driveId: string): string {
+     // Default namespace: `${this.name}_${driveId.replaceAll("-", "_")}`
+     return super.getNamespace(driveId);
+   }
+
+   // Initialize the processor and run database migrations
+   // This method runs once when the processor starts up
+   override async initAndUpgrade(): Promise<void> {
+     await up(this.relationalDb); // Run the database migration to create tables
+   }
+
+   // Main processing logic - handles incoming document changes
+   // This method is called whenever there are new document operations
+   override async onStrands(
+     strands: InternalTransmitterUpdate<DocumentType>[],
+   ): Promise<void> {
+     // Early return if no changes to process
+     if (strands.length === 0) {
+       return;
+     }
+
+     // Process each strand (batch of changes) individually
+     for (const strand of strands) {
+       // Skip strands with no operations
+       if (strand.operations.length === 0) {
+         continue;
+       }
+
+       // Process each operation within the strand
+       for (const operation of strand.operations) {
+         // Insert a record for each operation into the database
+         // This is a simple example - you might want more sophisticated logic
+         await this.relationalDb
+           .insertInto("todo")
+           .values({
+             // Create a unique task identifier combining document ID, operation index, and type
+             task: `${strand.documentId}-${operation.index}: ${operation.type}`,
+             status: true, // Default to completed status
+           })
+           // Handle conflicts by doing nothing if the task already exists
+           // This prevents duplicate entries if operations are replayed
+           .onConflict((oc) => oc.column("task").doNothing())
+           .execute(); // Execute the database query
+       }
+     }
+   }
+
+   // Cleanup method called when the processor disconnects
+   // Use this for closing connections, clearing caches, etc.
+   async onDisconnect() {
+     // Add any cleanup logic here
+     // For example: await this.relationalDb.destroy();
+   }
+ }
+ ```
+
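The row that `onStrands` inserts uses a composite `task` key built from the document id, the operation index, and the operation type. Factored out as a pure function (an illustration for clarity, not part of the generated code):

```typescript
// Builds the same composite key the processor above uses as its primary key.
// Because the key includes the operation index, replayed operations map to the
// same row, which is why onConflict(...).doNothing() dedupes them safely.
function taskKey(documentId: string, index: number, type: string): string {
  return `${documentId}-${index}: ${type}`;
}

console.log(taskKey("72b73d31-4874-4b71-8cc3-289ed4cfbe2b", 0, "ADD_TODO_ITEM"));
// 72b73d31-4874-4b71-8cc3-289ed4cfbe2b-0: ADD_TODO_ITEM
```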
+ ## Expose Data Through a Subgraph
+
+ ### Generate a Subgraph
+
+ Generate a new subgraph to expose your processor data:
+
+ ```bash
+ ph generate --subgraph todo
+ ```
+
+ **What is a Subgraph?**
+
+ A subgraph is a GraphQL schema that exposes your processed data to clients. It:
+ - Provides a standardized API for accessing your relational database data
+ - Integrates with the Powerhouse supergraph for unified data access
+ - Supports both queries (reading data) and mutations (modifying data)
+ - Can join data across multiple processors and document types
+
+ ### Configure the Subgraph
+
+ Open `./subgraphs/todo/index.ts` and configure the resolvers:
+
+ ```ts
+ import { Subgraph } from "@powerhousedao/reactor-api";
+ import { gql } from "graphql-tag";
+ import { TodoIndexerProcessor } from "../../processors/todo-indexer/index.js";
+
+ export class TodoSubgraph extends Subgraph {
+   // Human-readable name for this subgraph
+   name = "Todos";
+
+   // GraphQL resolvers - functions that fetch data for each field
+   resolvers = {
+     Query: {
+       todos: {
+         // Resolver function for the "todos" query
+         // Arguments: parent object, query arguments, context, GraphQL info
+         resolve: async (_: any, args: { driveId: string }) => {
+           // Query the database using the processor's static query method
+           // This gives us access to the namespaced database for the specific drive
+           const todos = await TodoIndexerProcessor.query(args.driveId, this.relationalDb)
+             .selectFrom("todo") // Select from the "todo" table
+             .selectAll() // Get all columns
+             .execute(); // Execute the query
+
+           // Transform database results to match GraphQL schema
+           return todos.map((todo) => ({
+             task: todo.task, // Map database "task" column to GraphQL "task" field
+             status: todo.status, // Map database "status" column to GraphQL "status" field
+           }));
+         },
+       },
+     },
+   };
+
+   // GraphQL schema definition using GraphQL Schema Definition Language (SDL)
+   typeDefs = gql`
+     # Define the structure of a todo item as returned by GraphQL
+     type ToDoListEntry {
+       task: String! # The task description (! means required/non-null)
+       status: Boolean! # The completion status (true = done, false = pending)
+     }
+
+     # Define available queries
+     type Query {
+       todos(driveId: ID!): [ToDoListEntry] # Get array of todos for a specific drive
+     }
+   `;
+
+   // Cleanup method called when the subgraph disconnects
+   async onDisconnect() {
+     // Add any cleanup logic here if needed
+   }
+ }
+ ```
+
328
+ ## Now query the data via the supergraph.
329
+
330
+ **Understanding the Supergraph**
331
+
332
+ The Powerhouse supergraph is a unified GraphQL endpoint that combines:
333
+ - **Document Models**: Direct access to your Powerhouse documents
334
+ - **Subgraphs**: Custom data views from your processors
335
+ - **Built-in APIs**: System functionality like authentication and drives
336
+
337
+ This unified approach means you can query document state AND processed data in a single request, which is perfect for building rich user interfaces.
338
+
339
+ The Powerhouse supergraph for any given remote drive or reactor can be found under `http://localhost:4001/graphql`. The gateway / supergraph available on `/graphql` combines all the subgraphs, except for the drive subgraph (which is accessible via `/d/:driveId`). To access the endpoint, start the reactor and navigate to the URL with `graphql` appended. The following commands explain how you can test & try the supergraph.
340
+
341
+ - Start the reactor:
342
+
343
+ ```bash
344
+ ph reactor
345
+ ```
346
+
347
+ - Open the GraphQL editor in your browser:
348
+
349
+ ```
350
+ http://localhost:4001/graphql
351
+ ```
352
+ The supergraph allows you to both query & mutate data from the same endpoint.
353
+ Read more about [subgraphs](/academy/MasteryTrack/WorkWithData/UsingSubgraphs)
354
+
355
+ <details>
356
+ <summary>**Example: Complete Data Flow from Document Operations to Relational Database**</summary>
357
+
358
+ **Understanding the Complete Data Pipeline**
359
+
360
+ This comprehensive example demonstrates the **entire data flow** in a Powerhouse application:
361
+ 1. **Storage Layer**: Create a drive (document storage container)
362
+ 2. **Document Layer**: Create a todo document and add operations
363
+ 3. **Processing Layer**: Watch the relational database processor automatically index changes
364
+ 4. **API Layer**: Query both original document state AND processed relational data
365
+ 5. **Analysis**: Compare the different data representations
366
+
367
+ ---
368
+
369
+ ### **Step 1: Create a Drive (Storage Container)**
370
+
371
+ **What's Happening**: Every document needs a "drive" - think of it as a folder or database that contains related documents. This is where your todo documents will live.
372
+
373
+ ```graphql
374
+ mutation DriveCreation($name: String!) {
375
+ addDrive(name: $name) {
376
+ name
377
+ }
378
+ }
379
+ ```
380
+
381
+ Variables:
382
+ ```json
383
+ {
384
+ "driveId": "powerhouse",
385
+ "name": "tutorial"
386
+ }
387
+ ```
388
+
389
+ 💡 **Behind the Scenes**: This creates a new drive namespace. Your relational database processor will create isolated tables for this drive using the namespace pattern we defined earlier.
390
+
391
+ ---
392
+
393
+ ### **Step 2: Create a Todo Document**
394
+
395
+ **What's Happening**: Now we're creating an actual todo list document inside our drive. This uses the document model we built in previous chapters.
396
+
397
+ ```graphql
398
+ mutation Mutation($driveId: String, $name: String) {
399
+ ToDoList_createDocument(driveId: $driveId, name: $name)
400
+ }
401
+ ```
402
+
403
+ Variables:
404
+ ```json
405
+ {
406
+ "driveId": "powerhouse",
407
+ "name": "tutorial"
408
+ }
409
+ ```
410
+
411
+ Result:
412
+ ```json
413
+ {
414
+ "data": {
415
+ "ToDoList_createDocument": "72b73d31-4874-4b71-8cc3-289ed4cfbe2b"
416
+ }
417
+ }
418
+ ```
419
+
420
+ 💡 **Key Insight**: The returned UUID (`72b73d31-4874-4b71-8cc3-289ed4cfbe2b`) is crucial - this is the document ID that will appear in our processor's database records, linking operations back to their source document. You will receive a different UUID.
421
+
422
+ ---
423
+
424
+ ### **Step 3: Add Todo Items (Generate Operations)**
425
+
426
+ **What's Happening**: Each time we add a todo item, we're creating a new **operation** in the document's history. Our relational database processor is listening for these operations in real-time.
427
+
428
+ ```graphql
429
+ mutation Mutation($driveId: String, $docId: PHID, $input: ToDoList_AddTodoItemInput) {
430
+ ToDoList_addTodoItem(driveId: $driveId, docId: $docId, input: $input)
431
+ }
432
+ ```
433
+
434
+ Variables:
435
+ ```json
436
+ {
437
+ "driveId": "powerhouse",
438
+ "name": "tutorial",
439
+ "docId": "72b73d31-4874-4b71-8cc3-289ed4cfbe2b",
440
+ "input": {
441
+ "id": "1",
442
+ "text": "complete mutation"
443
+ }
444
+ }
445
+ ```
446
+
447
+ Result:
448
+ ```json
449
+ {
450
+ "data": {
451
+ "ToDoList_addTodoItem": 1
452
+ }
453
+ }
454
+ ```
455
+
456
+ 💡 **What Happens Next**:
457
+ 1. **Document Model**: Stores the operation and updates document state
458
+ 2. **Reactor**: Broadcasts the operation to all listening processors
459
+ 3. **Our Processor**: Automatically receives the operation and creates a database record
460
+ 4. **Database**: Now contains: `"72b73d31-4874-4b71-8cc3-289ed4cfbe2b-0: ADD_TODO_ITEM"`
461
+
462
+ 🔄 **Repeat this step 2-3 times** with different todo items to see multiple operations get processed. Each operation will have an incrementing index (0, 1, 2...).
463
+
464
+ ---
465
+
466
+ ### **Step 4: Query Both Data Sources**
467
+
468
+ **The Power of Dual Data Access**: Now we can query BOTH the original document state AND our processed relational data in a single GraphQL request. This demonstrates the flexibility of the Powerhouse architecture.
469
+
470
+ ```graphql
471
+ query Query($driveId: ID!) {
472
+ todos(driveId: $driveId) {
473
+ task
474
+ status
475
+ }
476
+ ToDoList {
477
+ getDocuments {
478
+ state {
479
+ items {
480
+ text
481
+ }
482
+ }
483
+ }
484
+ }
485
+ }
486
+ ```
487
+
488
+ Variables:
489
+ ```json
490
+ {
491
+ "driveId": "powerhouse"
492
+ }
493
+ ```
494
+
495
+ Response:
496
+ ```json
497
+ {
498
+ "data": {
499
+ "todos": [
500
+ {
501
+ "task": "72b73d31-4874-4b71-8cc3-289ed4cfbe2b-0: ADD_TODO_ITEM",
502
+ "status": true
503
+ },
504
+ {
505
+ "task": "72b73d31-4874-4b71-8cc3-289ed4cfbe2b-1: ADD_TODO_ITEM",
506
+ "status": true
507
+ },
508
+ {
509
+ "task": "72b73d31-4874-4b71-8cc3-289ed4cfbe2b-2: ADD_TODO_ITEM",
510
+ "status": true
511
+ }
512
+ ],
513
+ "ToDoList": {
514
+ "getDocuments": [
515
+ {
516
+ "state": {
517
+ "items": [
518
+ {
519
+ "text": "complete mutation"
520
+ },
521
+ {
522
+ "text": "add another todo"
523
+ },
524
+ {
525
+ "text": "Now check the data"
526
+ }
527
+ ]
528
+ }
529
+ }
530
+ ]
531
+ }
532
+ }
533
+ }
534
+ ```
535
+
536
+ ---
537
+
538
+ ### **🔍 Data Analysis: Understanding What You're Seeing**
539
+
540
+ **Document Model Data (`ToDoList.getDocuments`):**
541
+ - ✅ **Current State**: Shows the final todo items as they exist in the document
542
+ - ✅ **User-Friendly**: Displays actual todo text like "complete mutation"
543
+ - ✅ **Real-Time**: Always reflects the latest document state
544
+ - ❌ **Limited History**: Doesn't show how the document changed over time
545
+
546
+ **Processed Relational Data (`todos`):**
547
+ - ✅ **Operation History**: Shows each individual operation that occurred
548
+ - ✅ **Audit Trail**: You can see the sequence (0, 1, 2) of operations
549
+ - ✅ **Analytics Ready**: Perfect for counting operations, tracking changes
550
+ - ✅ **Integration Friendly**: Standard SQL database that other tools can access
551
+ - ❌ **Less User-Friendly**: Shows operation metadata rather than final state
552
+
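Because every row records a single operation, audit-style aggregations reduce to ordinary queries or a few lines of code. Here is a hypothetical helper over the `todos` rows returned above (the row shape mirrors the query result; none of these names are part of the generated API):

```typescript
// Each `task` value encodes "<documentId>-<index>: <OPERATION_TYPE>".
interface TodoRow {
  task: string;
  status: boolean;
}

// Count how often each operation type occurred — the kind of
// "analytics ready" question the relational copy makes easy.
function countOperationsByType(rows: TodoRow[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const row of rows) {
    const type = row.task.split(": ")[1] ?? "UNKNOWN";
    counts.set(type, (counts.get(type) ?? 0) + 1);
  }
  return counts;
}

const rows: TodoRow[] = [
  { task: "72b73d31-4874-4b71-8cc3-289ed4cfbe2b-0: ADD_TODO_ITEM", status: true },
  { task: "72b73d31-4874-4b71-8cc3-289ed4cfbe2b-1: ADD_TODO_ITEM", status: true },
];
console.log(countOperationsByType(rows).get("ADD_TODO_ITEM")); // → 2
```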
553
+ ---
554
+
555
+ **Key Differences:**
556
+ - **Document Query**: Gets the current state directly from the document model
557
+ - **Subgraph Query**: Gets processed/transformed data from your relational database
558
+ - **Combined Power**: You can query both in a single GraphQL request for rich UIs
559
+
560
+ This demonstrates how the supergraph provides a unified interface to both your document models and your custom subgraphs, allowing you to query and mutate data from the same endpoint.
561
+ </details>
562
+
563
+ ## Use the Data in Frontend Applications
564
+
565
+ **Integration Options**
566
+
567
+ Your processed data can now be consumed by any GraphQL client:
568
+ - **React**: Using Apollo Client, urql, or Relay
569
+ - **Next.js**: API routes, getServerSideProps, or app router
570
+ - **Mobile Apps**: React Native, Flutter, or native iOS/Android
571
+ - **Desktop Apps**: Electron, Tauri, or other frameworks
572
+ - **Third-party Tools**: Any tool that supports GraphQL APIs
573
+
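Whichever client you pick, each option above ultimately issues the same HTTP POST. Below is a minimal, framework-agnostic sketch, assuming the supergraph from this tutorial is reachable at `http://localhost:4001/graphql` (adjust the URL and fields for your setup):

```typescript
// Build the request body for the `todos` query used earlier in this tutorial.
function buildTodosRequest(driveId: string): {
  query: string;
  variables: { driveId: string };
} {
  return {
    query: `
      query Query($driveId: ID!) {
        todos(driveId: $driveId) {
          task
          status
        }
      }
    `,
    variables: { driveId },
  };
}

// Any GraphQL client boils down to POSTing that body (global `fetch`
// requires Node 18+ or a browser).
async function fetchTodos(driveId = "powerhouse") {
  const response = await fetch("http://localhost:4001/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildTodosRequest(driveId)),
  });
  const { data } = (await response.json()) as {
    data: { todos: { task: string; status: boolean }[] };
  };
  return data.todos;
}
```

Apollo Client, urql, Relay, and mobile or desktop GraphQL libraries all wrap this same request, adding caching and state management on top.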
574
+ ### React Hooks
575
+
576
+ **Coming Soon**: This section will cover how to use React hooks to consume your subgraph data in React applications. For now, you can use standard GraphQL clients like Apollo or urql to query your supergraph endpoint.
577
+
578
+ ### Next.js API Route Example
579
+
580
+ **Why API Routes?**
581
+
582
+ Next.js API routes are useful when you need to:
583
+ - Add server-side authentication or authorization
584
+ - Transform data before sending to the client
585
+ - Implement caching or rate limiting
586
+ - Proxy requests to avoid CORS issues
587
+ - Add logging or monitoring
588
+
589
+ ```ts
590
+ // pages/api/todos.ts
591
+ import { type NextApiRequest, type NextApiResponse } from "next"
592
+
593
+ export default async function handler(
594
+ req: NextApiRequest,
595
+ res: NextApiResponse
596
+ ) {
597
+ // Only allow GET requests for this endpoint
598
+ if (req.method !== "GET") {
599
+ return res.status(405).json({ message: "Method not allowed" });
600
+ }
601
+
602
+ // Extract driveId from query parameters, default to "powerhouse"
603
+ const { driveId = "powerhouse" } = req.query;
604
+
605
+ try {
606
+ // Query your subgraph or database directly
607
+ // In production, you might want to add authentication headers here
608
+ const response = await fetch("http://localhost:4001/graphql", {
609
+ method: "POST",
610
+ headers: { "Content-Type": "application/json" },
611
+ body: JSON.stringify({
612
+ query: `
613
+ query GetTodoList($driveId: String) {
614
+ todoList(driveId: $driveId) {
615
+ id
616
+ name
617
+ completed
618
+ createdAt
619
+ updatedAt
620
+ }
621
+ }
622
+ `,
623
+ variables: { driveId },
624
+ }),
625
+ });
626
+
627
+ const data = await response.json();
628
+
629
+ // Return the todo list array from the GraphQL response
630
+ res.status(200).json(data.data.todoList);
631
+ } catch (error) {
632
+ // Log the error for debugging (in production, use proper logging)
633
+ console.error("Failed to fetch todos:", error);
634
+
635
+ // Return a generic error message to the client
636
+ res.status(500).json({ error: "Failed to fetch todos" });
637
+ }
638
+ }
639
+ ```
640
+
641
+ ## Summary
642
+
643
+ You've successfully created a relational database processor that:
644
+
645
+ 1. ✅ **Listens for document changes** - Automatically detects when todo documents are modified
646
+ 2. ✅ **Stores data in a structured database** - Transforms document operations into relational data
647
+ 3. ✅ **Provides type-safe database operations** - Uses TypeScript for compile-time safety
648
+ 4. ✅ **Exposes data through GraphQL** - Makes processed data available via a unified API
649
+ 5. ✅ **Can be consumed by frontend applications** - Ready for integration with any GraphQL client
650
+
651
+
652
+ This processor will automatically sync your document changes to the relational database, making the data available for complex queries, reporting, and integration with other systems.
653
+
654
+ **Real-World Applications:**
655
+
656
+ This pattern is commonly used for:
657
+ - **Analytics dashboards** showing document usage patterns
658
+ - **Business intelligence** reports on document data
659
+ - **Integration** with existing enterprise systems
660
+ - **Search and filtering** with complex SQL queries
661
+ - **Data archival** and compliance requirements
662
+
663
+
@@ -1,4 +1,4 @@
1
- # Drive Analytics
1
+ # Drive analytics
2
2
 
3
3
  Drive Analytics provides automated monitoring and insights into document drive operations within Powerhouse applications. This system tracks user interactions, document modifications, and drive activity to help developers understand usage patterns and system performance.
4
4
 
@@ -173,28 +173,28 @@ The returned hook accepts:
173
173
 
174
174
  </details>
175
175
 
176
- ### 2. useOperationalStore()
176
+ ### 2. useRelationalDb()
177
177
 
178
178
  <details>
179
- <summary>`useOperationalStore<Schema>()`: Access the enhanced database instance directly</summary>
179
+ <summary>`useRelationalDb<Schema>()`: Access the enhanced database instance directly</summary>
180
180
 
181
181
  ### Hook Name and Signature
182
182
 
183
183
  ```typescript
184
- function useOperationalStore<Schema>(): IOperationalStore<Schema>
184
+ function useRelationalDb<Schema>(): IRelationalDb<Schema>
185
185
  ```
186
186
 
187
187
  ### Description
188
188
 
189
- Provides direct access to the enhanced Kysely database instance with live query capabilities. Use this when you need to perform operational database operations outside of the typical query patterns.
189
+ Provides direct access to the enhanced Kysely database instance with live query capabilities. Use this when you need to perform relational database operations outside of the typical query patterns.
190
190
 
191
191
  ### Usage Example
192
192
 
193
193
  ```typescript
194
- import { useOperationalStore } from '@powerhousedao/reactor-browser/operational';
194
+ import { useRelationalDb } from '@powerhousedao/reactor-browser/relational';
195
195
 
196
196
  function DatabaseOperations() {
197
- const { db, isLoading, error } = useOperationalStore<MyDatabase>();
197
+ const { db, isLoading, error } = useRelationalDb<MyDatabase>();
198
198
 
199
199
  const createUser = async (name: string, email: string) => {
200
200
  if (!db) return;
@@ -241,19 +241,19 @@ function DatabaseOperations() {
241
241
  ### Related Hooks
242
242
 
243
243
  - [`createProcessorQuery`](#1-createprocessorquery) - For optimized queries
244
- - [`useOperationalQuery`](#3-useoperationalquery) - For manual query control
244
+ - [`useRelationalQuery`](#3-userelationalquery) - For manual query control
245
245
 
246
246
  </details>
247
247
 
248
- ### 3. useOperationalQuery()
248
+ ### 3. useRelationalQuery()
249
249
 
250
250
  <details>
251
- <summary>`useOperationalQuery<Schema, T, TParams>()`: Lower-level hook for manual query control</summary>
251
+ <summary>`useRelationalQuery<Schema, T, TParams>()`: Lower-level hook for manual query control</summary>
252
252
 
253
253
  ### Hook Name and Signature
254
254
 
255
255
  ```typescript
256
- function useOperationalQuery<Schema, T, TParams>(
256
+ function useRelationalQuery<Schema, T, TParams>(
257
257
  queryCallback: (db: EnhancedKysely<Schema>, parameters?: TParams) => QueryCallbackReturnType,
258
258
  parameters?: TParams
259
259
  ): QueryResult<T>
@@ -266,10 +266,10 @@ Lower-level hook for creating live queries with manual control over the query ca
266
266
  ### Usage Example
267
267
 
268
268
  ```typescript
269
- import { useOperationalQuery } from '@powerhousedao/reactor-browser/operational';
269
+ import { useRelationalQuery } from '@powerhousedao/reactor-browser/relational';
270
270
 
271
271
  function UserCount() {
272
- const { result, isLoading, error } = useOperationalQuery<MyDatabase, { count: number }>(
272
+ const { result, isLoading, error } = useRelationalQuery<MyDatabase, { count: number }>(
273
273
  (db) => {
274
274
  return db
275
275
  .selectFrom('users')
@@ -309,7 +309,7 @@ function UserCount() {
309
309
  ### Related Hooks
310
310
 
311
311
  - [`createProcessorQuery`](#1-createprocessorquery) - Recommended higher-level API
312
- - [`useOperationalStore`](#2-useoperationalstore) - For direct database access
312
+ - [`useRelationalDb`](#2-userelationaldb) - For direct database access
313
313
 
314
314
  </details>
315
315
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@powerhousedao/academy",
3
- "version": "3.3.0-dev.15",
3
+ "version": "3.3.0-dev.17",
4
4
  "homepage": "https://powerhouse.academy",
5
5
  "repository": {
6
6
  "type": "git",
@@ -1,199 +0,0 @@
1
- # Build a Todo-List processor
2
-
3
- 1. Generate the processor
4
- 2. Define your database schema
5
- 3. Customize the processor to your needs
6
- 4. Test your processor
7
- 5. Use the relational database in Frontend and Subgraph
8
-
9
-
10
- ## Generate the Processor
11
-
12
- In order to generate the processor you need to run the following command:
13
- ```bash
14
- ph generate --processor todo-processor --processor-type relational-db --document-types powerhouse/todolist
15
- ```
16
-
17
- This command creates a processor named `todo-processor` of type relational-db that listens for changes on documents of type `powerhouse/todolist`.
18
-
19
- ## Define your database schema
20
-
21
- As a next step, we need to define the database schema in the `processors/todo-processor/migration.ts` file.
22
-
23
- The migration file has an `up` and a `down` function, which are called when the processor is added or removed, respectively.
24
-
25
- Below you can find the example of a todo table.
26
-
27
- ```ts
28
- import { type IBaseRelationalDb } from "document-drive/processors/types"
29
-
30
- export async function up(db: IBaseRelationalDb): Promise<void> {
31
- // Create table
32
- await db.schema
33
- .createTable("todo")
34
- .addColumn("name", "varchar(255)")
35
- .addColumn("completed", "boolean")
36
- .addPrimaryKeyConstraint("todo_pkey", ["name"])
37
- .ifNotExists()
38
- .execute();
39
-
40
- const tables = await db.introspection.getTables();
41
- console.log(tables);
42
- }
43
-
44
- export async function down(db: IBaseRelationalDb): Promise<void> {
45
- // drop table
46
- await db.schema.dropTable("todo").execute();
47
- }
48
- ```
49
-
50
- ## Generate Types
51
-
52
- After defining your database schema, it's important to generate the TypeScript types. This enables type-safe queries and code completion in your IDE when writing database queries.
53
-
54
- Simply execute the following command.
55
-
56
- ```bash
57
- ph generate --migration-file processors/todo-indexer/migrations.js --schema-file processors/todo-indexer/schema.ts
58
- ```
59
-
60
- Afterwards check your `processors/todo-processor/schema.ts` file.
61
- It will contain the types of your database.
62
-
63
- ## Define the Filter
64
-
65
- Check out `processors/todo-processor/factory.ts`.
66
-
67
- Here you can define how the processor is instantiated. In this case it listens for `powerhouse/todo-list` document changes on the main branch and in the global scope.
68
-
69
- ```ts
70
- export const todoProcessorProcessorFactory =
71
- (module: IProcessorHostModule) =>
72
- async (driveId: string): Promise<ProcessorRecord[]> => {
73
- // Create a namespace for the processor and the provided drive id
74
- const namespace = TodoProcessorProcessor.getNamespace(driveId);
75
-
76
- // Create a filter for the processor
77
- const filter: RelationalDbProcessorFilter = {
78
- branch: ["main"],
79
- documentId: ["*"],
80
- documentType: ["powerhouse/todo-list"],
81
- scope: ["global"],
82
- };
83
-
84
- // Create a namespaced store for the processor
85
- const store = await createNamespacedDb<TodoProcessorProcessor>(
86
- namespace,
87
- module.relationalStore,
88
- );
89
-
90
- // Create the processor
91
- const processor = new TodoProcessorProcessor(namespace, filter, store);
92
- return [
93
- {
94
- processor,
95
- filter,
96
- },
97
- ];
98
- };
99
-
100
- ```
101
-
102
- ## Customize the logic of the processor
103
-
104
- Once you have defined your database schema and the filter that determines which operations your processor receives, it's time to implement the actual logic.
105
-
106
- Below you'll find an example where we store all the created and updated todos in a table.
107
-
108
- ```ts
109
- type DocumentType = ToDoListDocument;
110
-
111
- export class TodoIndexerProcessor extends RelationalDbProcessor<DB> {
112
-
113
- static override getNamespace(driveId: string): string {
114
- // Default namespace: `${this.name}_${driveId.replaceAll("-", "_")}`
115
- return super.getNamespace(driveId);
116
- }
117
-
118
- override async initAndUpgrade(): Promise<void> {
119
- await up(this.relationalDb as IBaseRelationalDb);
120
- }
121
-
122
- override async onStrands(
123
- strands: InternalTransmitterUpdate<DocumentType>[],
124
- ): Promise<void> {
125
- if (strands.length === 0) {
126
- return;
127
- }
128
-
129
- for (const strand of strands) {
130
- if (strand.operations.length === 0) {
131
- continue;
132
- }
133
-
134
- for (const operation of strand.operations) {
135
- await this.relationalDb
136
- .insertInto("todo")
137
- .values({
138
- task: strand.documentId,
139
- status: true,
140
- })
141
- .execute();
142
- }
143
- }
144
- }
145
-
146
- async onDisconnect() {}
147
- }
148
-
149
- ```
150
-
151
- ## Fetch Data through a Subgraph
152
-
153
- ### Generate Subgraph
154
-
155
- Simply generate a new subgraph with:
156
- ```bash
157
- ph generate --subgraph <subgraph-name>
158
- ```
159
-
160
- ### Fetch Data from Processor
161
-
162
- Open `./subgraphs/<subgraph-name>/index.ts`
163
-
164
-
165
-
166
- and define the following:
167
-
168
-
169
- ```ts
170
- resolvers = {
171
- Query: {
172
- todoList: {
173
- resolve: async (parent, args, context, info) => {
174
- const todoList = await TodoProcessor.query(
175
- args.driveId ?? "powerhouse",
176
- this.relationalDb
177
- )
178
- .selectFrom("todo")
179
- .selectAll()
180
- .execute();
181
- return todoList
182
- },
183
- },
184
- },
185
- };
186
-
187
- typeDefs = gql`
188
- type Todo {
189
- name: String!
190
- completed: Boolean!
191
- }
192
-
193
- type Query {
194
- todoList(driveId: String): [Todo!]!
195
- }
196
- `;
197
- ```
198
-
199
-
@@ -1,8 +0,0 @@
1
- {
2
- "label": "RelationalDb Processor",
3
- "position": 6,
4
- "link": {
5
- "type": "generated-index",
6
- "description": "Learn how to make use of a RelationalDb Processor in this tutorial!"
7
- }
8
- }