@powerhousedao/academy 3.3.0-dev.16 → 3.3.0-dev.18
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +15 -0
- package/docs/academy/02-MasteryTrack/04-WorkWithData/03-UsingSubgraphs.md +16 -31
- package/docs/academy/02-MasteryTrack/04-WorkWithData/05-RelationalDbProcessor.md +663 -0
- package/docs/academy/02-MasteryTrack/04-WorkWithData/07-drive-analytics.md +1 -1
- package/docs/academy/04-APIReferences/04-RelationalDatabase.md +13 -13
- package/package.json +1 -1
- package/docs/academy/02-MasteryTrack/04-WorkWithData/07-OperationalDbProcessorTutorial/01-TodoList-example.md +0 -383
- package/docs/academy/02-MasteryTrack/04-WorkWithData/07-OperationalDbProcessorTutorial/_category_.json +0 -8
- /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/_01-SetupBuilderEnvironment.md +0 -0
- /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/_02-CreateNewPowerhouseProject.md +0 -0
- /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/_03-GenerateAnAnalyticsProcessor.md +0 -0
- /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/_04-UpdateAnalyticsProcessor.md +0 -0
- /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/_category_.json +0 -0
- /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/images/Create-SPV.gif +0 -0
- /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/images/Create-a-new-asset.png +0 -0
- /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/images/Create-a-transaction.gif +0 -0
- /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/images/Transaction-table.png +0 -0
- /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/images/create-a-new-RWA-document.gif +0 -0
- /package/docs/academy/02-MasteryTrack/04-WorkWithData/{_05-AnalyticsProcessorTutorial → _ARCHIVE-AnalyticsProcessorTutorial}/images/granularity.png +0 -0
package/CHANGELOG.md
CHANGED

@@ -1,3 +1,18 @@
+## 3.3.0-dev.18 (2025-07-24)
+
+This was a version bump only for @powerhousedao/academy to align it with other projects, there were no code changes.
+
+## 3.3.0-dev.17 (2025-07-23)
+
+### 🩹 Fixes
+
+- update release notes ([f1b6a8e71](https://github.com/powerhouse-inc/powerhouse/commit/f1b6a8e71))
+- add release notes on correct branch ([a2d60a537](https://github.com/powerhouse-inc/powerhouse/commit/a2d60a537))
+
+### ❤️ Thank You
+
+- Callme-T
+
 ## 3.3.0-dev.16 (2025-07-22)
 
 This was a version bump only for @powerhousedao/academy to align it with other projects, there were no code changes.
package/docs/academy/02-MasteryTrack/04-WorkWithData/03-UsingSubgraphs.md
CHANGED

@@ -1,4 +1,4 @@
-# Using
+# Using subgraphs
 
 This tutorial will demonstrate how to create and customize a subgraph using our To-do List project as an example.
 Let's start with the basics and gradually add more complex features and functionality.
@@ -79,11 +79,11 @@ Initializing Subgraph Manager...
 ➜ Reactor: http://localhost:4001/d/powerhouse
 ```
 
-## 2. Building a
+## 2. Building a to-do list subgraph
 
 Now that we've generated our subgraph, let's build a complete To-do List subgraph that extends the functionality of our To-do List document model. This subgraph will provide additional querying capabilities and demonstrate how subgraphs work with document models.
 
-### 2.1 Understanding the
+### 2.1 Understanding the to-do list document model
 
 Before building our subgraph, let's recall the structure of our To-do List document model from the [DocumentModelCreation tutorial](/academy/MasteryTrack/DocumentModelCreation/SpecifyTheStateSchema):
 
@@ -111,11 +111,11 @@ The document model has these operations:
 - `UPDATE_TODO_ITEM`: Updates an existing to-do item
 - `DELETE_TODO_ITEM`: Deletes a to-do item
 
-### 2.2 Define the
+### 2.2 Define the subgraph schema
 
 Now let's create a subgraph that provides enhanced querying capabilities for our To-do List documents.
 
-**Step 1: Define the schema in `subgraphs/to-do-list/schema.ts
+**Step 1: Define the schema in `subgraphs/to-do-list/schema.ts` by creating the file:**
 
 ```typescript
 export const typeDefs = `
@@ -148,19 +148,19 @@ export const typeDefs = `
   text: String!    # The task description
   checked: Boolean!  # Completion status
 }
-
+}`
 ```
 
 
-
-#### Understanding
+<details>
+<summary> #### Understanding resolvers </summary>
 
 Before diving into the technical implementation, let's understand why these three different query types matter for your product.
 Think of resolvers as custom API endpoints that are automatically created based on what your users actually need to know about your data.
 
 When someone asks your system a question through GraphQL, the resolver:
 
-1. **Understands the request** - "The
+1. **Understands the request** - "The user wants unchecked items"
 2. **Knows where to get the data** - "I need to check the todo_items database table"
 3. **Applies the right filters** - "Only get items where checked = false"
 4. **Returns the answer** - "Here are the 5 unchecked items"

@@ -187,6 +187,8 @@ Think of resolvers as custom API endpoints that are automatically created based
 - **User Experience**: Different resolvers serve different user needs efficiently
 - **Flexibility**: Users can ask for exactly what they need, nothing more, nothing less
 
+</details>
+
 **Step 2: Create resolvers in `subgraphs/to-do-list/resolvers.ts`:**
 
 ```typescript
@@ -329,7 +331,7 @@ export default class ToDoListSubgraph {
 }
 ```
 
-### 2.3 Understanding the
+### 2.3 Understanding the implementation
 
 **What this multi-file approach provides:**
 
@@ -344,7 +346,7 @@ export default class ToDoListSubgraph {
 - Resolvers that fetch and filter todo items from the operational store
 - Event processing to keep the subgraph data synchronized with document model changes
 
-### 2.4 Understanding the
+### 2.4 Understanding the document model event integration
 
 Notice that our `index.ts` file already includes a `process` method - this is the **processor integration** that keeps our subgraph synchronized with To-do List document model events. When users interact with To-do List documents through Connect, this method automatically handles the updates.
 
@@ -394,7 +396,7 @@ if (event.type === "DELETE_TODO_ITEM") {
 4. **Subgraph response**: Your `process` method updates the operational store
 5. **Query availability**: Users can now query the updated data via GraphQL
 
-### 2.5 Summary of
+### 2.5 Summary of what we've built
 
 Our complete To-do List subgraph includes:
 
@@ -411,7 +413,7 @@ Our complete To-do List subgraph includes:
 - **Real-time synchronization**: Changes in Connect immediately appear in subgraph queries
 - **Complete statistics**: The `todoList` query returns total, checked, and unchecked counts
 
-## 3. Testing the
+## 3. Testing the to-do list subgraph
 
 ### 3.1. Start the reactor
 To activate the subgraph, run:
@@ -433,7 +435,7 @@ You should see the subgraph being registered in the console output:
 ### 3.2. Create some test data
 Before testing queries, let's create some To-do List documents with test data:
 
-1. Open Connect at `http://localhost:3001`
+1. Open Connect at `http://localhost:3001` in another terminal
 2. Add the 'remote' drive that is running locally via the (+) 'Add Drive' button. Add 'http://localhost:4001/d/powerhouse'
 3. Create a new To-do List document
 4. Add some test items:
@@ -654,25 +656,12 @@ This demonstrates how the supergraph provides a unified interface to both your d
 
 Congratulations! You've successfully built a complete To-do List subgraph that demonstrates the power of extending document models with custom GraphQL functionality. Let's recap what you've accomplished:
 
-### What you built:
-- **A custom GraphQL schema** that provides enhanced querying capabilities for To-do List documents
-- **An operational data store** that efficiently stores and retrieves to-do items
-- **Real-time event processing** that keeps your subgraph synchronized with document model changes
-- **Advanced query capabilities** including filtering and counting operations
-- **Integration with the supergraph** for unified API access
-
 ### Key concepts learned:
 - **Subgraphs extend document models** with additional querying and data processing capabilities
 - **Operational data stores** provide efficient storage for subgraph data
 - **Event processing** enables real-time synchronization between document models and subgraphs
 - **The supergraph** unifies multiple subgraphs into a single GraphQL endpoint
 
-### Next steps:
-- Explore adding **mutations** to your subgraph for more complex operations
-- Implement **data aggregation** for analytics and reporting
-- Connect to **external APIs** for enhanced functionality
-- Build **processors** that automate workflows between different document models
-
 This tutorial has provided you with a solid foundation for building sophisticated data processing and querying capabilities in the Powerhouse ecosystem.
 
 ## Subgraphs are particularly useful for
@@ -691,10 +680,6 @@ This tutorial has provided you with a solid foundation for building sophisticate
 - Add automated task assignments
 - Create custom reporting functionality
 
-### Prebuilt subgraphs
-
-Some subgraphs (e.g., System Subgraph, Drive Subgraph) already exist.
-To integrate with them, register them via the Reactor API.
 
 ### Future enhancements
 
package/docs/academy/02-MasteryTrack/04-WorkWithData/05-RelationalDbProcessor.md
ADDED

@@ -0,0 +1,663 @@
# Relational database processor

In this chapter, we will implement a **Todo-List** relational database processor. This processor receives processed operations from the reactor and can use the `prevState`, `resultingState`, or data from the operations themselves to populate a database.

**What is a Relational Database Processor?**

A relational database processor is a specialized component that listens to document changes in your Powerhouse application and transforms that data into a traditional relational database format (like PostgreSQL, MySQL, or SQLite). This is incredibly useful for:

- **Analytics and Reporting**: Running complex SQL queries on your document data
- **Integration**: Connecting with existing business intelligence tools

## Generate the Processor

To generate a relational database processor, run the following command:

```bash
ph generate --processor todo-indexer --processor-type relationalDb --document-types powerhouse/todolist
```

**Breaking down this command:**
- `--processor todo-indexer`: Creates a processor with the name "todo-indexer"
- `--processor-type relationalDb`: Specifies we want a relational database processor (vs other types like analytics or webhook processors)
- `--document-types powerhouse/todolist`: Tells the processor to only listen for changes to documents of type "powerhouse/todolist"

This command creates a processor named `todo-indexer` of type `relational database` that listens for changes from documents of type `powerhouse/todolist`.

**What gets generated:**
- A processor class file (`processors/todo-indexer/index.ts`)
- A database migration file (`processors/todo-indexer/migrations.ts`)
- A factory file for configuration (`processors/todo-indexer/factory.ts`)
- A schema file for TypeScript types (`processors/todo-indexer/schema.ts`)

## Define Your Database Schema

Next, define your database schema in the `processors/todo-indexer/migrations.ts` file.

**Understanding Database Migrations**

Migrations are version-controlled database changes that ensure your database schema evolves safely over time. They contain:
- **`up()` function**: Creates or modifies database structures when the processor starts
- **`down()` function**: Safely removes changes when the processor is removed

This approach ensures your database schema stays in sync across different environments (development, staging, production).

The migration file contains `up` and `down` functions that are called when the processor is added or removed, respectively.

In the `migrations.ts` file you'll find an example of the todo table's default schema:

```ts
import { type IRelationalDb } from "document-drive/processors/types";

export async function up(db: IRelationalDb<any>): Promise<void> {
  // Create table - this runs when the processor starts
  await db.schema
    .createTable("todo")                            // Creates a new table named "todo"
    .addColumn("task", "varchar(255)")              // Text column for the task description (max 255 characters)
    .addColumn("status", "boolean")                 // Boolean column for completion status (true/false)
    .addPrimaryKeyConstraint("todo_pkey", ["task"]) // Makes "task" the primary key (unique identifier)
    .ifNotExists()                                  // Only create if table doesn't already exist
    .execute();                                     // Execute the SQL command
}

export async function down(db: IRelationalDb<any>): Promise<void> {
  // Drop table - this runs when the processor is removed
  await db.schema.dropTable("todo").execute();
}
```

**Design Considerations:**
- We're using `task` as the primary key, which means each task description must be unique
- The `varchar(255)` limit ensures reasonable memory usage
- The `boolean` status makes it easy to filter completed vs. incomplete tasks
- Consider adding timestamps (`created_at`, `updated_at`) for audit trails in production applications

## Generate Database Types

After defining your database schema, generate TypeScript types for type-safe queries and better IDE support:

```bash
ph generate --migration-file processors/todo-indexer/migrations.ts
```

**Why Generate Types?**

TypeScript types provide several benefits:
- **Type Safety**: Catch errors at compile time instead of runtime
- **IDE Support**: Get autocomplete and IntelliSense for your database queries
- **Documentation**: Types serve as living documentation of your database structure
- **Refactoring**: Safe renaming and restructuring of database fields

Check your `processors/todo-indexer/schema.ts` file after generation - it will contain the TypeScript types for your database schema.

**Example of generated types:**
```ts
// This is what gets auto-generated based on your migration
export interface Database {
  todo: {
    task: string;
    status: boolean;
  };
}
```
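To see how the generated interface pays off, here is a small standalone sketch. The `Database` interface is copied from the generated example above; the rows are made-up data standing in for query results, since in a real project the rows would come from the query builder:

```typescript
// Hand-copied from the generated example above; in a real project you would
// import this from processors/todo-indexer/schema.ts instead.
interface Database {
  todo: {
    task: string;
    status: boolean;
  };
}

// A row type derived from the table definition
type TodoRow = Database["todo"];

// Made-up rows standing in for query results
const rows: TodoRow[] = [
  { task: "write migration", status: true },
  { task: "generate types", status: false },
];

// The compiler now catches typos like `row.stattus` at build time
const pending = rows.filter((row) => !row.status).map((row) => row.task);
console.log(pending); // → ["generate types"]
```

Because every table and column is described in one interface, renaming a column in a later migration surfaces every affected query as a compile error.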

## Configure the Processor Filter

This gives you the opportunity to configure the processor filter in `processors/todo-indexer/factory.ts`:

**Understanding Processor Filters**

Filters determine which document changes your processor will respond to. This is crucial for performance and functionality:
- **Performance**: Only process relevant changes to avoid unnecessary work
- **Isolation**: Different processors can handle different document types
- **Scalability**: Distribute processing load across multiple processors

```ts
import {
  type ProcessorRecord,
  type IProcessorHostModule,
} from "document-drive/processors/types";
import { type RelationalDbProcessorFilter } from "document-drive/processors/relational";
import { TodoIndexerProcessor } from "./index.js";

export const todoIndexerProcessorFactory =
  (module: IProcessorHostModule) =>
  async (driveId: string): Promise<ProcessorRecord[]> => {
    // Create a namespace for the processor and the provided drive id
    // Namespaces prevent data collisions between different drives
    const namespace = TodoIndexerProcessor.getNamespace(driveId);

    // Create a namespaced db for the processor
    // This ensures each drive gets its own isolated database tables
    const store =
      await module.relationalDb.createNamespace<TodoIndexerProcessor>(
        namespace,
      );

    // Create a filter for the processor
    // This determines which document changes trigger the processor
    const filter: RelationalDbProcessorFilter = {
      branch: ["main"],                      // Only process changes from the "main" branch
      documentId: ["*"],                     // Process changes from any document ID (* = wildcard)
      documentType: ["powerhouse/todolist"], // Only process todolist documents
      scope: ["global"],                     // Process global changes (not user-specific)
    };

    // Create the processor instance
    const processor = new TodoIndexerProcessor(namespace, filter, store);
    return [
      {
        processor,
        filter,
      },
    ];
  };
```

**Filter Options Explained:**
- **`branch`**: Which document branches to monitor (usually "main" for production data)
- **`documentId`**: Specific document IDs to watch ("*" means all documents)
- **`documentType`**: Document types to process (ensures type safety)
- **`scope`**: Whether to process global changes or user-specific ones
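To make the wildcard semantics concrete, here is an illustrative matcher, not the reactor's actual implementation: the field names mirror the filter above, but the matching logic is an assumption for demonstration only:

```typescript
// Illustrative only: a simplified sketch of how a reactor might decide
// whether a strand matches a processor filter.
interface Filter {
  branch: string[];
  documentId: string[];
  documentType: string[];
  scope: string[];
}

// "*" acts as a wildcard; otherwise the value must be listed explicitly
const matches = (allowed: string[], value: string): boolean =>
  allowed.includes("*") || allowed.includes(value);

function strandMatchesFilter(
  filter: Filter,
  strand: { branch: string; documentId: string; documentType: string; scope: string },
): boolean {
  return (
    matches(filter.branch, strand.branch) &&
    matches(filter.documentId, strand.documentId) &&
    matches(filter.documentType, strand.documentType) &&
    matches(filter.scope, strand.scope)
  );
}

const filter: Filter = {
  branch: ["main"],
  documentId: ["*"],
  documentType: ["powerhouse/todolist"],
  scope: ["global"],
};

// A todolist change on main matches; a different document type does not
console.log(strandMatchesFilter(filter, { branch: "main", documentId: "abc", documentType: "powerhouse/todolist", scope: "global" })); // → true
console.log(strandMatchesFilter(filter, { branch: "main", documentId: "abc", documentType: "powerhouse/invoice", scope: "global" })); // → false
```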

## Implement the Processor Logic

Now implement the actual processor logic in `processors/todo-indexer/index.ts` using the code below:

**Understanding the Processor Lifecycle**

The processor has several key methods:
- **`initAndUpgrade()`**: Runs once when the processor starts (perfect for running migrations)
- **`onStrands()`**: Runs every time relevant document changes occur (this is where the main logic goes)
- **`onDisconnect()`**: Cleanup when the processor shuts down

**What are "Strands"?**
Strands represent a batch of operations that happened to documents. Each strand contains:
- Document ID and metadata
- Array of operations (create, update, delete, etc.)
- Previous and resulting document states

```ts
import { type IRelationalDb } from "document-drive/processors/types";
import { RelationalDbProcessor } from "document-drive/processors/relational";
import { type InternalTransmitterUpdate } from "document-drive/server/listener/transmitter/internal";
import type { ToDoListDocument } from "../../document-models/to-do-list/index.js";

import { up } from "./migrations.js";
import { type DB } from "./schema.js";

// Define the document type this processor handles
type DocumentType = ToDoListDocument;

export class TodoIndexerProcessor extends RelationalDbProcessor<DB> {
  // Generate a unique namespace for this processor based on the drive ID
  // This prevents data conflicts between different drives
  static override getNamespace(driveId: string): string {
    // Default namespace: `${this.name}_${driveId.replaceAll("-", "_")}`
    return super.getNamespace(driveId);
  }

  // Initialize the processor and run database migrations
  // This method runs once when the processor starts up
  override async initAndUpgrade(): Promise<void> {
    await up(this.relationalDb); // Run the database migration to create tables
  }

  // Main processing logic - handles incoming document changes
  // This method is called whenever there are new document operations
  override async onStrands(
    strands: InternalTransmitterUpdate<DocumentType>[],
  ): Promise<void> {
    // Early return if no changes to process
    if (strands.length === 0) {
      return;
    }

    // Process each strand (batch of changes) individually
    for (const strand of strands) {
      // Skip strands with no operations
      if (strand.operations.length === 0) {
        continue;
      }

      // Process each operation within the strand
      for (const operation of strand.operations) {
        // Insert a record for each operation into the database
        // This is a simple example - you might want more sophisticated logic
        await this.relationalDb
          .insertInto("todo")
          .values({
            // Create a unique task identifier combining document ID, operation index, and type
            task: `${strand.documentId}-${operation.index}: ${operation.type}`,
            status: true, // Default to completed status
          })
          // Handle conflicts by doing nothing if the task already exists
          // This prevents duplicate entries if operations are replayed
          .onConflict((oc) => oc.column("task").doNothing())
          .execute(); // Execute the database query
      }
    }
  }

  // Cleanup method called when the processor disconnects
  // Use this for closing connections, clearing caches, etc.
  async onDisconnect() {
    // Add any cleanup logic here
    // For example: await this.relationalDb.destroy();
  }
}
```
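The `task` key built in `onStrands` combines the document ID, operation index, and operation type, and the `onConflict(...).doNothing()` clause makes inserts safe to replay. A standalone sketch of that keying and dedup behaviour, using a `Map` in place of the real table's primary-key constraint:

```typescript
// Standalone sketch: mimics the task key from onStrands above and the
// "insert or ignore" behaviour of onConflict((oc) => oc.column("task").doNothing()).
interface Operation {
  index: number;
  type: string;
}

function taskKey(documentId: string, op: Operation): string {
  return `${documentId}-${op.index}: ${op.type}`;
}

// A Map keyed by task stands in for the primary-key constraint on the table
const table = new Map<string, { task: string; status: boolean }>();

function insertIgnoringConflicts(documentId: string, op: Operation): void {
  const task = taskKey(documentId, op);
  if (!table.has(task)) {
    table.set(task, { task, status: true });
  }
}

const docId = "72b73d31-4874-4b71-8cc3-289ed4cfbe2b";
insertIgnoringConflicts(docId, { index: 0, type: "ADD_TODO_ITEM" });
insertIgnoringConflicts(docId, { index: 0, type: "ADD_TODO_ITEM" }); // replayed: ignored
insertIgnoringConflicts(docId, { index: 1, type: "UPDATE_TODO_ITEM" });

console.log(table.size); // → 2
```

Replaying the same operation leaves the table unchanged, which is exactly what keeps the index consistent when the reactor redelivers strands.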

## Expose Data Through a Subgraph

### Generate a Subgraph

Generate a new subgraph to expose your processor data:

```bash
ph generate --subgraph todo
```

**What is a Subgraph?**

A subgraph is a GraphQL schema that exposes your processed data to clients. It:
- Provides a standardized API for accessing your relational database data
- Integrates with the Powerhouse supergraph for unified data access
- Supports both queries (reading data) and mutations (modifying data)
- Can join data across multiple processors and document types

### Configure the Subgraph

Open `./subgraphs/todo/index.ts` and configure the resolvers:

```ts
import { Subgraph } from "@powerhousedao/reactor-api";
import { gql } from "graphql-tag";
import { TodoIndexerProcessor } from "../../processors/todo-indexer/index.js";

export class TodoSubgraph extends Subgraph {
  // Human-readable name for this subgraph
  name = "Todos";

  // GraphQL resolvers - functions that fetch data for each field
  resolvers = {
    Query: {
      todos: {
        // Resolver function for the "todos" query
        // Arguments: parent object, query arguments, context, GraphQL info
        resolve: async (_: any, args: { driveId: string }) => {
          // Query the database using the processor's static query method
          // This gives us access to the namespaced database for the specific drive
          const todos = await TodoIndexerProcessor.query(args.driveId, this.relationalDb)
            .selectFrom("todo") // Select from the "todo" table
            .selectAll()        // Get all columns
            .execute();         // Execute the query

          // Transform database results to match GraphQL schema
          return todos.map((todo) => ({
            task: todo.task,     // Map database "task" column to GraphQL "task" field
            status: todo.status, // Map database "status" column to GraphQL "status" field
          }));
        },
      },
    },
  };

  // GraphQL schema definition using GraphQL Schema Definition Language (SDL)
  typeDefs = gql`
    # Define the structure of a todo item as returned by GraphQL
    type ToDoListEntry {
      task: String!    # The task description (! means required/non-null)
      status: Boolean! # The completion status (true = done, false = pending)
    }

    # Define available queries
    type Query {
      todos(driveId: ID!): [ToDoListEntry] # Get array of todos for a specific drive
    }
  `;

  // Cleanup method called when the subgraph disconnects
  async onDisconnect() {
    // Add any cleanup logic here if needed
  }
}
```

## Query the data via the supergraph

**Understanding the Supergraph**

The Powerhouse supergraph is a unified GraphQL endpoint that combines:
- **Document Models**: Direct access to your Powerhouse documents
- **Subgraphs**: Custom data views from your processors
- **Built-in APIs**: System functionality like authentication and drives

This unified approach means you can query document state AND processed data in a single request, which is perfect for building rich user interfaces.

The Powerhouse supergraph for any given remote drive or reactor can be found under `http://localhost:4001/graphql`. The gateway / supergraph available on `/graphql` combines all the subgraphs, except for the drive subgraph (which is accessible via `/d/:driveId`). To access the endpoint, start the reactor and navigate to the URL with `graphql` appended. The following commands explain how you can test & try the supergraph.

- Start the reactor:

```bash
ph reactor
```

- Open the GraphQL editor in your browser:

```
http://localhost:4001/graphql
```

The supergraph allows you to both query & mutate data from the same endpoint.
Read more about [subgraphs](/academy/MasteryTrack/WorkWithData/UsingSubgraphs)
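Once the reactor is running, the `todos` query from our subgraph can be exercised from any GraphQL client. As a sketch, this is the kind of request body a client could POST to `http://localhost:4001/graphql`; the query shape follows the subgraph's `typeDefs`, and the `driveId` value is an assumption for illustration:

```typescript
// The query string mirrors the subgraph's typeDefs above; "powerhouse" is an
// assumed drive name for illustration.
const query = `
  query Todos($driveId: ID!) {
    todos(driveId: $driveId) {
      task
      status
    }
  }
`;

const body = JSON.stringify({ query, variables: { driveId: "powerhouse" } });

// A client would send it with, e.g.:
// fetch("http://localhost:4001/graphql", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body,
// })
console.log(body.length > 0); // → true
```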
|
|
354
|
+
|
|
355
|
+
<details>
|
|
356
|
+
<summary>**Example: Complete Data Flow from Document Operations to Relational Database**</summary>
|
|
357
|
+
|
|
358
|
+
**Understanding the Complete Data Pipeline**
|
|
359
|
+
|
|
360
|
+
This comprehensive example demonstrates the **entire data flow** in a Powerhouse application:
|
|
361
|
+
1. **Storage Layer**: Create a drive (document storage container)
|
|
362
|
+
2. **Document Layer**: Create a todo document and add operations
|
|
363
|
+
3. **Processing Layer**: Watch the relational database processor automatically index changes
|
|
364
|
+
4. **API Layer**: Query both original document state AND processed relational data
|
|
365
|
+
5. **Analysis**: Compare the different data representations
|
|
366
|
+
|
|
367
|
+
---
|
|
368
|
+
|
|
369
|
+
### **Step 1: Create a Drive (Storage Container)**
|
|
370
|
+
|
|
371
|
+
**What's Happening**: Every document needs a "drive" - think of it as a folder or database that contains related documents. This is where your todo documents will live.
|
|
372
|
+
|
|
373
|
+
```graphql
|
|
374
|
+
mutation DriveCreation($name: String!) {
|
|
375
|
+
addDrive(name: $name) {
|
|
376
|
+
name
|
|
377
|
+
}
|
|
378
|
+
}
|
|
379
|
+
```
|
|
380
|
+
|
|
381
|
+
Variables:
|
|
382
|
+
```json
|
|
383
|
+
{
|
|
384
|
+
"driveId": "powerhouse",
|
|
385
|
+
"name": "tutorial"
|
|
386
|
+
}
|
|
387
|
+
```
|
|
388
|
+
|
|
389
|
+
💡 **Behind the Scenes**: This creates a new drive namespace. Your relational database processor will create isolated tables for this drive using the namespace pattern we defined earlier.
---

### **Step 2: Create a Todo Document**

**What's Happening**: Now we're creating an actual todo list document inside our drive. This uses the document model we built in previous chapters.

```graphql
mutation Mutation($driveId: String, $name: String) {
  ToDoList_createDocument(driveId: $driveId, name: $name)
}
```

Variables:
```json
{
  "driveId": "powerhouse",
  "name": "tutorial"
}
```

Result:
```json
{
  "data": {
    "ToDoList_createDocument": "72b73d31-4874-4b71-8cc3-289ed4cfbe2b"
  }
}
```

💡 **Key Insight**: The returned UUID (`72b73d31-4874-4b71-8cc3-289ed4cfbe2b`) is crucial - this is the document ID that will appear in our processor's database records, linking operations back to their source document. You will receive a different UUID.

---

### **Step 3: Add Todo Items (Generate Operations)**

**What's Happening**: Each time we add a todo item, we're creating a new **operation** in the document's history. Our relational database processor is listening for these operations in real time.

```graphql
mutation Mutation($driveId: String, $docId: PHID, $input: ToDoList_AddTodoItemInput) {
  ToDoList_addTodoItem(driveId: $driveId, docId: $docId, input: $input)
}
```

Variables:
```json
{
  "driveId": "powerhouse",
  "docId": "72b73d31-4874-4b71-8cc3-289ed4cfbe2b",
  "input": {
    "id": "1",
    "text": "complete mutation"
  }
}
```

Result:
```json
{
  "data": {
    "ToDoList_addTodoItem": 1
  }
}
```

💡 **What Happens Next**:
1. **Document Model**: Stores the operation and updates the document state
2. **Reactor**: Broadcasts the operation to all listening processors
3. **Our Processor**: Automatically receives the operation and creates a database record
4. **Database**: Now contains `"72b73d31-4874-4b71-8cc3-289ed4cfbe2b-0: ADD_TODO_ITEM"`

🔄 **Repeat this step 2-3 times** with different todo items to see multiple operations get processed. Each operation will have an incrementing index (0, 1, 2...).
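The records above follow a `{documentId}-{operationIndex}: {operationType}` pattern. A small helper (hypothetical, not part of the generated code) shows how such a record splits back into its parts:

```typescript
interface OperationRecord {
  documentId: string;
  index: number;
  type: string;
}

// Split a record like "…cfbe2b-0: ADD_TODO_ITEM" into its parts.
// The trailing "-<digits>: <TYPE>" is matched last, so the dashes
// inside the UUID stay part of documentId.
function parseOperationRecord(task: string): OperationRecord {
  const match = /^(.+)-(\d+): (\w+)$/.exec(task);
  if (!match) throw new Error(`Unrecognized record: ${task}`);
  return { documentId: match[1], index: Number(match[2]), type: match[3] };
}
```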
---

### **Step 4: Query Both Data Sources**

**The Power of Dual Data Access**: Now we can query BOTH the original document state AND our processed relational data in a single GraphQL request. This demonstrates the flexibility of the Powerhouse architecture.

```graphql
query Query($driveId: ID!) {
  todos(driveId: $driveId) {
    task
    status
  }
  ToDoList {
    getDocuments {
      state {
        items {
          text
        }
      }
    }
  }
}
```

Variables:
```json
{
  "driveId": "powerhouse"
}
```

Response:
```json
{
  "data": {
    "todos": [
      {
        "task": "72b73d31-4874-4b71-8cc3-289ed4cfbe2b-0: ADD_TODO_ITEM",
        "status": true
      },
      {
        "task": "72b73d31-4874-4b71-8cc3-289ed4cfbe2b-1: ADD_TODO_ITEM",
        "status": true
      },
      {
        "task": "72b73d31-4874-4b71-8cc3-289ed4cfbe2b-2: ADD_TODO_ITEM",
        "status": true
      }
    ],
    "ToDoList": {
      "getDocuments": [
        {
          "state": {
            "items": [
              {
                "text": "complete mutation"
              },
              {
                "text": "add another todo"
              },
              {
                "text": "Now check the data"
              }
            ]
          }
        }
      ]
    }
  }
}
```
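A quick sanity check on this response: since only `ADD_TODO_ITEM` operations have run so far (an assumption that breaks once items are deleted), the operation rows and the document items should line up one to one:

```typescript
// The combined response above, trimmed to the fields we need.
const response = {
  todos: [
    { task: "72b73d31-4874-4b71-8cc3-289ed4cfbe2b-0: ADD_TODO_ITEM", status: true },
    { task: "72b73d31-4874-4b71-8cc3-289ed4cfbe2b-1: ADD_TODO_ITEM", status: true },
    { task: "72b73d31-4874-4b71-8cc3-289ed4cfbe2b-2: ADD_TODO_ITEM", status: true },
  ],
  ToDoList: {
    getDocuments: [
      {
        state: {
          items: [
            { text: "complete mutation" },
            { text: "add another todo" },
            { text: "Now check the data" },
          ],
        },
      },
    ],
  },
};

// Count ADD_TODO_ITEM operation rows and document items.
const addOps = response.todos.filter((t) => t.task.endsWith("ADD_TODO_ITEM")).length;
const items = response.ToDoList.getDocuments[0].state.items.length;
```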
---

### **🔍 Data Analysis: Understanding What You're Seeing**

**Document Model Data (`ToDoList.getDocuments`):**
- ✅ **Current State**: Shows the final todo items as they exist in the document
- ✅ **User-Friendly**: Displays actual todo text like "complete mutation"
- ✅ **Real-Time**: Always reflects the latest document state
- ❌ **Limited History**: Doesn't show how the document changed over time

**Processed Relational Data (`todos`):**
- ✅ **Operation History**: Shows each individual operation that occurred
- ✅ **Audit Trail**: You can see the sequence (0, 1, 2) of operations
- ✅ **Analytics Ready**: Perfect for counting operations and tracking changes
- ✅ **Integration Friendly**: Standard SQL database that other tools can access
- ❌ **Less User-Friendly**: Shows operation metadata rather than final state
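To make the "analytics ready" point concrete, here is a small sketch (plain TypeScript, using rows shaped like the `todos` query result above) that counts operations per operation type:

```typescript
interface TodoRow {
  task: string;    // "{documentId}-{index}: {operationType}"
  status: boolean;
}

// Count how many times each operation type appears in the processed
// rows. The row shape matches the `todos` query in this tutorial.
function countOperations(rows: TodoRow[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const row of rows) {
    const type = row.task.split(": ")[1] ?? "UNKNOWN";
    counts[type] = (counts[type] ?? 0) + 1;
  }
  return counts;
}
```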
---

**Key Differences:**
- **Document Query**: Gets the current state directly from the document model
- **Subgraph Query**: Gets processed/transformed data from your relational database
- **Combined Power**: You can query both in a single GraphQL request for rich UIs

This demonstrates how the supergraph provides a unified interface to both your document models and your custom subgraphs, allowing you to query and mutate data from the same endpoint.
</details>

## Use the Data in Frontend Applications

**Integration Options**

Your processed data can now be consumed by any GraphQL client:
- **React**: Using Apollo Client, urql, or Relay
- **Next.js**: API routes, getServerSideProps, or the app router
- **Mobile Apps**: React Native, Flutter, or native iOS/Android
- **Desktop Apps**: Electron, Tauri, or other frameworks
- **Third-party Tools**: Any tool that supports GraphQL APIs

### React Hooks

**Coming Soon**: This section will cover how to use React hooks to consume your subgraph data in React applications. For now, you can use standard GraphQL clients like Apollo or urql to query your supergraph endpoint.
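Until then, a plain `fetch` call is enough. The sketch below builds the request body for the `todos` query as a pure function; the endpoint URL is an assumption matching the local supergraph used elsewhere on this page and may differ in your setup:

```typescript
// Build the GraphQL request body for the `todos` query shown earlier.
// Kept as a pure function so it is easy to test in isolation.
function buildTodosRequest(driveId: string) {
  return {
    query: `
      query Todos($driveId: ID!) {
        todos(driveId: $driveId) {
          task
          status
        }
      }
    `,
    variables: { driveId },
  };
}

// Hypothetical usage against a local supergraph endpoint:
async function fetchTodos(driveId = "powerhouse") {
  const res = await fetch("http://localhost:4001/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildTodosRequest(driveId)),
  });
  const { data } = await res.json();
  return data.todos;
}
```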
### Next.js API Route Example

**Why API Routes?**

Next.js API routes are useful when you need to:
- Add server-side authentication or authorization
- Transform data before sending it to the client
- Implement caching or rate limiting
- Proxy requests to avoid CORS issues
- Add logging or monitoring

```ts
// pages/api/todos.ts
import { type NextApiRequest, type NextApiResponse } from "next";

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  // Only allow GET requests for this endpoint
  if (req.method !== "GET") {
    return res.status(405).json({ message: "Method not allowed" });
  }

  // Extract driveId from query parameters, default to "powerhouse"
  const { driveId = "powerhouse" } = req.query;

  try {
    // Query your subgraph or database directly
    // In production, you might want to add authentication headers here
    const response = await fetch("http://localhost:4001/graphql", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        query: `
          query GetTodoList($driveId: String) {
            todoList(driveId: $driveId) {
              id
              name
              completed
              createdAt
              updatedAt
            }
          }
        `,
        variables: { driveId },
      }),
    });

    const data = await response.json();

    // Return the todoList array from the GraphQL response
    res.status(200).json(data.data.todoList);
  } catch (error) {
    // Log the error for debugging (in production, use proper logging)
    console.error("Failed to fetch todos:", error);

    // Return a generic error message to the client
    res.status(500).json({ error: "Failed to fetch todos" });
  }
}
```
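On the client side, that route can be called like any other REST endpoint. A tiny helper (hypothetical, not part of the generated code) that builds the URL and loads the data:

```typescript
// Build the URL for the /api/todos route above, encoding the drive id
// so ids with spaces or special characters survive the query string.
function todosUrl(driveId = "powerhouse"): string {
  return `/api/todos?driveId=${encodeURIComponent(driveId)}`;
}

// Hypothetical client-side usage:
async function loadTodos(driveId?: string) {
  const res = await fetch(todosUrl(driveId));
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}
```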
## Summary

You've successfully created a relational database processor that:

1. ✅ **Listens for document changes** - Automatically detects when todo documents are modified
2. ✅ **Stores data in a structured database** - Transforms document operations into relational data
3. ✅ **Provides type-safe database operations** - Uses TypeScript for compile-time safety
4. ✅ **Exposes data through GraphQL** - Makes processed data available via a unified API
5. ✅ **Can be consumed by frontend applications** - Ready for integration with any GraphQL client

This processor will automatically sync your document changes to the relational database, making the data available for complex queries, reporting, and integration with other systems.

**Real-World Applications:**

This pattern is commonly used for:
- **Analytics dashboards** showing document usage patterns
- **Business intelligence** reports on document data
- **Integration** with existing enterprise systems
- **Search and filtering** with complex SQL queries
- **Data archival** and compliance requirements
package/docs/academy/02-MasteryTrack/04-WorkWithData/07-drive-analytics.md
@@ -1,4 +1,4 @@
-# Drive
+# Drive analytics
 
 Drive Analytics provides automated monitoring and insights into document drive operations within Powerhouse applications. This system tracks user interactions, document modifications, and drive activity to help developers understand usage patterns and system performance.
 
package/docs/academy/04-APIReferences/04-RelationalDatabase.md
@@ -173,28 +173,28 @@ The returned hook accepts:
 
 </details>
 
-### 2.
+### 2. useRelationalDb()
 
 <details>
-<summary>`
+<summary>`useRelationalDb<Schema>()`: Access the enhanced database instance directly</summary>
 
 ### Hook Name and Signature
 
 ```typescript
-function
+function useRelationalDb<Schema>(): IRelationalDb<Schema>
 ```
 
 ### Description
 
-Provides direct access to the enhanced Kysely database instance with live query capabilities. Use this when you need to perform
+Provides direct access to the enhanced Kysely database instance with live query capabilities. Use this when you need to perform relational database operations outside of the typical query patterns.
 
 ### Usage Example
 
 ```typescript
-import {
+import { useRelationalDb } from '@powerhousedao/reactor-browser/relational';
 
 function DatabaseOperations() {
-  const { db, isLoading, error } =
+  const { db, isLoading, error } = useRelationalDb<MyDatabase>();
 
   const createUser = async (name: string, email: string) => {
     if (!db) return;
@@ -241,19 +241,19 @@ function DatabaseOperations() {
 ### Related Hooks
 
 - [`createProcessorQuery`](#1-createprocessorquery) - For optimized queries
-- [`
+- [`useRelationalQuery`](#3-userelationalquery) - For manual query control
 
 </details>
 
-### 3.
+### 3. useRelationalQuery()
 
 <details>
-<summary>`
+<summary>`useRelationalQuery<Schema, T, TParams>()`: Lower-level hook for manual query control</summary>
 
 ### Hook Name and Signature
 
 ```typescript
-function
+function useRelationalQuery<Schema, T, TParams>(
   queryCallback: (db: EnhancedKysely<Schema>, parameters?: TParams) => QueryCallbackReturnType,
   parameters?: TParams
 ): QueryResult<T>
@@ -266,10 +266,10 @@ Lower-level hook for creating live queries with manual control over the query callback:
 ### Usage Example
 
 ```typescript
-import {
+import { useRelationalQuery } from '@powerhousedao/reactor-browser/relational';
 
 function UserCount() {
-  const { result, isLoading, error } =
+  const { result, isLoading, error } = useRelationalQuery<MyDatabase, { count: number }>(
     (db) => {
       return db
         .selectFrom('users')
@@ -309,7 +309,7 @@ function UserCount() {
 ### Related Hooks
 
 - [`createProcessorQuery`](#1-createprocessorquery) - Recommended higher-level API
-- [`
+- [`useRelationalDb`](#2-userelationaldb) - For direct database access
 
 </details>
 
package/package.json
CHANGED
package/docs/academy/02-MasteryTrack/04-WorkWithData/07-OperationalDbProcessorTutorial/01-TodoList-example.md
DELETED
@@ -1,383 +0,0 @@
-# Build a Todo-List Processor
-
-## What You'll Learn
-
-In this tutorial, you'll learn how to build a **relational database processor** that listens to changes in Powerhouse TodoList documents and automatically maintains a synchronized relational database. This is useful for creating queryable data stores, generating reports, or integrating with existing database-driven applications.
-
-## What is a Processor?
-
-A **processor** in Powerhouse is a background service that automatically responds to document changes. Think of it as a "listener" that watches for specific document operations (like creating, updating, or deleting todos) and then performs custom logic - in this case, updating a relational database.
-
-**Key Benefits:**
-- **Real-time synchronization**: Your database stays automatically up-to-date with document changes
-- **Query performance**: Relational databases excel at complex queries and joins
-
-## Tutorial Steps
-
-1. **Generate the processor** - Create the basic processor structure
-2. **Define your database schema** - Design the tables to store your data
-3. **Generate TypeScript types** - Get type safety for database operations
-4. **Configure the filter** - Specify which documents to listen to
-5. **Customize the processor logic** - Implement how document changes update the database
-6. **Use the data via Subgraph** - Query your processed data through GraphQL
-
----
-
-## Step 1: Generate the Processor
-
-First, we'll create the processor using the Powerhouse CLI. This command scaffolds all the necessary files and configuration.
-
-```bash
-ph generate --processor todo-processor --processor-type relational-db --document-types powerhouse/todolist
-```
-
-**Breaking down this command:**
-- `--processor todo-processor`: Names your processor "todo-processor"
-- `--processor-type relational-db`: Creates a processor that works with SQL databases
-- `--document-types powerhouse/todolist`: Tells the processor to listen for changes in TodoList documents
-
-**What gets created:**
-- `processors/todo-processor/` directory with all necessary files
-- Migration files for database schema management
-- Factory function for processor instantiation
-- Base processor class ready for customization
-
----
-
-## Step 2: Define Your Database Schema
-
-Next, we need to define what our database tables will look like. This happens in the **migration file**, which contains instructions for creating (and optionally destroying) database tables.
-
-**File location:** `processors/todo-processor/migration.ts`
-
-### Understanding Migrations
-
-Migrations are scripts that modify your database structure. They have two functions:
-- **`up()`**: Runs when the processor is added - creates tables and indexes
-- **`down()`**: Runs when the processor is removed - cleans up by dropping tables
-
-Here's our TodoList migration:
-
-```ts
-import { type IBaseRelationalDb } from "document-drive/processors/types"
-
-export async function up(db: IBaseRelationalDb): Promise<void> {
-  // Create table
-  await db.schema
-    .createTable("todo")                            // Table name: "todo"
-    .addColumn("name", "varchar(255)")              // Todo item text (up to 255 characters)
-    .addColumn("completed", "boolean")              // Completion status (true/false)
-    .addPrimaryKeyConstraint("todo_pkey", ["name"]) // Primary key on 'name' column
-    .ifNotExists()                                  // Only create if table doesn't exist
-    .execute();                                     // Execute the SQL command
-
-  // Optional: Log all tables for debugging
-  const tables = await db.introspection.getTables();
-  console.log(tables);
-}
-
-export async function down(db: IBaseRelationalDb): Promise<void> {
-  // Clean up: drop the table when processor is removed
-  await db.schema.dropTable("todo").execute();
-}
-```
-
-**Design decisions explained:**
-- **`name` as primary key**: Assumes todo names are unique (you might want to use an auto-incrementing ID instead)
-- **Simple boolean for completion**: Easy to query for completed vs. incomplete todos
-- **`ifNotExists()`**: Prevents errors if the processor restarts
-
----
-
-## Step 3: Generate TypeScript Types
-
-After defining your database schema, generate TypeScript types for type-safe database operations. This provides IDE autocomplete and catches errors at compile time.
-
-```bash
-ph generate --migration-file processors/todo-indexer/migrations.js --schema-file processors/todo-indexer/schema.ts
-```
-
-**What this does:**
-- Analyzes your migration file
-- Generates TypeScript interfaces matching your database tables
-- Creates a `schema.ts` file with type definitions
-
-**Result:** You'll get types like:
-```ts
-interface Todo {
-  name: string;
-  completed: boolean;
-}
-```
-
-These types will be available in `processors/todo-processor/schema.ts` and ensure your database queries are type-safe.
-
----
-
-## Step 4: Configure the Filter
-
-The **filter** determines which document changes your processor should respond to. This is configured in the factory function.
-
-**File location:** `processors/todo-processor/factory.ts`
-
-```ts
-export const todoProcessorProcessorFactory =
-  (module: IProcessorHostModule) =>
-  async (driveId: string): Promise<ProcessorRecord[]> => {
-    // Create a namespace for the processor and the provided drive id
-    const namespace = TodoProcessorProcessor.getNamespace(driveId);
-
-    // Create a filter for the processor
-    const filter: RelationalDbProcessorFilter = {
-      branch: ["main"],                       // Only listen to main branch changes
-      documentId: ["*"],                      // Listen to ALL documents (wildcard)
-      documentType: ["powerhouse/todo-list"], // Only TodoList document types
-      scope: ["global"],                      // Global scope (vs. user-specific)
-    };
-
-    // Create a namespaced store for the processor
-    const store = await createNamespacedDb<TodoProcessorProcessor>(
-      namespace,
-      module.relationalStore,
-    );
-
-    // Create the processor
-    const processor = new TodoProcessorProcessor(namespace, filter, store);
-    return [
-      {
-        processor,
-        filter,
-      },
-    ];
-  };
-```
-
-**Filter options explained:**
-- **`branch`**: Which document branches to monitor (usually "main" for production data)
-- **`documentId`**: Specific document IDs or "*" for all documents
-- **`documentType`**: The document model type - must match exactly
-- **`scope`**: "global" for shared data, or specific scopes for user/organization data
-
-**Namespace concept**: Each processor gets its own database namespace to avoid conflicts when multiple processors or drives exist.
-
----
-
-## Step 5: Implement the Processor Logic
-
-Now for the core functionality - how your processor responds to document changes. This is where you define what happens when TodoList documents are created, updated, or deleted.
-
-**File location:** `processors/todo-processor/index.ts`
-
-```ts
-type DocumentType = ToDoListDocument;
-
-export class TodoIndexerProcessor extends RelationalDbProcessor<DB> {
-
-  static override getNamespace(driveId: string): string {
-    // Default namespace: `${this.name}_${driveId.replaceAll("-", "_")}`
-    // Each drive gets its own database tables to prevent data mixing
-    return super.getNamespace(driveId);
-  }
-
-  override async initAndUpgrade(): Promise<void> {
-    // Run database migrations when processor starts
-    // This creates your tables if they don't exist
-    await up(this.relationalDb as IBaseRelationalDb);
-  }
-
-  override async onStrands(
-    strands: InternalTransmitterUpdate<DocumentType>[],
-  ): Promise<void> {
-    // Early exit if no data to process
-    if (strands.length === 0) {
-      return;
-    }
-
-    // Process each strand (a strand represents changes to one document)
-    for (const strand of strands) {
-      if (strand.operations.length === 0) {
-        continue;
-      }
-
-      // Process each operation in the strand
-      for (const operation of strand.operations) {
-        // Simple example: Insert a new todo for every operation
-        // In a real implementation, you'd check the operation type and data
-        await this.relationalDb
-          .insertInto("todo")
-          .values({
-            task: strand.documentId, // Use document ID as task name
-            status: true,            // Default to completed
-          })
-          .execute();
-      }
-    }
-  }
-
-  async onDisconnect() {
-    // Cleanup logic when processor shuts down
-    // Could include closing connections, saving state, etc.
-  }
-}
-```
-
-### Understanding Strands and Operations
-
-**Strands** represent a sequence of changes to a single document. Each strand contains:
-- `documentId`: Which document changed
-- `operations`: Array of operations (add todo, complete todo, etc.)
-- `state`: The current document state
-
-**Operations** are the actual changes made to the document:
-- `ADD_TODO`: New todo item created
-- `TOGGLE_TODO`: Todo completion status changed
-- `DELETE_TODO`: Todo item removed
-
-### Improving the Example
-
-The provided example is simplified. In production, you'd want to:
-
-1. **Parse operation types:**
-```ts
-switch (operation.type) {
-  case 'ADD_TODO':
-    // Insert new todo
-    break;
-  case 'CHECK_TODO':
-    // Update completion status
-    break;
-  case 'DELETE_TODO':
-    // Remove todo from database
-    break;
-}
-```
-
-2. **Handle errors gracefully:**
-```ts
-try {
-  await this.relationalDb.insertInto("todo").values(values).execute();
-} catch (error) {
-  console.error('Failed to insert todo:', error);
-  // Could implement retry logic, dead letter queue, etc.
-}
-```
-
-3. **Use transactions for consistency:**
-```ts
-await this.relationalDb.transaction().execute(async (trx) => {
-  // Multiple operations that should all succeed or all fail
-});
-```
-
----
-
-## Step 6: Query Data Through a Subgraph
-
-Once your processor is storing data in the database, you can expose it via GraphQL using a **subgraph**. This creates a clean API for frontend applications to query the processed data.
-
-### Generate a Subgraph
-
-Create a new GraphQL subgraph that can query your processor's database:
-
-```bash
-ph generate --subgraph <subgraph-name>
-```
-
-**What this creates:**
-- GraphQL schema definitions
-- Resolver functions that fetch data
-- Integration with your processor's database
-
-### Configure the Subgraph
-
-**File location:** `./subgraphs/<subgraph-name>/index.ts`
-
-```ts
-resolvers = {
-  Query: {
-    todoList: {
-      resolve: async (parent, args, context, info) => {
-        // Query the processor's database using the generated types
-        const todoList = await TodoProcessor.query(
-          args.driveId ?? "powerhouse", // Default drive if none specified
-          this.relationalDb             // Database connection from processor
-        )
-          .selectFrom("todo") // FROM todo table
-          .selectAll()        // SELECT * (all columns)
-          .execute();         // Execute and return results
-        return todoList;
-      },
-    },
-  },
-};
-
-// GraphQL schema definition
-typeDefs = gql`
-  type Query {
-    type Todo {
-      name: String!       # Todo text (required)
-      completed: Boolean! # Completion status (required)
-    }
-
-    todoList(driveId: String): [Todo!]! # Query to get all todos for a drive
-  }
-`;
-```
-
-### Understanding the GraphQL Integration
-
-**Resolvers** are functions that fetch data for each GraphQL field:
-- `parent`: Data from parent resolver (unused here)
-- `args`: Arguments passed to the query (like `driveId`)
-- `context`: Shared context (database connections, user info, etc.)
-- `info`: Metadata about the GraphQL query
-
-**Type Definitions** describe your GraphQL schema:
-- `type Todo`: Defines the structure of a todo item
-- `todoList(driveId: String): [Todo!]!`: A query that returns an array of todos
-- `!` means the field is required/non-null
-
-### Querying Your Data
-
-Once deployed, frontend applications can query your data like this:
-
-```graphql
-query GetTodos($driveId: String) {
-  todoList(driveId: $driveId) {
-    name
-    completed
-  }
-}
-```
-
-This would return:
-```json
-{
-  "data": {
-    "todoList": [
-      {"name": "Buy groceries", "completed": false},
-      {"name": "Write tutorial", "completed": true}
-    ]
-  }
-}
-```
-
----
-
-## Next Steps and Best Practices
-
-### Testing Your Processor
-
-1. **Unit tests**: Test individual functions with mock data
-2. **Integration tests**: Test the full processor with real document operations
-
-### Production Considerations
-
-1. **Error handling**: Implement robust error handling and logging
-2. **Monitoring**: Add metrics to track processor performance
-3. **Scaling**: Consider database indexing and query optimization
-4. **Security**: Validate input data and implement proper access controls
-
-This processor tutorial demonstrates the power of Powerhouse's event-driven architecture, where document changes automatically flow through to specialized data stores optimized for different use cases.
-
-
File without changes