@powerhousedao/academy 4.1.0-dev.7 → 4.1.0-dev.70

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (108)
  1. package/.vscode/settings.json +1 -1
  2. package/CHANGELOG.md +551 -0
  3. package/README.md +3 -3
  4. package/babel.config.js +1 -1
  5. package/blog/BeyondCommunication-ABlueprintForDevelopment.md +25 -24
  6. package/blog/TheChallengeOfChange.md +21 -21
  7. package/docs/academy/01-GetStarted/00-ExploreDemoPackage.mdx +67 -30
  8. package/docs/academy/01-GetStarted/01-CreateNewPowerhouseProject.md +36 -19
  9. package/docs/academy/01-GetStarted/02-DefineToDoListDocumentModel.md +24 -19
  10. package/docs/academy/01-GetStarted/03-ImplementOperationReducers.md +44 -41
  11. package/docs/academy/01-GetStarted/04-BuildToDoListEditor.md +10 -10
  12. package/docs/academy/01-GetStarted/05-VetraStudio.md +164 -0
  13. package/docs/academy/01-GetStarted/06-ReactorMCP.md +58 -0
  14. package/docs/academy/01-GetStarted/home.mdx +185 -90
  15. package/docs/academy/01-GetStarted/images/Modules.png +0 -0
  16. package/docs/academy/01-GetStarted/images/VetraStudioDrive.png +0 -0
  17. package/docs/academy/01-GetStarted/styles.module.css +5 -5
  18. package/docs/academy/02-MasteryTrack/01-BuilderEnvironment/01-Prerequisites.md +46 -18
  19. package/docs/academy/02-MasteryTrack/01-BuilderEnvironment/02-StandardDocumentModelWorkflow.md +118 -68
  20. package/docs/academy/02-MasteryTrack/01-BuilderEnvironment/03-BuilderTools.md +75 -33
  21. package/docs/academy/02-MasteryTrack/01-BuilderEnvironment/_category_.json +6 -6
  22. package/docs/academy/02-MasteryTrack/02-DocumentModelCreation/01-WhatIsADocumentModel.md +30 -21
  23. package/docs/academy/02-MasteryTrack/02-DocumentModelCreation/02-SpecifyTheStateSchema.md +41 -37
  24. package/docs/academy/02-MasteryTrack/02-DocumentModelCreation/03-SpecifyDocumentOperations.md +29 -25
  25. package/docs/academy/02-MasteryTrack/02-DocumentModelCreation/04-UseTheDocumentModelGenerator.md +36 -37
  26. package/docs/academy/02-MasteryTrack/02-DocumentModelCreation/05-ImplementDocumentReducers.md +128 -109
  27. package/docs/academy/02-MasteryTrack/02-DocumentModelCreation/06-ImplementDocumentModelTests.md +95 -86
  28. package/docs/academy/02-MasteryTrack/02-DocumentModelCreation/07-ExampleToDoListRepository.md +7 -9
  29. package/docs/academy/02-MasteryTrack/02-DocumentModelCreation/_category_.json +6 -6
  30. package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/01-BuildingDocumentEditors.md +65 -47
  31. package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/02-ConfiguringDrives.md +77 -62
  32. package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/03-BuildingADriveExplorer.md +360 -349
  33. package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/06-DocumentTools/00-DocumentToolbar.mdx +16 -10
  34. package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/06-DocumentTools/01-OperationHistory.md +10 -7
  35. package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/06-DocumentTools/02-RevisionHistoryTimeline.md +26 -11
  36. package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/06-DocumentTools/_category_.json +6 -6
  37. package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/07-Authorization/01-RenownAuthenticationFlow.md +14 -7
  38. package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/07-Authorization/02-Authorization.md +0 -1
  39. package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/07-Authorization/_category_.json +5 -5
  40. package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/_category_.json +1 -1
  41. package/docs/academy/02-MasteryTrack/04-WorkWithData/01-GraphQLAtPowerhouse.md +45 -33
  42. package/docs/academy/02-MasteryTrack/04-WorkWithData/02-UsingTheAPI.mdx +61 -18
  43. package/docs/academy/02-MasteryTrack/04-WorkWithData/03-UsingSubgraphs.md +105 -456
  44. package/docs/academy/02-MasteryTrack/04-WorkWithData/04-analytics-processor.md +126 -110
  45. package/docs/academy/02-MasteryTrack/04-WorkWithData/05-RelationalDbProcessor.md +98 -65
  46. package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/GraphQL References/QueryingADocumentWithGraphQL.md +23 -21
  47. package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/best-practices.md +9 -9
  48. package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/graphql/index.md +11 -23
  49. package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/graphql/integration.md +25 -9
  50. package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/intro.md +10 -10
  51. package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/typescript/benchmarks.md +1 -1
  52. package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/typescript/index.md +16 -11
  53. package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/typescript/memory.md +6 -5
  54. package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/typescript/schema.md +2 -2
  55. package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/typescript/utilities.md +7 -5
  56. package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/use-cases/maker.md +32 -58
  57. package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/use-cases/processors.md +1 -1
  58. package/docs/academy/02-MasteryTrack/04-WorkWithData/07-drive-analytics.md +105 -71
  59. package/docs/academy/02-MasteryTrack/04-WorkWithData/_ARCHIVE-AnalyticsProcessorTutorial/_01-SetupBuilderEnvironment.md +22 -0
  60. package/docs/academy/02-MasteryTrack/04-WorkWithData/_ARCHIVE-AnalyticsProcessorTutorial/_02-CreateNewPowerhouseProject.md +9 -8
  61. package/docs/academy/02-MasteryTrack/04-WorkWithData/_ARCHIVE-AnalyticsProcessorTutorial/_03-GenerateAnAnalyticsProcessor.md +28 -32
  62. package/docs/academy/02-MasteryTrack/04-WorkWithData/_ARCHIVE-AnalyticsProcessorTutorial/_04-UpdateAnalyticsProcessor.md +25 -26
  63. package/docs/academy/02-MasteryTrack/04-WorkWithData/_ARCHIVE-AnalyticsProcessorTutorial/_category_.json +1 -1
  64. package/docs/academy/02-MasteryTrack/04-WorkWithData/_category_.json +7 -7
  65. package/docs/academy/02-MasteryTrack/05-Launch/01-IntroductionToPackages.md +3 -4
  66. package/docs/academy/02-MasteryTrack/05-Launch/02-PublishYourProject.md +69 -45
  67. package/docs/academy/02-MasteryTrack/05-Launch/03-SetupEnvironment.md +70 -40
  68. package/docs/academy/02-MasteryTrack/05-Launch/04-ConfigureEnvironment.md +1 -0
  69. package/docs/academy/02-MasteryTrack/05-Launch/_category_.json +7 -7
  70. package/docs/academy/02-MasteryTrack/_category_.json +6 -6
  71. package/docs/academy/03-ExampleUsecases/Chatroom/02-CreateNewPowerhouseProject.md +5 -3
  72. package/docs/academy/03-ExampleUsecases/Chatroom/03-DefineChatroomDocumentModel.md +38 -37
  73. package/docs/academy/03-ExampleUsecases/Chatroom/04-ImplementOperationReducers.md +45 -41
  74. package/docs/academy/03-ExampleUsecases/Chatroom/05-ImplementChatroomEditor.md +14 -14
  75. package/docs/academy/03-ExampleUsecases/Chatroom/06-LaunchALocalReactor.md +6 -6
  76. package/docs/academy/03-ExampleUsecases/Chatroom/_category_.json +1 -1
  77. package/docs/academy/04-APIReferences/00-PowerhouseCLI.md +143 -61
  78. package/docs/academy/04-APIReferences/01-ReactHooks.md +649 -141
  79. package/docs/academy/04-APIReferences/04-RelationalDatabase.md +121 -113
  80. package/docs/academy/04-APIReferences/05-PHDocumentMigrationGuide.md +48 -41
  81. package/docs/academy/04-APIReferences/_category_.json +6 -6
  82. package/docs/academy/05-Architecture/00-PowerhouseArchitecture.md +1 -2
  83. package/docs/academy/05-Architecture/01-WorkingWithTheReactor.md +11 -8
  84. package/docs/academy/05-Architecture/05-DocumentModelTheory/_category_.json +1 -1
  85. package/docs/academy/05-Architecture/_category_.json +6 -6
  86. package/docs/academy/06-ComponentLibrary/00-DocumentEngineering.md +25 -23
  87. package/docs/academy/06-ComponentLibrary/02-CreateCustomScalars.md +105 -93
  88. package/docs/academy/06-ComponentLibrary/03-IntegrateIntoAReactComponent.md +1 -0
  89. package/docs/academy/06-ComponentLibrary/_category_.json +7 -7
  90. package/docs/academy/07-Cookbook.md +267 -34
  91. package/docs/academy/08-Glossary.md +7 -1
  92. package/docs/bookofpowerhouse/01-Overview.md +2 -2
  93. package/docs/bookofpowerhouse/02-GeneralFrameworkAndPhilosophy.md +1 -7
  94. package/docs/bookofpowerhouse/03-PowerhouseSoftwareArchitecture.md +10 -7
  95. package/docs/bookofpowerhouse/04-DevelopmentApproaches.md +10 -4
  96. package/docs/bookofpowerhouse/05-SNOsandANewModelForOSSandPublicGoods.md +23 -30
  97. package/docs/bookofpowerhouse/06-SNOsInActionAndPlatformEconomies.md +0 -7
  98. package/docusaurus.config.ts +64 -66
  99. package/package.json +9 -7
  100. package/scripts/generate-combined-cli-docs.ts +43 -13
  101. package/sidebars.ts +2 -0
  102. package/src/components/HomepageFeatures/index.tsx +171 -78
  103. package/src/components/HomepageFeatures/styles.module.css +1 -2
  104. package/src/css/custom.css +89 -89
  105. package/src/pages/_archive-homepage.tsx +17 -16
  106. package/src/theme/DocCardList/index.tsx +9 -8
  107. package/static.json +6 -6
  108. package/tsconfig.tsbuildinfo +1 -0
@@ -1,4 +1,4 @@
-# Relational database processor
+# Relational database processor
 
 In this chapter, we will implement a **Todo-List** relational database processor. This processor receives processed operations from the reactor and can use the `prevState`, `resultingState`, or data from the operations themselves to populate a database.
 
@@ -18,6 +18,7 @@ ph generate --processor todo-indexer --processor-type relationalDb --document-ty
 ```
 
 **Breaking down this command:**
+
 - `--processor todo-indexer`: Creates a processor with the name "todo-indexer"
 - `--processor-type relationalDb`: Specifies we want a relational database processor (vs other types like analytics or webhook processors)
 - `--document-types powerhouse/todolist`: Tells the processor to only listen for changes to documents of type "powerhouse/todolist"
@@ -25,6 +26,7 @@ ph generate --processor todo-indexer --processor-type relationalDb --document-ty
 This command creates a processor named `todo-indexer` of type `relational database` that listens for changes from documents of type `powerhouse/todolist`.
 
 **What gets generated:**
+
 - A processor class file (`processors/todo-indexer/index.ts`)
 - A database migration file (`processors/todo-indexer/migrations.ts`)
 - A factory file for configuration (`processors/todo-indexer/factory.ts`)
@@ -37,6 +39,7 @@ Next, define your database schema in the `processors/todo-indexer/migration.ts`
 **Understanding Database Migrations**
 
 Migrations are version-controlled database changes that ensure your database schema evolves safely over time. They contain:
+
 - **`up()` function**: Creates or modifies database structures when the processor starts
 - **`down()` function**: Safely removes changes when the processor is removed
 
@@ -47,17 +50,17 @@ The migration file contains `up` and `down` functions that are called when the p
 In the migration.ts file you'll find an example of the todo table default schema:
 
 ```ts
-import { type IRelationalDb } from "document-drive/processors/types"
+import { type IRelationalDb } from "document-drive/processors/types";
 
 export async function up(db: IRelationalDb<any>): Promise<void> {
   // Create table - this runs when the processor starts
   await db.schema
-    .createTable("todo") // Creates a new table named "todo"
-    .addColumn("task", "varchar(255)") // Text column for the task description (max 255 characters)
-    .addColumn("status", "boolean") // Boolean column for completion status (true/false)
+    .createTable("todo") // Creates a new table named "todo"
+    .addColumn("task", "varchar(255)") // Text column for the task description (max 255 characters)
+    .addColumn("status", "boolean") // Boolean column for completion status (true/false)
     .addPrimaryKeyConstraint("todo_pkey", ["task"]) // Makes "task" the primary key (unique identifier)
-    .ifNotExists() // Only create if table doesn't already exist
-    .execute(); // Execute the SQL command
+    .ifNotExists() // Only create if table doesn't already exist
+    .execute(); // Execute the SQL command
 }
 
 export async function down(db: IRelationalDb<any>): Promise<void> {
@@ -67,6 +70,7 @@ export async function down(db: IRelationalDb<any>): Promise<void> {
 ```
 
 **Design Considerations:**
+
 - We're using `task` as the primary key, which means each task description must be unique
 - The `varchar(255)` limit ensures reasonable memory usage
 - The `boolean` status makes it easy to filter completed vs. incomplete tasks
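The `up()`/`down()` pattern shown in this hunk can be illustrated independently of the real `IRelationalDb` API. Below is a minimal in-memory sketch; the `FakeDb` class and its methods are hypothetical stand-ins for illustration only, not part of `document-drive`:

```typescript
// Hypothetical in-memory stand-in for the migration pattern above.
type Table = { name: string; columns: Record<string, string> };

class FakeDb {
  tables = new Map<string, Table>();

  createTable(name: string, columns: Record<string, string>): void {
    // "ifNotExists" semantics: skip creation if the table is already there
    if (!this.tables.has(name)) {
      this.tables.set(name, { name, columns });
    }
  }

  dropTable(name: string): void {
    this.tables.delete(name);
  }
}

// Mirrors the tutorial's up(): create the "todo" table when the processor starts
export async function up(db: FakeDb): Promise<void> {
  db.createTable("todo", { task: "varchar(255)", status: "boolean" });
}

// Mirrors the tutorial's down(): remove the table when the processor is removed
export async function down(db: FakeDb): Promise<void> {
  db.dropTable("todo");
}
```

Running `up` twice is harmless (the existence check makes it idempotent), which is the property the real `ifNotExists()` call provides.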
@@ -77,12 +81,13 @@ export async function down(db: IRelationalDb<any>): Promise<void> {
 
 After defining your database schema, generate TypeScript types for type-safe queries and better IDE support:
 
 ```bash
-ph generate --migration-file processors/todo-indexer/migrations.ts
+ph generate --migration-file processors/todo-indexer/migrations.ts
 ```
 
 **Why Generate Types?**
 
 TypeScript types provide several benefits:
+
 - **Type Safety**: Catch errors at compile time instead of runtime
 - **IDE Support**: Get autocomplete and IntelliSense for your database queries
 - **Documentation**: Types serve as living documentation of your database structure
@@ -91,6 +96,7 @@ TypeScript types provide several benefits:
 Check your `processors/todo-indexer/schema.ts` file after generation - it will contain the TypeScript types for your database schema.
 
 **Example of generated types:**
+
 ```ts
 // This is what gets auto-generated based on your migration
 export interface Database {
@@ -108,6 +114,7 @@ This give you the opportunity to configure the processor filter in `processors/t
 **Understanding Processor Filters**
 
 Filters determine which document changes your processor will respond to. This is crucial for performance and functionality:
+
 - **Performance**: Only process relevant changes to avoid unnecessary work
 - **Isolation**: Different processors can handle different document types
 - **Scalability**: Distribute processing load across multiple processors
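The filter semantics this hunk documents (arrays of allowed values per field, with `"*"` as a wildcard) can be sketched as a plain predicate. This is a hypothetical illustration of the matching rule, not the `document-drive` implementation:

```typescript
// Hypothetical sketch of RelationalDbProcessorFilter-style matching;
// "*" in any field acts as a wildcard that accepts every value.
interface Filter {
  branch: string[];
  documentId: string[];
  documentType: string[];
  scope: string[];
}

interface OperationMeta {
  branch: string;
  documentId: string;
  documentType: string;
  scope: string;
}

function matchesField(allowed: string[], value: string): boolean {
  return allowed.includes("*") || allowed.includes(value);
}

export function matchesFilter(filter: Filter, op: OperationMeta): boolean {
  // An operation is processed only when every field passes its filter
  return (
    matchesField(filter.branch, op.branch) &&
    matchesField(filter.documentId, op.documentId) &&
    matchesField(filter.documentType, op.documentType) &&
    matchesField(filter.scope, op.scope)
  );
}
```

With the tutorial's filter, an operation on any `powerhouse/todolist` document on `main` matches, while an operation on a different document type does not.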
@@ -137,10 +144,10 @@ export const todoIndexerProcessorFactory =
     // Create a filter for the processor
     // This determines which document changes trigger the processor
     const filter: RelationalDbProcessorFilter = {
-      branch: ["main"], // Only process changes from the "main" branch
-      documentId: ["*"], // Process changes from any document ID (* = wildcard)
+      branch: ["main"], // Only process changes from the "main" branch
+      documentId: ["*"], // Process changes from any document ID (* = wildcard)
       documentType: ["powerhouse/todolist"], // Only process todolist documents
-      scope: ["global"], // Process global changes (not user-specific)
+      scope: ["global"], // Process global changes (not user-specific)
     };
 
     // Create the processor instance
@@ -155,8 +162,9 @@ export const todoIndexerProcessorFactory =
 ```
 
 **Filter Options Explained:**
+
 - **`branch`**: Which document branches to monitor (usually "main" for production data)
-- **`documentId`**: Specific document IDs to watch ("*" means all documents)
+- **`documentId`**: Specific document IDs to watch ("\*" means all documents)
 - **`documentType`**: Document types to process (ensures type safety)
 - **`scope`**: Whether to process global changes or user-specific ones
 
@@ -167,12 +175,14 @@ Now implement the actual processor logic in `processors/todo-indexer/index.ts` b
 **Understanding the Processor Lifecycle**
 
 The processor has several key methods:
+
 - **`initAndUpgrade()`**: Runs once when the processor starts (perfect for running migrations)
 - **`onStrands()`**: Runs every time relevant document changes occur (this is where the main logic goes)
 - **`onDisconnect()`**: Cleanup when the processor shuts down
 
 **What are "Strands"?**
 Strands represent a batch of operations that happened to documents. Each strand contains:
+
 - Document ID and metadata
 - Array of operations (create, update, delete, etc.)
 - Previous and resulting document states
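The strand structure listed above can be exercised with a toy `onStrands`-style function. The shapes below are simplified, hypothetical versions of the real strand and operation types, reduced to the fields the hunk describes:

```typescript
// Simplified, hypothetical strand/operation shapes for illustration.
interface Operation {
  type: "ADD_TODO_ITEM" | "UPDATE_TODO_ITEM" | "DELETE_TODO_ITEM";
  input: { text?: string; checked?: boolean };
  index: number; // position in the document's operation history
}

interface Strand {
  documentId: string;
  operations: Operation[];
}

// Sketch of the core onStrands() logic: walk each strand's operations
// and derive the rows a processor would insert into the "todo" table.
export function rowsFromStrands(
  strands: Strand[],
): Array<{ task: string; status: boolean }> {
  const rows: Array<{ task: string; status: boolean }> = [];
  for (const strand of strands) {
    for (const op of strand.operations) {
      if (op.type === "ADD_TODO_ITEM" && op.input.text) {
        rows.push({ task: op.input.text, status: op.input.checked ?? false });
      }
    }
  }
  return rows;
}
```

A real processor would additionally handle updates and deletes and write through the namespaced `relationalDb`, but the strand-iteration skeleton is the same.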
@@ -181,10 +191,10 @@ Strands represent a batch of operations that happened to documents. Each strand
 import { type IRelationalDb } from "document-drive/processors/types";
 import { RelationalDbProcessor } from "document-drive/processors/relational";
 import { type InternalTransmitterUpdate } from "document-drive/server/listener/transmitter/internal";
-import type { ToDoListDocument } from "../../document-models/to-do-list/index.js";
+import type { ToDoListDocument } from "../document-models/to-do-list/index.js";
 
-import { up } from "./migrations.js";
-import { type DB } from "./schema.js";
+import { up } from "./todo-indexer/migrations.js";
+import { type DB } from "./todo-indexer/schema.js";
 
 // Define the document type this processor handles
 type DocumentType = ToDoListDocument;
@@ -261,6 +271,7 @@ ph generate --subgraph todo
 **What is a Subgraph?**
 
 A subgraph is a GraphQL schema that exposes your processed data to clients. It:
+
 - Provides a standardized API for accessing your relational database data
 - Integrates with the Powerhouse supergraph for unified data access
 - Supports both queries (reading data) and mutations (modifying data)
@@ -268,61 +279,64 @@ A subgraph is a GraphQL schema that exposes your processed data to clients. It:
 
 ### Configure the Subgraph
 
-Open `./subgraphs/todo/index.ts` and configure the resolvers:
+Open `./subgraphs/todo/schema.ts` and configure the schema:
 
 ```ts
-import { Subgraph } from "@powerhousedao/reactor-api";
 import { gql } from "graphql-tag";
+import type { DocumentNode } from "graphql";
+
+export const schema: DocumentNode = gql`
+  # Define the structure of a todo item as returned by GraphQL
+  type ToDoListEntry {
+    task: String! # The task description (! means required/non-null)
+    status: Boolean! # The completion status (true = done, false = pending)
+  }
+
+  # Define available queries
+  type Query {
+    todos(driveId: ID!): [ToDoListEntry] # Get array of todos for a specific drive
+  }
+`;
+```
+
+Open `./subgraphs/todo/resolvers.ts` and configure the resolvers:
+
+```ts
+// subgraphs/search-todos/resolvers.ts
+import { type Subgraph } from "@powerhousedao/reactor-api";
+import { type ToDoListDocument } from "document-models/to-do-list/index.js";
 import { TodoIndexerProcessor } from "../../processors/todo-indexer/index.js";
 
-export class TodoSubgraph extends Subgraph {
-  // Human-readable name for this subgraph
-  name = "Todos";
+export const getResolvers = (subgraph: Subgraph) => {
+  const reactor = subgraph.reactor;
+  const relationalDb = subgraph.relationalDb;
 
-  // GraphQL resolvers - functions that fetch data for each field
-  resolvers = {
+  return {
     Query: {
       todos: {
         // Resolver function for the "todos" query
         // Arguments: parent object, query arguments, context, GraphQL info
-        resolve: async (_: any, args: {driveId: string}) => {
+        resolve: async (_: any, args: { driveId: string }) => {
           // Query the database using the processor's static query method
          // This gives us access to the namespaced database for the specific drive
-          const todos = await TodoIndexerProcessor.query(args.driveId, this.relationalDb)
-            .selectFrom("todo") // Select from the "todo" table
-            .selectAll() // Get all columns
-            .execute(); // Execute the query
+          const todos = await TodoIndexerProcessor.query(
+            args.driveId,
+            relationalDb,
+          )
+            .selectFrom("todo") // Select from the "todo" table
+            .selectAll() // Get all columns
+            .execute(); // Execute the query
 
           // Transform database results to match GraphQL schema
           return todos.map((todo) => ({
-            task: todo.task, // Map database "task" column to GraphQL "task" field
-            status: todo.status, // Map database "status" column to GraphQL "status" field
+            task: todo.task, // Map database "task" column to GraphQL "task" field
+            status: todo.status, // Map database "status" column to GraphQL "status" field
          }));
        },
      },
    },
  };
-
-  // GraphQL schema definition using GraphQL Schema Definition Language (SDL)
-  typeDefs = gql`
-
-    # Define the structure of a todo item as returned by GraphQL
-    type ToDoListEntry {
-      task: String! # The task description (! means required/non-null)
-      status: Boolean! # The completion status (true = done, false = pending)
-    }
-
-    # Define available queries
-    type Query {
-      todos(driveId: ID!): [ToDoListEntry] # Get array of todos for a specific drive
-    }
-  `;
-
-  // Cleanup method called when the subgraph disconnects
-  async onDisconnect() {
-    // Add any cleanup logic here if needed
-  }
-}
+};
 ```
 
 ## Now query the data via the supergraph.
@@ -330,13 +344,14 @@ export class TodoSubgraph extends Subgraph {
 **Understanding the Supergraph**
 
 The Powerhouse supergraph is a unified GraphQL endpoint that combines:
+
 - **Document Models**: Direct access to your Powerhouse documents
 - **Subgraphs**: Custom data views from your processors
 - **Built-in APIs**: System functionality like authentication and drives
 
 This unified approach means you can query document state AND processed data in a single request, which is perfect for building rich user interfaces.
 
-The Powerhouse supergraph for any given remote drive or reactor can be found under `http://localhost:4001/graphql`. The gateway / supergraph available on `/graphql` combines all the subgraphs, except for the drive subgraph (which is accessible via `/d/:driveId`). To access the endpoint, start the reactor and navigate to the URL with `graphql` appended. The following commands explain how you can test & try the supergraph.
+The Powerhouse supergraph for any given remote drive or reactor can be found under `http://localhost:4001/graphql`. The gateway / supergraph available on `/graphql` combines all the subgraphs, except for the drive subgraph (which is accessible via `/d/:driveId`). To access the endpoint, start the reactor and navigate to the URL with `graphql` appended. The following commands explain how you can test & try the supergraph.
 
 - Start the reactor:
 
@@ -344,13 +359,14 @@ The Powerhouse supergraph for any given remote drive or reactor can be found und
 ph reactor
 ```
 
-- Open the GraphQL editor in your browser:
+- This will return an endpoint, but you'll need to change the url of the endpoint to the following URL:
 
 ```
 http://localhost:4001/graphql
 ```
-The supergraph allows you to both query & mutate data from the same endpoint.
-Read more about [subgraphs](/academy/MasteryTrack/WorkWithData/UsingSubgraphs)
+
+The supergraph allows you to both query & mutate data from the same endpoint.
+Read more about [subgraphs](/academy/MasteryTrack/WorkWithData/UsingSubgraphs)
 
 <details>
 <summary>**Example: Complete Data Flow from Document Operations to Relational Database**</summary>
@@ -358,6 +374,7 @@ Read more about [subgraphs](/academy/MasteryTrack/WorkWithData/UsingSubgraphs)
 **Understanding the Complete Data Pipeline**
 
 This comprehensive example demonstrates the **entire data flow** in a Powerhouse application:
+
 1. **Storage Layer**: Create a drive (document storage container)
 2. **Document Layer**: Create a todo document and add operations
 3. **Processing Layer**: Watch the relational database processor automatically index changes
@@ -378,7 +395,8 @@ mutation DriveCreation($name: String!) {
 }
 ```
 
-Variables:
+Variables:
+
 ```json
 {
   "driveId": "powerhouse",
@@ -401,6 +419,7 @@ mutation Mutation($driveId: String, $name: String) {
 ```
 
 Variables:
+
 ```json
 {
   "driveId": "powerhouse",
@@ -409,6 +428,7 @@ Variables:
 }
 ```
 Result:
+
 ```json
 {
   "data": {
@@ -426,12 +446,17 @@ Result:
 **What's Happening**: Each time we add a todo item, we're creating a new **operation** in the document's history. Our relational database processor is listening for these operations in real-time.
 
 ```graphql
-mutation Mutation($driveId: String, $docId: PHID, $input: ToDoList_AddTodoItemInput) {
+mutation Mutation(
+  $driveId: String
+  $docId: PHID
+  $input: ToDoList_AddTodoItemInput
+) {
   ToDoList_addTodoItem(driveId: $driveId, docId: $docId, input: $input)
 }
 ```
 
 Variables:
+
 ```json
 {
   "driveId": "powerhouse",
@@ -445,6 +470,7 @@ Variables:
 ```
 
 Result:
+
 ```json
 {
   "data": {
@@ -453,7 +479,8 @@ Result:
 }
 ```
 
-💡 **What Happens Next**:
+💡 **What Happens Next**:
+
 1. **Document Model**: Stores the operation and updates document state
 2. **Reactor**: Broadcasts the operation to all listening processors
 3. **Our Processor**: Automatically receives the operation and creates a database record
@@ -486,6 +513,7 @@ query Query($driveId: ID!) {
 ```
 
 Variables:
+
 ```json
 {
   "driveId": "powerhouse"
@@ -493,6 +521,7 @@ Variables:
 }
 ```
 Response:
+
 ```json
 {
   "data": {
@@ -538,12 +567,14 @@ Response:
 ### **🔍 Data Analysis: Understanding What You're Seeing**
 
 **Document Model Data (`ToDoList.getDocuments`):**
+
 - ✅ **Current State**: Shows the final todo items as they exist in the document
 - ✅ **User-Friendly**: Displays actual todo text like "complete mutation"
 - ✅ **Real-Time**: Always reflects the latest document state
 - ❌ **Limited History**: Doesn't show how the document changed over time
 
 **Processed Relational Data (`todos`):**
+
 - ✅ **Operation History**: Shows each individual operation that occurred
 - ✅ **Audit Trail**: You can see the sequence (0, 1, 2) of operations
 - ✅ **Analytics Ready**: Perfect for counting operations, tracking changes
@@ -553,11 +584,13 @@ Response:
 ---
 
 **Key Differences:**
+
 - **Document Query**: Gets the current state directly from the document model
 - **Subgraph Query**: Gets processed/transformed data from your relational database
 - **Combined Power**: You can query both in a single GraphQL request for rich UIs
 
-This demonstrates how the supergraph provides a unified interface to both your document models and your custom subgraphs, allowing you to query and mutate data from the same endpoint.
+This demonstrates how the supergraph provides a unified interface to both your document models and your custom subgraphs, allowing you to query and mutate data from the same endpoint.
+
 </details>
 
 ## Use the Data in Frontend Applications
@@ -565,6 +598,7 @@ This demonstrates how the supergraph provides a unified interface to both your d
 **Integration Options**
 
 Your processed data can now be consumed by any GraphQL client:
+
 - **React**: Using Apollo Client, urql, or Relay
 - **Next.js**: API routes, getServerSideProps, or app router
 - **Mobile Apps**: React Native, Flutter, or native iOS/Android
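Whichever client library is used, each of them ultimately sends an HTTP POST with a `query` string and a `variables` object to the supergraph endpoint. A minimal sketch of that request, reusing the tutorial's `todos` query and local endpoint (the helper name is illustrative):

```typescript
// Builds the POST request any GraphQL client would send to the supergraph.
// The endpoint and query mirror the tutorial's example; buildTodosRequest
// itself is a hypothetical helper for illustration.
export function buildTodosRequest(driveId: string): {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
} {
  const query = `
    query Query($driveId: ID!) {
      todos(driveId: $driveId) {
        task
        status
      }
    }
  `;
  return {
    url: "http://localhost:4001/graphql",
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query, variables: { driveId } }),
    },
  };
}
```

With `fetch`, this would be used as `fetch(req.url, req.init)`; dedicated clients like Apollo wrap the same wire format in caching and typing layers.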
@@ -580,6 +614,7 @@ Your processed data can now be consumed by any GraphQL client:
 **Why API Routes?**
 
 Next.js API routes are useful when you need to:
+
 - Add server-side authentication or authorization
 - Transform data before sending to the client
 - Implement caching or rate limiting
@@ -588,11 +623,11 @@ Next.js API routes are useful when you need to:
 
 ```ts
 // pages/api/todos.ts
-import { type NextApiRequest, type NextApiResponse } from "next"
+import { type NextApiRequest, type NextApiResponse } from "next";
 
 export default async function handler(
   req: NextApiRequest,
-  res: NextApiResponse
+  res: NextApiResponse,
 ) {
   // Only allow GET requests for this endpoint
   if (req.method !== "GET") {
@@ -625,13 +660,13 @@ export default async function handler(
     });
 
     const data = await response.json();
-
+
     // Return the todos array from the GraphQL response
     res.status(200).json(data.data.todoList);
   } catch (error) {
     // Log the error for debugging (in production, use proper logging)
     console.error("Failed to fetch todos:", error);
-
+
     // Return a generic error message to the client
     res.status(500).json({ error: "Failed to fetch todos" });
   }
@@ -648,16 +683,14 @@ You've successfully created a relational database processor that:
 4. ✅ **Exposes data through GraphQL** - Makes processed data available via a unified API
 5. ✅ **Can be consumed by frontend applications** - Ready for integration with any GraphQL client
 
-
 This processor will automatically sync your document changes to the relational database, making the data available for complex queries, reporting, and integration with other systems.
 
 **Real-World Applications:**
 
 This pattern is commonly used for:
+
 - **Analytics dashboards** showing document usage patterns
 - **Business intelligence** reports on document data
 - **Integration** with existing enterprise systems
 - **Search and filtering** with complex SQL queries
 - **Data archival** and compliance requirements
-
-
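The distinction this file's diff keeps drawing, between the document model's current state and the operation history the processor indexes, can be illustrated with a toy reducer. This is purely illustrative pseudocode-made-runnable, not Powerhouse's actual reducer:

```typescript
// Toy illustration of the two data views: replaying an operation log
// yields the current state (document-model view), while the log itself
// is what a relational processor indexes (operation-history view).
type Op = { type: "ADD"; text: string } | { type: "CHECK"; text: string };

export function currentState(
  log: Op[],
): Array<{ text: string; checked: boolean }> {
  const items: Array<{ text: string; checked: boolean }> = [];
  for (const op of log) {
    if (op.type === "ADD") {
      items.push({ text: op.text, checked: false });
    } else {
      // Mark an existing item as done; unknown items are ignored
      const item = items.find((i) => i.text === op.text);
      if (item) item.checked = true;
    }
  }
  return items;
}
```

Querying `ToDoList.getDocuments` corresponds to `currentState(log)`; querying the processor's `todo` table corresponds to reading `log` itself, which is why the subgraph can answer audit and analytics questions the document state alone cannot.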
@@ -10,10 +10,12 @@ But the **same queries can be used for any other document model**.
 The queries below focus on 2 different ways of receiving the data.
 We will show how you can receive:
 
-> #### 1. The complete state of the document:
+> #### 1. The complete state of the document:
+>
 > Such as the array of accounts, Special Purpose Vehicles (SPVs), fixed income types, fee types, portfolio details, and transactions associated with a particular RWA-report.
 
 > #### 2. Only the latest changes and updates:
+>
 > Or specific operations of a document by registering a listener with a specific filter.
 
 ### Adding the specific document drive to Connect with a URL
@@ -26,20 +28,20 @@ Get access to an organisations drive instances by adding this drive to your Conn
 
 Whenever you want to start a query from a document within Connect, you can open up Switchboard by looking for the Switchboard logo in the top right-hand corner of the document editor interface, or by clicking a document in the Connect drive explorer and opening the document in Switchboard. This feature will not be available for your local drives, as they are not hosted on Switchboard.
 
-![untitled](./rwa-reports/raw-reports1.png)
-Right click a document and find a direct link to switchboard GraphQL playground*
+![untitled](./rwa-reports/raw-reports1.png)
+Right-click a document and find a direct link to the Switchboard GraphQL playground\*
 
 ## Querying data from Connect in Apollo Studio
 
 Aside from Switchboard, you're able to make use of GraphQL interfaces such as Apollo Studio.
-When opening the document in switchboard the endpoint will be visible at the top of the interface. Copy this endpoint and use it as your API endpoint in Apollo.
+When opening the document in Switchboard, the endpoint will be visible at the top of the interface. Copy this endpoint and use it as your API endpoint in Apollo.
 
 ![untitled](./rwa-reports/raw-reports2.png)
-*The endpoint you'll be using for any other GraphQL playgrounds or sandboxes*
+_The endpoint you'll be using for any other GraphQL playgrounds or sandboxes_
 
 ### 1. Querying the complete state of a document
 
-This example query is structured to request a document by its unique identifier (id).
+This example query is structured to request a document by its unique identifier (id).
 It extracts common fields such as id, name, documentType, revision, created, and lastModified.
 
 Additionally, it retrieves specific data related to the 'Real World Assets' document model, including accounts, SPVs, fixed income types, fee types, portfolio, and transactions. The RWA section of the query is designed to pull in detailed information about the financial structure and transactions of real-world assets managed within the document model.
@@ -58,6 +60,7 @@ Additionally, it retrieves specific data related to the 'Real World Assets' docu
 #### Real World Assets (RWA) Specific Fields
 
 **State**
+
 - `accounts`
   - `id`: Unique identifier for the account.
   - `reference`: Reference code or number associated with the account.
@@ -94,7 +97,6 @@ Additionally, it retrieves specific data related to the 'Real World Assets' docu
   - `coupon`: Coupon rate of the fixed income asset.
 
 **Cash**
-
 - `id`: Unique identifier for the cash holding.
 - `spvId`: Identifier for the SPV associated with the cash holding.
 - `currency`: Currency of the cash holding.
@@ -102,7 +104,7 @@ Additionally, it retrieves specific data related to the 'Real World Assets' docu
 </details>
 
 ```graphql title="An example query for the full state of a document"
-query {
+query {
   document(id: "") {
     name
     documentType
@@ -158,15 +160,15 @@ Additionally, it retrieves specific data related to the 'Real World Assets' docu
     }
   }
 }
-
 ```
 
 ### 2. Querying for the latest updates or specific documents
 
-This query is particularly useful when you only need the latest changes from the document drive.
+This query is particularly useful when you only need the latest changes from the document drive.
 
 ### 2.1 Registering a listener
-For this purpose we support adding listeners through a graphQL mutation such as the PullResponderListenerbelow.
+
+For this purpose, we support adding listeners through a GraphQL mutation such as the PullResponderListener below.
 
 ```graphql
 mutation registerListener($filter: InputListenerFilter!) {
@@ -178,8 +180,8 @@ mutation registerListener($filter: InputListenerFilter!) {
 
 ### 2.2 Defining the filter
 
-Through this listener you can define the filter with query variables.
-This allows you to filter for specific document ID's or a lists, documentTypes, scopes or branches.
+Through this listener you can define the filter with query variables.
+This allows you to filter for specific document IDs (or lists of them), documentTypes, scopes, or branches.
 Branches allow you to query different versions of a document in case there is a conflict across different versions of the document, or when contributors are maintaining separate versions with the help of branching.
 
 In this case we're filtering by document type makerdao/rwa-portfolio.
@@ -195,20 +197,20 @@ Branches allow you to query different versions of a document in case there is a
 }
 ```
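The filter itself is supplied as GraphQL query variables alongside the mutation. A sketch of what those variables could look like for the makerdao/rwa-portfolio case — the exact `InputListenerFilter` field names are assumptions based on the filter options described above (document IDs, documentTypes, scopes, branches) and should be checked against the schema:

```json
{
  "filter": {
    "documentType": ["makerdao/rwa-portfolio"],
    "scope": ["global"],
    "branch": ["main"]
  }
}
```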
 
-This combination of query + query variables will return a listenerID which can be used in the next step to query for specific strands.
+This combination of query + query variables will return a listenerId, which can be used in the next step to query for specific strands.
 
 ![untitled](./rwa-reports/rwaRegister.png)
-*An example of registering a listener and receiving a listenerId back.*
+_An example of registering a listener and receiving a listenerId back._
 
 :::info
 A strand in this scenario can be understood as a list of operations that has been applied to the RWA portfolio or any other document. As a query variable, you'll want to add the received listenerId from the previous step together with the pullStrands query below.
 :::
 
-```graphql title="Pullstrands query"
-query pullStrands ($listenerId:ID, $since: Date) {
+```graphql title="Pullstrands query"
+query pullStrands($listenerId: ID, $since: Date) {
   system {
     sync {
-      strands (listenerId: $listenerId, since: $since) {
+      strands(listenerId: $listenerId, since: $since) {
         driveId
         documentId
         scope
@@ -235,10 +237,10 @@ query pullStrands ($listenerId:ID, $since: Date) {
 ```
 
 ![untitled](./rwa-reports/listener-raw.png)
-*An example of using the ListenerID to pull specific strands (or document operations)*
+_An example of using the listenerId to pull specific strands (or document operations)_
 
-In case you'd only like to receive the latest operations of a document the latest timestamp can be used as a filter in the since query variable to only get the most relevant or latest changes.
+In case you'd only like to receive the latest operations of a document, the latest timestamp can be used as a filter in the `since` query variable to only get the most relevant or latest changes.
 
 :::info
 A "strand" within the context of Powerhouse's Document Synchronization Protocol also refers to a single synchronization channel that connects exactly one unit of synchronization to another, with all four parameters (drive_url, doc_id, scope, branch) set to fixed values. This setup means that synchronization happens at a granular level, focusing specifically on one precise aspect of synchronization between two distinct instances of a document or document drive.
-:::
+:::
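The four fixed strand parameters described above can be pictured as a small record type. This is an illustrative sketch only — the real protocol types live in the Powerhouse packages, and the names below are hypothetical:

```typescript
// Illustrative only: a synchronization unit pinned by the four strand
// parameters (drive_url, doc_id, scope, branch) from the Document
// Synchronization Protocol description above.
interface SyncUnit {
  driveUrl: string; // which drive instance
  docId: string;    // which document in that drive
  scope: string;    // e.g. "global" or "local"
  branch: string;   // e.g. "main"
}

// Because all four parameters are fixed, each strand identifies exactly
// one synchronization channel; joining them yields a stable channel key.
function strandKey(u: SyncUnit): string {
  return [u.driveUrl, u.docId, u.scope, u.branch].join("::");
}
```

Two strands differing in any one parameter (say, `branch`) are distinct channels, which is what makes the synchronization granular.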
@@ -33,28 +33,28 @@ For example: Looking at the dimension section in the filter options: To see avai
 ### Guidelines for selecting and combining dimensions
 
 1. **Understand the Purpose of Analysis**
-   Before selecting dimensions, clarify the objective of your analysis. Are you looking to track expenses for a specific project, analyze budget utilization, or examine transaction patterns? Your objective will guide which dimensions are most relevant.
+   Before selecting dimensions, clarify the objective of your analysis. Are you looking to track expenses for a specific project, analyze budget utilization, or examine transaction patterns? Your objective will guide which dimensions are most relevant.
 
 2. **Choose Relevant Dimensions**
-   Select dimensions that align with your analytical goals. For instance, use the 'Project' dimension for project-based financial tracking or 'Wallet' for blockchain transaction analysis.
+   Select dimensions that align with your analytical goals. For instance, use the 'Project' dimension for project-based financial tracking or 'Wallet' for blockchain transaction analysis.
 
 3. **Combining Dimensions for Depth**
-   Combine multiple dimensions to gain more nuanced insights. For example, you might combine 'Budget' and 'Category' to understand how different categories of expenses contribute to overall budget utilization within a specific area.
+   Combine multiple dimensions to gain more nuanced insights. For example, you might combine 'Budget' and 'Category' to understand how different categories of expenses contribute to overall budget utilization within a specific area.
 
 4. **Hierarchy and Path Considerations**
-   Pay attention to the hierarchical structure in dimension paths. For instance, paths like atlas/scopes/SUP/I/PHOENIX/ suggest a structured breakdown that can be crucial for detailed analysis.
+   Pay attention to the hierarchical structure in dimension paths. For instance, paths like atlas/scopes/SUP/I/PHOENIX/ suggest a structured breakdown that can be crucial for detailed analysis.
 
 5. **Utilize Descriptions for Context**
-   Where available, use the descriptions provided with dimension values to understand the context and relevance of each dimension to your analysis. This is particularly helpful in dimensions with null labels, where the path and description provide critical information.
+   Where available, use the descriptions provided with dimension values to understand the context and relevance of each dimension to your analysis. This is particularly helpful in dimensions with null labels, where the path and description provide critical information.
 
 6. **Avoid Over-Complication**
-   While combining dimensions can provide depth, avoid overly complex combinations that might lead to confusing or inconclusive results. Stick to combinations that directly serve your analysis objectives.
+   While combining dimensions can provide depth, avoid overly complex combinations that might lead to confusing or inconclusive results. Stick to combinations that directly serve your analysis objectives.
 
 7. **Use Icons for Quick Reference**
-   Where icons are available, they can be used as a quick visual reference to identify different dimensions or categories, particularly in user interfaces where rapid identification is beneficial.
+   Where icons are available, they can be used as a quick visual reference to identify different dimensions or categories, particularly in user interfaces where rapid identification is beneficial.
 
 8. **Experiment and Iterate**
-   Don't hesitate to experiment with different combinations of dimensions to see which provide the most meaningful insights. The flexibility of the dimensions allows for various permutations and combinations to suit diverse analytical needs.
+   Don't hesitate to experiment with different combinations of dimensions to see which provide the most meaningful insights. The flexibility of the dimensions allows for various permutations and combinations to suit diverse analytical needs.
 
 9. **Stay Updated**
-   Keep abreast of any changes or additions to the dimensions within the analytics engine, as this can impact ongoing and future analyses.
+   Keep abreast of any changes or additions to the dimensions within the analytics engine, as this can impact ongoing and future analyses.