@powerhousedao/academy 5.0.0-staging.3 → 5.0.0-staging.30
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.vscode/settings.json +1 -1
- package/CHANGELOG.md +157 -0
- package/README.md +3 -3
- package/babel.config.js +1 -1
- package/blog/BeyondCommunication-ABlueprintForDevelopment.md +25 -24
- package/blog/TheChallengeOfChange.md +21 -21
- package/docs/academy/01-GetStarted/00-ExploreDemoPackage.mdx +61 -24
- package/docs/academy/01-GetStarted/01-CreateNewPowerhouseProject.md +21 -12
- package/docs/academy/01-GetStarted/02-DefineToDoListDocumentModel.md +24 -19
- package/docs/academy/01-GetStarted/03-ImplementOperationReducers.md +44 -41
- package/docs/academy/01-GetStarted/04-BuildToDoListEditor.md +10 -10
- package/docs/academy/01-GetStarted/05-SpecDrivenAI.md +143 -0
- package/docs/academy/01-GetStarted/home.mdx +185 -90
- package/docs/academy/01-GetStarted/styles.module.css +5 -5
- package/docs/academy/02-MasteryTrack/01-BuilderEnvironment/01-Prerequisites.md +46 -18
- package/docs/academy/02-MasteryTrack/01-BuilderEnvironment/02-StandardDocumentModelWorkflow.md +118 -68
- package/docs/academy/02-MasteryTrack/01-BuilderEnvironment/03-BuilderTools.md +75 -33
- package/docs/academy/02-MasteryTrack/01-BuilderEnvironment/_category_.json +6 -6
- package/docs/academy/02-MasteryTrack/02-DocumentModelCreation/01-WhatIsADocumentModel.md +30 -21
- package/docs/academy/02-MasteryTrack/02-DocumentModelCreation/02-SpecifyTheStateSchema.md +41 -37
- package/docs/academy/02-MasteryTrack/02-DocumentModelCreation/03-SpecifyDocumentOperations.md +29 -25
- package/docs/academy/02-MasteryTrack/02-DocumentModelCreation/04-UseTheDocumentModelGenerator.md +36 -37
- package/docs/academy/02-MasteryTrack/02-DocumentModelCreation/05-ImplementDocumentReducers.md +128 -109
- package/docs/academy/02-MasteryTrack/02-DocumentModelCreation/06-ImplementDocumentModelTests.md +95 -86
- package/docs/academy/02-MasteryTrack/02-DocumentModelCreation/07-ExampleToDoListRepository.md +7 -9
- package/docs/academy/02-MasteryTrack/02-DocumentModelCreation/_category_.json +6 -6
- package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/01-BuildingDocumentEditors.md +65 -47
- package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/02-ConfiguringDrives.md +77 -62
- package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/03-BuildingADriveExplorer.md +360 -349
- package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/06-DocumentTools/00-DocumentToolbar.mdx +16 -10
- package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/06-DocumentTools/01-OperationHistory.md +10 -7
- package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/06-DocumentTools/02-RevisionHistoryTimeline.md +26 -11
- package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/06-DocumentTools/_category_.json +6 -6
- package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/07-Authorization/01-RenownAuthenticationFlow.md +14 -7
- package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/07-Authorization/02-Authorization.md +0 -1
- package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/07-Authorization/_category_.json +5 -5
- package/docs/academy/02-MasteryTrack/03-BuildingUserExperiences/_category_.json +1 -1
- package/docs/academy/02-MasteryTrack/04-WorkWithData/01-GraphQLAtPowerhouse.md +45 -33
- package/docs/academy/02-MasteryTrack/04-WorkWithData/02-UsingTheAPI.mdx +61 -18
- package/docs/academy/02-MasteryTrack/04-WorkWithData/03-UsingSubgraphs.md +50 -54
- package/docs/academy/02-MasteryTrack/04-WorkWithData/04-analytics-processor.md +126 -110
- package/docs/academy/02-MasteryTrack/04-WorkWithData/05-RelationalDbProcessor.md +75 -45
- package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/GraphQL References/QueryingADocumentWithGraphQL.md +23 -21
- package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/best-practices.md +9 -9
- package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/graphql/index.md +11 -23
- package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/graphql/integration.md +25 -9
- package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/intro.md +10 -10
- package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/typescript/benchmarks.md +1 -1
- package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/typescript/index.md +16 -11
- package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/typescript/memory.md +6 -5
- package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/typescript/schema.md +2 -2
- package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/typescript/utilities.md +7 -5
- package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/use-cases/maker.md +32 -58
- package/docs/academy/02-MasteryTrack/04-WorkWithData/06-Analytics Engine/use-cases/processors.md +1 -1
- package/docs/academy/02-MasteryTrack/04-WorkWithData/07-drive-analytics.md +105 -71
- package/docs/academy/02-MasteryTrack/04-WorkWithData/_ARCHIVE-AnalyticsProcessorTutorial/_01-SetupBuilderEnvironment.md +22 -0
- package/docs/academy/02-MasteryTrack/04-WorkWithData/_ARCHIVE-AnalyticsProcessorTutorial/_02-CreateNewPowerhouseProject.md +9 -8
- package/docs/academy/02-MasteryTrack/04-WorkWithData/_ARCHIVE-AnalyticsProcessorTutorial/_03-GenerateAnAnalyticsProcessor.md +28 -32
- package/docs/academy/02-MasteryTrack/04-WorkWithData/_ARCHIVE-AnalyticsProcessorTutorial/_04-UpdateAnalyticsProcessor.md +25 -26
- package/docs/academy/02-MasteryTrack/04-WorkWithData/_ARCHIVE-AnalyticsProcessorTutorial/_category_.json +1 -1
- package/docs/academy/02-MasteryTrack/04-WorkWithData/_category_.json +7 -7
- package/docs/academy/02-MasteryTrack/05-Launch/01-IntroductionToPackages.md +3 -4
- package/docs/academy/02-MasteryTrack/05-Launch/02-PublishYourProject.md +69 -45
- package/docs/academy/02-MasteryTrack/05-Launch/03-SetupEnvironment.md +70 -40
- package/docs/academy/02-MasteryTrack/05-Launch/04-ConfigureEnvironment.md +1 -0
- package/docs/academy/02-MasteryTrack/05-Launch/_category_.json +7 -7
- package/docs/academy/02-MasteryTrack/_category_.json +6 -6
- package/docs/academy/03-ExampleUsecases/Chatroom/02-CreateNewPowerhouseProject.md +5 -3
- package/docs/academy/03-ExampleUsecases/Chatroom/03-DefineChatroomDocumentModel.md +38 -37
- package/docs/academy/03-ExampleUsecases/Chatroom/04-ImplementOperationReducers.md +45 -41
- package/docs/academy/03-ExampleUsecases/Chatroom/05-ImplementChatroomEditor.md +14 -14
- package/docs/academy/03-ExampleUsecases/Chatroom/06-LaunchALocalReactor.md +6 -6
- package/docs/academy/03-ExampleUsecases/Chatroom/_category_.json +1 -1
- package/docs/academy/04-APIReferences/00-PowerhouseCLI.md +14 -7
- package/docs/academy/04-APIReferences/01-ReactHooks.md +177 -129
- package/docs/academy/04-APIReferences/04-RelationalDatabase.md +121 -113
- package/docs/academy/04-APIReferences/05-PHDocumentMigrationGuide.md +48 -41
- package/docs/academy/04-APIReferences/_category_.json +6 -6
- package/docs/academy/05-Architecture/00-PowerhouseArchitecture.md +1 -2
- package/docs/academy/05-Architecture/01-WorkingWithTheReactor.md +11 -8
- package/docs/academy/05-Architecture/05-DocumentModelTheory/_category_.json +1 -1
- package/docs/academy/05-Architecture/_category_.json +6 -6
- package/docs/academy/06-ComponentLibrary/00-DocumentEngineering.md +25 -23
- package/docs/academy/06-ComponentLibrary/02-CreateCustomScalars.md +105 -93
- package/docs/academy/06-ComponentLibrary/03-IntegrateIntoAReactComponent.md +1 -0
- package/docs/academy/06-ComponentLibrary/_category_.json +7 -7
- package/docs/academy/07-Cookbook.md +267 -34
- package/docs/academy/08-Glossary.md +7 -1
- package/docs/bookofpowerhouse/01-Overview.md +2 -2
- package/docs/bookofpowerhouse/02-GeneralFrameworkAndPhilosophy.md +1 -7
- package/docs/bookofpowerhouse/03-PowerhouseSoftwareArchitecture.md +10 -7
- package/docs/bookofpowerhouse/04-DevelopmentApproaches.md +10 -4
- package/docs/bookofpowerhouse/05-SNOsandANewModelForOSSandPublicGoods.md +23 -30
- package/docs/bookofpowerhouse/06-SNOsInActionAndPlatformEconomies.md +0 -7
- package/docusaurus.config.ts +64 -66
- package/package.json +1 -1
- package/scripts/generate-combined-cli-docs.ts +43 -13
- package/sidebars.ts +1 -0
- package/src/components/HomepageFeatures/index.tsx +171 -78
- package/src/components/HomepageFeatures/styles.module.css +1 -2
- package/src/css/custom.css +89 -89
- package/src/pages/_archive-homepage.tsx +17 -16
- package/src/theme/DocCardList/index.tsx +9 -8
- package/static.json +6 -6
@@ -1,4 +1,4 @@
-# Relational database processor
+# Relational database processor
 
 In this chapter, we will implement a **Todo-List** relational database processor. This processor receives processed operations from the reactor and can use the `prevState`, `resultingState`, or data from the operations themselves to populate a database.
 
@@ -18,6 +18,7 @@ ph generate --processor todo-indexer --processor-type relationalDb --document-ty
 ```
 
 **Breaking down this command:**
+
 - `--processor todo-indexer`: Creates a processor with the name "todo-indexer"
 - `--processor-type relationalDb`: Specifies we want a relational database processor (vs other types like analytics or webhook processors)
 - `--document-types powerhouse/todolist`: Tells the processor to only listen for changes to documents of type "powerhouse/todolist"
@@ -25,6 +26,7 @@ ph generate --processor todo-indexer --processor-type relationalDb --document-ty
 This command creates a processor named `todo-indexer` of type `relational database` that listens for changes from documents of type `powerhouse/todolist`.
 
 **What gets generated:**
+
 - A processor class file (`processors/todo-indexer/index.ts`)
 - A database migration file (`processors/todo-indexer/migrations.ts`)
 - A factory file for configuration (`processors/todo-indexer/factory.ts`)
@@ -37,6 +39,7 @@ Next, define your database schema in the `processors/todo-indexer/migration.ts`
 **Understanding Database Migrations**
 
 Migrations are version-controlled database changes that ensure your database schema evolves safely over time. They contain:
+
 - **`up()` function**: Creates or modifies database structures when the processor starts
 - **`down()` function**: Safely removes changes when the processor is removed
 
@@ -47,17 +50,17 @@ The migration file contains `up` and `down` functions that are called when the p
 In the migration.ts file you'll find an example of the todo table default schema:
 
 ```ts
-import { type IRelationalDb } from "document-drive/processors/types"
+import { type IRelationalDb } from "document-drive/processors/types";
 
 export async function up(db: IRelationalDb<any>): Promise<void> {
   // Create table - this runs when the processor starts
   await db.schema
-    .createTable("todo")
-    .addColumn("task", "varchar(255)")
-    .addColumn("status", "boolean")
+    .createTable("todo") // Creates a new table named "todo"
+    .addColumn("task", "varchar(255)") // Text column for the task description (max 255 characters)
+    .addColumn("status", "boolean") // Boolean column for completion status (true/false)
     .addPrimaryKeyConstraint("todo_pkey", ["task"]) // Makes "task" the primary key (unique identifier)
-    .ifNotExists()
-    .execute();
+    .ifNotExists() // Only create if table doesn't already exist
+    .execute(); // Execute the SQL command
 }
 
 export async function down(db: IRelationalDb<any>): Promise<void> {
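The `up()`/`down()` contract in the hunk above can be sketched in plain TypeScript (an illustrative in-memory model, not the document-drive `IRelationalDb` API): `down()` must undo exactly what `up()` created, and `up()` should tolerate being run against an existing schema, like `ifNotExists()`.

```typescript
// Illustrative migration pair modeled against an in-memory schema.
// Names and shapes here are assumptions for demonstration only.
type Schema = Set<string>;

const todoMigration = {
  up(schema: Schema): void {
    // Mirrors createTable("todo").ifNotExists() - safe to run twice
    schema.add("todo");
  },
  down(schema: Schema): void {
    // Removes what up() created, so the processor can be cleanly removed
    schema.delete("todo");
  },
};

const schema: Schema = new Set();
todoMigration.up(schema);
todoMigration.up(schema); // idempotent, like ifNotExists()
console.log(schema.has("todo")); // true
todoMigration.down(schema);
console.log(schema.has("todo")); // false
```

The key design point is symmetry: whatever structures a migration's `up()` introduces, its `down()` should remove, so installing and uninstalling the processor leaves the database unchanged.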
@@ -67,6 +70,7 @@ export async function down(db: IRelationalDb<any>): Promise<void> {
 ```
 
 **Design Considerations:**
+
 - We're using `task` as the primary key, which means each task description must be unique
 - The `varchar(255)` limit ensures reasonable memory usage
 - The `boolean` status makes it easy to filter completed vs. incomplete tasks
@@ -77,12 +81,13 @@ export async function down(db: IRelationalDb<any>): Promise<void> {
 After defining your database schema, generate TypeScript types for type-safe queries and better IDE support:
 
 ```bash
-ph generate --migration-file processors/todo-indexer/migrations.ts
+ph generate --migration-file processors/todo-indexer/migrations.ts
 ```
 
 **Why Generate Types?**
 
 TypeScript types provide several benefits:
+
 - **Type Safety**: Catch errors at compile time instead of runtime
 - **IDE Support**: Get autocomplete and IntelliSense for your database queries
 - **Documentation**: Types serve as living documentation of your database structure
@@ -91,6 +96,7 @@ TypeScript types provide several benefits:
 Check your `processors/todo-indexer/schema.ts` file after generation - it will contain the TypeScript types for your database schema.
 
 **Example of generated types:**
+
 ```ts
 // This is what gets auto-generated based on your migration
 export interface Database {
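The benefit of those generated types can be shown with a small self-contained sketch. The interface below mirrors the "todo" table from the migration; the exact names emitted into `schema.ts` may differ, so treat these as illustrative.

```typescript
// Sketch of the kind of row types the generator produces for the "todo"
// table (task: varchar(255), status: boolean). Names are illustrative.
interface TodoTable {
  task: string;    // maps to varchar(255)
  status: boolean; // maps to boolean
}

interface Database {
  todo: TodoTable;
}

// The compiler now enforces the table shape:
const row: Database["todo"] = { task: "write docs", status: false };
// const bad: Database["todo"] = { task: "x", status: "done" }; // rejected at compile time
console.log(row.task); // "write docs"
```

This is what "types as living documentation" means in practice: a query result typed as `Database["todo"]` cannot silently drift from the migration that defined the table.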
@@ -108,6 +114,7 @@ This give you the opportunity to configure the processor filter in `processors/t
 **Understanding Processor Filters**
 
 Filters determine which document changes your processor will respond to. This is crucial for performance and functionality:
+
 - **Performance**: Only process relevant changes to avoid unnecessary work
 - **Isolation**: Different processors can handle different document types
 - **Scalability**: Distribute processing load across multiple processors
@@ -137,10 +144,10 @@ export const todoIndexerProcessorFactory =
   // Create a filter for the processor
   // This determines which document changes trigger the processor
   const filter: RelationalDbProcessorFilter = {
-    branch: ["main"],
-    documentId: ["*"],
+    branch: ["main"], // Only process changes from the "main" branch
+    documentId: ["*"], // Process changes from any document ID (* = wildcard)
     documentType: ["powerhouse/todolist"], // Only process todolist documents
-    scope: ["global"],
+    scope: ["global"], // Process global changes (not user-specific)
   };
 
   // Create the processor instance
@@ -155,8 +162,9 @@ export const todoIndexerProcessorFactory =
 ```
 
 **Filter Options Explained:**
+
 - **`branch`**: Which document branches to monitor (usually "main" for production data)
-- **`documentId`**: Specific document IDs to watch ("
+- **`documentId`**: Specific document IDs to watch ("\*" means all documents)
 - **`documentType`**: Document types to process (ensures type safety)
 - **`scope`**: Whether to process global changes or user-specific ones
 
@@ -167,12 +175,14 @@ Now implement the actual processor logic in `processors/todo-indexer/index.ts` b
 **Understanding the Processor Lifecycle**
 
 The processor has several key methods:
+
 - **`initAndUpgrade()`**: Runs once when the processor starts (perfect for running migrations)
 - **`onStrands()`**: Runs every time relevant document changes occur (this is where the main logic goes)
 - **`onDisconnect()`**: Cleanup when the processor shuts down
 
 **What are "Strands"?**
 Strands represent a batch of operations that happened to documents. Each strand contains:
+
 - Document ID and metadata
 - Array of operations (create, update, delete, etc.)
 - Previous and resulting document states
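The strand-handling idea can be sketched in plain TypeScript: `onStrands()` folds each strand's operations into table rows. The operation names (`ADD_TODO_ITEM`, `DELETE_TODO_ITEM`) and input shapes below are illustrative assumptions, not the generated todolist API, and the `Map` stands in for the real "todo" table.

```typescript
// Minimal sketch of the onStrands() pattern, under assumed operation names.
type Operation = { type: string; input: { text?: string } };
type Strand = { documentId: string; operations: Operation[] };

// In-memory stand-in for the "todo" table defined in the migration,
// keyed on "task" just like the primary key.
const todoTable = new Map<string, { task: string; status: boolean }>();

function onStrands(strands: Strand[]): void {
  for (const strand of strands) {
    for (const op of strand.operations) {
      if (op.type === "ADD_TODO_ITEM" && op.input.text) {
        // Mirrors an insert keyed on the "task" primary key
        todoTable.set(op.input.text, { task: op.input.text, status: false });
      } else if (op.type === "DELETE_TODO_ITEM" && op.input.text) {
        todoTable.delete(op.input.text);
      }
    }
  }
}

onStrands([
  {
    documentId: "doc-1",
    operations: [{ type: "ADD_TODO_ITEM", input: { text: "write docs" } }],
  },
]);
console.log(todoTable.get("write docs")); // { task: "write docs", status: false }
```

The real implementation replaces the `Map` with the namespaced relational database handle, but the shape is the same: iterate strands, iterate operations, translate each operation into a database write.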
@@ -261,6 +271,7 @@ ph generate --subgraph todo
 **What is a Subgraph?**
 
 A subgraph is a GraphQL schema that exposes your processed data to clients. It:
+
 - Provides a standardized API for accessing your relational database data
 - Integrates with the Powerhouse supergraph for unified data access
 - Supports both queries (reading data) and mutations (modifying data)
@@ -275,19 +286,17 @@ import { gql } from "graphql-tag";
 import type { DocumentNode } from "graphql";
 
 export const schema: DocumentNode = gql`
+  # Define the structure of a todo item as returned by GraphQL
+  type ToDoListEntry {
+    task: String! # The task description (! means required/non-null)
+    status: Boolean! # The completion status (true = done, false = pending)
+  }
 
-  # Define
-  type
-
-
-  }
-
-  # Define available queries
-  type Query {
-    todos(driveId: ID!): [ToDoListEntry] # Get array of todos for a specific drive
-  }
+  # Define available queries
+  type Query {
+    todos(driveId: ID!): [ToDoListEntry] # Get array of todos for a specific drive
+  }
 `;
-
 ```
 
 Open `./subgraphs/todo/resolvers.ts` and configure the resolvers:
@@ -307,18 +316,21 @@ export const getResolvers = (subgraph: Subgraph) => {
     todos: {
       // Resolver function for the "todos" query
       // Arguments: parent object, query arguments, context, GraphQL info
-      resolve: async (_: any, args: {driveId: string}) => {
+      resolve: async (_: any, args: { driveId: string }) => {
         // Query the database using the processor's static query method
         // This gives us access to the namespaced database for the specific drive
-        const todos = await TodoIndexerProcessor.query(
-          .
-
-
+        const todos = await TodoIndexerProcessor.query(
+          args.driveId,
+          relationalDb,
+        )
+          .selectFrom("todo") // Select from the "todo" table
+          .selectAll() // Get all columns
+          .execute(); // Execute the query
 
         // Transform database results to match GraphQL schema
         return todos.map((todo) => ({
-          task: todo.task,
-          status: todo.status,
+          task: todo.task, // Map database "task" column to GraphQL "task" field
+          status: todo.status, // Map database "status" column to GraphQL "status" field
         }));
       },
     },
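From the client's side, calling the `todos` query defined in this subgraph needs no special library; a plain `fetch` POST is enough. The endpoint and field names below follow the examples in this tutorial, but adjust them to your own setup.

```typescript
// Sketch: calling the subgraph's "todos" query with plain fetch.
// The endpoint URL and result shape follow this tutorial's examples.
const TODOS_QUERY = `
  query Todos($driveId: ID!) {
    todos(driveId: $driveId) {
      task
      status
    }
  }
`;

async function fetchTodos(
  endpoint: string,
  driveId: string,
): Promise<{ task: string; status: boolean }[]> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // GraphQL over HTTP: a JSON body with "query" and "variables"
    body: JSON.stringify({ query: TODOS_QUERY, variables: { driveId } }),
  });
  const json = await res.json();
  return json.data.todos;
}

// Usage (requires a running reactor):
// fetchTodos("http://localhost:4001/graphql", "powerhouse").then(console.log);
```

The same request works from any GraphQL client (Apollo, urql, curl); the resolver above is what answers it on the server side.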
@@ -327,19 +339,19 @@ export const getResolvers = (subgraph: Subgraph) => {
 };
 ```
 
-
 ## Now query the data via the supergraph.
 
 **Understanding the Supergraph**
 
 The Powerhouse supergraph is a unified GraphQL endpoint that combines:
+
 - **Document Models**: Direct access to your Powerhouse documents
 - **Subgraphs**: Custom data views from your processors
 - **Built-in APIs**: System functionality like authentication and drives
 
 This unified approach means you can query document state AND processed data in a single request, which is perfect for building rich user interfaces.
 
-The Powerhouse supergraph for any given remote drive or reactor can be found under `http://localhost:4001/graphql`. The gateway / supergraph available on `/graphql` combines all the subgraphs, except for the drive subgraph (which is accessible via `/d/:driveId`). To access the endpoint, start the reactor and navigate to the URL with `graphql` appended. The following commands explain how you can test & try the supergraph.
+The Powerhouse supergraph for any given remote drive or reactor can be found under `http://localhost:4001/graphql`. The gateway / supergraph available on `/graphql` combines all the subgraphs, except for the drive subgraph (which is accessible via `/d/:driveId`). To access the endpoint, start the reactor and navigate to the URL with `graphql` appended. The following commands explain how you can test & try the supergraph.
 
 - Start the reactor:
 
@@ -352,8 +364,9 @@ The Powerhouse supergraph for any given remote drive or reactor can be found und
 ```
 http://localhost:4001/graphql
 ```
-
-
+
+The supergraph allows you to both query & mutate data from the same endpoint.
+Read more about [subgraphs](/academy/MasteryTrack/WorkWithData/UsingSubgraphs)
 
 <details>
 <summary>**Example: Complete Data Flow from Document Operations to Relational Database**</summary>
@@ -361,6 +374,7 @@ Read more about [subgraphs](/academy/MasteryTrack/WorkWithData/UsingSubgraphs)
 **Understanding the Complete Data Pipeline**
 
 This comprehensive example demonstrates the **entire data flow** in a Powerhouse application:
+
 1. **Storage Layer**: Create a drive (document storage container)
 2. **Document Layer**: Create a todo document and add operations
 3. **Processing Layer**: Watch the relational database processor automatically index changes
@@ -381,7 +395,8 @@ mutation DriveCreation($name: String!) {
 }
 ```
 
-Variables:
+Variables:
+
 ```json
 {
   "driveId": "powerhouse",
@@ -404,6 +419,7 @@ mutation Mutation($driveId: String, $name: String) {
 ```
 
 Variables:
+
 ```json
 {
   "driveId": "powerhouse",
@@ -412,6 +428,7 @@ Variables:
 ```
 
 Result:
+
 ```json
 {
   "data": {
@@ -429,12 +446,17 @@ Result:
 **What's Happening**: Each time we add a todo item, we're creating a new **operation** in the document's history. Our relational database processor is listening for these operations in real-time.
 
 ```graphql
-mutation Mutation(
+mutation Mutation(
+  $driveId: String
+  $docId: PHID
+  $input: ToDoList_AddTodoItemInput
+) {
   ToDoList_addTodoItem(driveId: $driveId, docId: $docId, input: $input)
 }
 ```
 
 Variables:
+
 ```json
 {
   "driveId": "powerhouse",
@@ -448,6 +470,7 @@ Variables:
 ```
 
 Result:
+
 ```json
 {
   "data": {
@@ -456,7 +479,8 @@ Result:
 }
 ```
 
-💡 **What Happens Next**:
+💡 **What Happens Next**:
+
 1. **Document Model**: Stores the operation and updates document state
 2. **Reactor**: Broadcasts the operation to all listening processors
 3. **Our Processor**: Automatically receives the operation and creates a database record
@@ -489,6 +513,7 @@ query Query($driveId: ID!) {
 ```
 
 Variables:
+
 ```json
 {
   "driveId": "powerhouse"
@@ -496,6 +521,7 @@ Variables:
 ```
 
 Response:
+
 ```json
 {
   "data": {
@@ -541,12 +567,14 @@ Response:
 ### **🔍 Data Analysis: Understanding What You're Seeing**
 
 **Document Model Data (`ToDoList.getDocuments`):**
+
 - ✅ **Current State**: Shows the final todo items as they exist in the document
 - ✅ **User-Friendly**: Displays actual todo text like "complete mutation"
 - ✅ **Real-Time**: Always reflects the latest document state
 - ❌ **Limited History**: Doesn't show how the document changed over time
 
 **Processed Relational Data (`todos`):**
+
 - ✅ **Operation History**: Shows each individual operation that occurred
 - ✅ **Audit Trail**: You can see the sequence (0, 1, 2) of operations
 - ✅ **Analytics Ready**: Perfect for counting operations, tracking changes
@@ -556,11 +584,13 @@ Response:
 ---
 
 **Key Differences:**
+
 - **Document Query**: Gets the current state directly from the document model
 - **Subgraph Query**: Gets processed/transformed data from your relational database
 - **Combined Power**: You can query both in a single GraphQL request for rich UIs
 
-This demonstrates how the supergraph provides a unified interface to both your document models and your custom subgraphs, allowing you to query and mutate data from the same endpoint.
+This demonstrates how the supergraph provides a unified interface to both your document models and your custom subgraphs, allowing you to query and mutate data from the same endpoint.
+
 </details>
 
 ## Use the Data in Frontend Applications
@@ -568,6 +598,7 @@ This demonstrates how the supergraph provides a unified interface to both your d
 **Integration Options**
 
 Your processed data can now be consumed by any GraphQL client:
+
 - **React**: Using Apollo Client, urql, or Relay
 - **Next.js**: API routes, getServerSideProps, or app router
 - **Mobile Apps**: React Native, Flutter, or native iOS/Android
@@ -583,6 +614,7 @@ Your processed data can now be consumed by any GraphQL client:
 **Why API Routes?**
 
 Next.js API routes are useful when you need to:
+
 - Add server-side authentication or authorization
 - Transform data before sending to the client
 - Implement caching or rate limiting
@@ -591,11 +623,11 @@ Next.js API routes are useful when you need to:
 
 ```ts
 // pages/api/todos.ts
-import { type NextApiRequest, type NextApiResponse } from "next"
+import { type NextApiRequest, type NextApiResponse } from "next";
 
 export default async function handler(
   req: NextApiRequest,
-  res: NextApiResponse
+  res: NextApiResponse,
 ) {
   // Only allow GET requests for this endpoint
   if (req.method !== "GET") {
@@ -628,13 +660,13 @@ export default async function handler(
     });
 
     const data = await response.json();
-
+
     // Return the todos array from the GraphQL response
     res.status(200).json(data.data.todoList);
   } catch (error) {
     // Log the error for debugging (in production, use proper logging)
     console.error("Failed to fetch todos:", error);
-
+
     // Return a generic error message to the client
     res.status(500).json({ error: "Failed to fetch todos" });
   }
@@ -651,16 +683,14 @@ You've successfully created a relational database processor that:
 4. ✅ **Exposes data through GraphQL** - Makes processed data available via a unified API
 5. ✅ **Can be consumed by frontend applications** - Ready for integration with any GraphQL client
 
-
 This processor will automatically sync your document changes to the relational database, making the data available for complex queries, reporting, and integration with other systems.
 
 **Real-World Applications:**
 
 This pattern is commonly used for:
+
 - **Analytics dashboards** showing document usage patterns
 - **Business intelligence** reports on document data
 - **Integration** with existing enterprise systems
 - **Search and filtering** with complex SQL queries
 - **Data archival** and compliance requirements
-
-
@@ -10,10 +10,12 @@ But the **same queries can be used for any other document model**.
 The queries below focus on 2 different ways of receiving the data.
 We will show how you can receive:
 
-> #### 1. The complete state of the document:
+> #### 1. The complete state of the document:
+>
 > Such as the array of accounts, Special Purpose Vehicles (SPVs), fixed income types, fee types, portfolio details, and transactions associated with a particular RWA-report.
 
 > #### 2. Only the latest changes and updates:
+>
 > Or specific operations of a document by registering a listener with a specific filter.
 
 ### Adding the specific document drive to Connect with a URL
@@ -26,20 +28,20 @@ Get access to an organisations drive instances by adding this drive to your Conn
 
 Whenever you want to start a query from a document within connect you can open up switchboard by looking for the switchboard logo in the top right hand corner of the document editor interface, or by clicking a document in the connect drive explorer and opening the document in switchboard. This feature will not be available for your local drives as they are not hosted on switchboard.
 
-
-Right click a document and find a direct link to switchboard GraphQL playground
+
+Right click a document and find a direct link to switchboard GraphQL playground\*
 
 ## Querying data from Connect in Apollo Studio
 
 Aside from switchboard you're able to make use of GraphQL interfaces such as Apollo Studio.
-When opening the document in switchboard the endpoint will be visible at the top of the interface.
+When opening the document in switchboard the endpoint will be visible at the top of the interface. Copy this endpoint and use it as your API endpoint in Apollo.
 
 
-
+_The endpoint you'll be using for any other GraphQL playgrounds or sandboxes_
 
 ### 1. Querying the complete state of a document
 
-This example query is structured to request a document by its unique identifier (id).
+This example query is structured to request a document by its unique identifier (id).
 It extracts common fields such as id, name, documentType, revision, created, and lastModified.
 
 Additionally, it retrieves specific data related to the 'Real World Assets' document model, including accounts, SPVs, fixed income types, fee types, portfolio, and transactions. The RWA section of the query is designed to pull in detailed information about the financial structure and transactions of real-world assets managed within the document model.
@@ -58,6 +60,7 @@ Additionally, it retrieves specific data related to the 'Real World Assets' docu
|
|
|
58
60
|
#### Real World Assets (RWA) Specific Fields

**State**

- `accounts`
  - `id`: Unique identifier for the account.
  - `reference`: Reference code or number associated with the account.
- `coupon`: Coupon rate of the fixed income asset.

**Cash**

- `id`: Unique identifier for the cash holding.
- `spvId`: Identifier for the SPV associated with the cash holding.
- `currency`: Currency of the cash holding.

</details>

```graphql title="An example query for the full state of a document"
query {
  document(id: "") {
    name
    documentType
    # ...remaining state fields elided in this excerpt...
  }
}
```

### 2. Querying for the latest updates or specific documents

This query is particularly useful when you only need the latest changes from the document drive.

### 2.1 Registering a listener

For this purpose we support adding listeners through a GraphQL mutation, such as the PullResponderListener below.

```graphql
mutation registerListener($filter: InputListenerFilter!) {
  # ...mutation body elided in this excerpt...
}
```

### 2.2 Defining the filter

Through this listener you can define the filter with query variables.
This allows you to filter for specific document IDs or lists of IDs, documentTypes, scopes, or branches.
Branches allow you to query different versions of a document in case there is a conflict across different versions of the document, or when contributors are maintaining separate versions with the help of branching.

In this case we're filtering by document type `makerdao/rwa-portfolio`.
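
As a sketch, the matching query variables could look like the following. The exact fields of `InputListenerFilter` (and the use of `*` wildcards) are assumptions for illustration and may differ in your deployment:

```json
{
  "filter": {
    "documentType": ["makerdao/rwa-portfolio"],
    "documentId": ["*"],
    "scope": ["*"],
    "branch": ["*"]
  }
}
```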

This combination of query + query variables will return a listenerId, which can be used in the next step to query for specific strands.



_An example of registering a listener and receiving a listenerId back._
:::info
A strand in this scenario can be understood as a list of operations that has been applied to the RWA portfolio, or any other document. As a query variable, you'll want to add the received listenerId from the previous step together with the pullStrands query below.
:::

```graphql title="Pullstrands query"
query pullStrands($listenerId: ID, $since: Date) {
  system {
    sync {
      strands(listenerId: $listenerId, since: $since) {
        driveId
        documentId
        scope
        # ...remaining strand fields elided in this excerpt...
      }
    }
  }
}
```



_An example of using the listenerId to pull specific strands (or document operations)._

In case you'd only like to receive the latest operations of a document, the latest timestamp can be used as a filter in the `since` query variable to only get the most relevant or latest changes.

:::info
A "strand" within the context of Powerhouse's Document Synchronization Protocol also refers to a single synchronization channel that connects exactly one unit of synchronization to another, with all four parameters (drive_url, doc_id, scope, branch) set to fixed values. This setup means that synchronization happens at a granular level, focusing on one precise aspect of synchronization between two distinct instances of a document or document drive.
:::
### Guidelines for selecting and combining dimensions

1. **Understand the Purpose of Analysis**
   Before selecting dimensions, clarify the objective of your analysis. Are you looking to track expenses for a specific project, analyze budget utilization, or examine transaction patterns? Your objective will guide which dimensions are most relevant.

2. **Choose Relevant Dimensions**
   Select dimensions that align with your analytical goals. For instance, use the 'Project' dimension for project-based financial tracking or 'Wallet' for blockchain transaction analysis.

3. **Combining Dimensions for Depth**
   Combine multiple dimensions to gain more nuanced insights. For example, you might combine 'Budget' and 'Category' to understand how different categories of expenses contribute to overall budget utilization within a specific area.

4. **Hierarchy and Path Considerations**
   Pay attention to the hierarchical structure in dimension paths. For instance, paths like `atlas/scopes/SUP/I/PHOENIX/` suggest a structured breakdown that can be crucial for detailed analysis.

5. **Utilize Descriptions for Context**
   Where available, use the descriptions provided with dimension values to understand the context and relevance of each dimension to your analysis. This is particularly helpful in dimensions with null labels, where the path and description provide critical information.

6. **Avoid Over-Complication**
   While combining dimensions can provide depth, avoid overly complex combinations that might lead to confusing or inconclusive results. Stick to combinations that directly serve your analysis objectives.

7. **Use Icons for Quick Reference**
   Where icons are available, they can be used as a quick visual reference to identify different dimensions or categories, particularly in user interfaces where rapid identification is beneficial.

8. **Experiment and Iterate**
   Don't hesitate to experiment with different combinations of dimensions to see which provide the most meaningful insights. The flexibility of the dimensions allows for various permutations and combinations to suit diverse analytical needs.

9. **Stay Updated**
   Keep abreast of any changes or additions to the dimensions within the analytics engine, as this can impact ongoing and future analyses.

In the provided GraphQL query, each field and element plays a specific role in describing the data returned:

- `series(filter: $filter)`: This field represents a collection of data points or a data set that matches the criteria specified by the filter. It's an array of results, where each result is a time-bound statistical representation of the filtered data. And passes the `$filter` variable as an argument to determine the scope of the data returned.

- `period`: Within each series, this field denotes the aggregation period for the data (e.g., monthly, quarterly, annually). It's a label describing the time span each series covers.

- `start`: This is the starting date and time of the data series, indicating when the period begins.

- `end`: This is the ending date and time of the data series, indicating when the period ends.
- `unit`: This indicates the unit of measurement for the metric, such as quantities, currency (e.g., DAI), or percentages.
- `dimensions`: A nested array that provides context for the metric by breaking it down into finer categories or segments, such as 'project' or 'category'. Each dimension can contain:
  - `name`: The identifier or key for the dimension.
  - `path`: A structured representation of the dimension's hierarchy or location within a dataset.
  - `label`: A human-readable label for the dimension, which can be used for display purposes.
  - `description`: A brief explanation of the dimension to give users an understanding of what it represents.
  - `icon`: A graphical representation or icon associated with the dimension for easier identification in user interfaces.

- `value`: The actual numerical value of the metric for each row within the specified period.
- `sum`: A total or aggregated value of the metric over the entire period, providing a summarized figure.
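
The fields above can be combined into a query along these lines. This is a sketch only: the operation name, the `AnalyticsFilter` type name, and the exact nesting of rows are assumptions, while the field names follow the list above:

```graphql
query analytics($filter: AnalyticsFilter) {
  analytics {
    series(filter: $filter) {
      period
      start
      end
      rows {
        unit
        value
        sum
        dimensions {
          name
          path
          label
          description
          icon
        }
      }
    }
  }
}
```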

In the analytics engine, the filter object in a GraphQL query is crucial for tailoring the data that is returned.

A filter object must include all the following parameters:
1. Start Date (`start`): The beginning date of the data retrieval period.
2. End Date (`end`): The end date of the data retrieval period.

In the analytics engine, dimensions play a critical role in segmenting and analyzing data. The `select` field within a dimension and its correlation with the level of detail (`lod`) parameter are particularly crucial for tailoring your query to retrieve the most relevant and precise data. Here's a detailed explanation of their importance:

**The `select` Field in Dimensions**

- **Function**: The select field specifies the exact segment or category within a dimension that you want to analyze. For example, in a dimension named "project," the select field could specify a particular project like "SES" or "Atlas."
- **Path Specification**: The value set in the select field often represents a path in the data hierarchy. This path directs the query to focus on specific segments or nodes within the broader dimension category.
- **Precision in Data Retrieval**: By setting the right path in the select field, you ensure that the query fetches data pertinent only to that specific segment, leading to more targeted and relevant insights.
- **Example**: If you set the **select** field to "`atlas/scopes/SUP/I/PHOENIX/`" in the "`budget`" dimension, the query will retrieve data related to the budget allocated specifically for the "`PHOENIX`" project under the "`SUP`" scope in the "`Atlas`" system.

**The level of detail (`lod`) Parameter**

- **Granularity Within Dimensions**: While the select field specifies what to select within a dimension, the `lod` parameter determines how detailed or summarized the information should be.
- **Hierarchy Levels**: Different levels in `lod` represent different levels of detail in the data hierarchy. A higher `lod` value typically means a more detailed breakdown, while a lower value indicates a more summarized or aggregated view.
- **Correlation with `select` Path**: The lod value you choose should correspond appropriately to the path specified in the `select` field. A mismatch might lead to data that is either too granular or too generalized for what is needed.
- **Impact on Analysis**: The level of detail can significantly affect the analysis. For instance, a high lod can provide in-depth insights into a specific area, useful for detailed audits or close examination of a particular segment. Conversely, a low lod is better for broader overviews or when comparing larger categories.

**Importance of Correct Configuration**

- **Accuracy of Results**: Setting the correct path in the select field and aligning it with an appropriate lod ensures the accuracy and relevance of the query results. Incorrect or mismatched configurations might lead to misleading insights or data that doesn't serve the intended analytical purpose.
- **Customized Analysis**: Together, the select field and lod allow for a high degree of customization in data queries. Users can tailor their data requests precisely to their specific requirements, whether they need a broad overview or a detailed breakdown.
- Follow the correct upper- or lower-case letter style for metrics, granularity, and dimensions.

6. **Currency (`currency`)**: The currency format for the financial data (e.g., DAI, MKR).
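
A sketch of a complete filter object, based on the parameters above. The `granularity` and `metrics` field names and all of the values are hypothetical placeholders; check the schema in the explorer for the exact casing and options:

```json
{
  "filter": {
    "start": "2023-01-01",
    "end": "2023-12-31",
    "granularity": "monthly",
    "metrics": ["Budget"],
    "dimensions": [
      {
        "name": "budget",
        "select": "atlas/scopes/SUP/I/PHOENIX/",
        "lod": 3
      }
    ],
    "currency": "DAI"
  }
}
```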

The filter object can be created by using the UI menu in the Apollo Studio GraphQL explorer:



_Select the filter_



_Select all filter fields and sub-fields_

## Troubleshooting