@powerhousedao/academy 3.2.0-dev.6 → 3.2.0-dev.8
package/CHANGELOG.md
CHANGED
@@ -1,3 +1,23 @@
+## 3.2.0-dev.8 (2025-07-01)
+
+### 🚀 Features
+
+- **academy:** add Drive Analytics documentation and examples ([daedc28a3](https://github.com/powerhouse-inc/powerhouse/commit/daedc28a3))
+
+### ❤️ Thank You
+
+- Guillermo Puente @gpuente
+
+## 3.2.0-dev.7 (2025-06-28)
+
+### 🚀 Features
+
+- starting to stub out a complete example of the analytics processor ([a84ed2dcf](https://github.com/powerhouse-inc/powerhouse/commit/a84ed2dcf))
+
+### ❤️ Thank You
+
+- Benjamin Jordan (@thegoldenmule)
+
 ## 3.2.0-dev.6 (2025-06-27)
 
 ### 🚀 Features
@@ -7,7 +7,7 @@ An Analytics Processor is an object that can track analytics for operations and
 The `ph-cli` utility can be used to generate the scaffolding for an Analytics Processor.
 
 ```
-
+ph generate -p MyAnalyticsProcessor --processor-type analytics
 ```
 
 This will generate two files: a class that implements `IProcessor` and a `ProcessorFactory` function that creates an instance of your processor. We can start with the factory.
@@ -340,3 +340,283 @@ export class DriveProcessorProcessor implements IProcessor {
  async onDisconnect() {}
}
```

## Learn by example: Contributor Billing

Here we document the entire development process for the contributor billing processor.

First, we set up the repository locally.

```
$ git clone git@github.com:powerhouse-inc/contributor-billing.git
$ cd contributor-billing
~/contributor-billing $ pnpm install
```

Now we can generate an analytics processor, using default settings.

```
~/contributor-billing $ ph generate -p LineItemProcessor --processor-type analytics
```

We can see what was generated:

```
~/contributor-billing $ tree processors
processors
├── index.ts
└── line-item-processor
    └── index.ts
```

> Note that `processors/index.ts` will only be created if it does not already exist. In that case, you are responsible for adding the processor to the `processorFactory` function yourself, as sketched below.
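A minimal sketch of what that manual registration might look like, assuming the factory already registers another processor (`ExistingProcessor` is a hypothetical stand-in):

```ts
// processors/index.ts (sketch): wiring the generated processor into a
// factory that already existed. ExistingProcessor is a hypothetical stand-in.
import { ProcessorRecord } from "document-drive/processors/types";
import { ExistingProcessor } from "./existing-processor/index.js";
import { LineItemProcessorProcessor } from "./line-item-processor/index.js";

export const processorFactory =
  (module: any) =>
  (driveId: string): ProcessorRecord[] => [
    {
      processor: new ExistingProcessor(module.analyticsStore),
      filter: { branch: ["main"], documentId: ["*"], scope: ["*"], documentType: ["*"] },
    },
    {
      // the newly generated processor, added by hand
      processor: new LineItemProcessorProcessor(module.analyticsStore),
      filter: { branch: ["main"], documentId: ["*"], scope: ["*"], documentType: ["*"] },
    },
  ];
```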
### `IProcessorFactory`

Let's check out the generated `index.ts` file.

```ts
/**
 * This is a scaffold file meant for customization.
 * Delete the file and run the code generator again to have it reset
 */

import { ProcessorRecord } from "document-drive/processors/types";
import { LineItemProcessorProcessor } from "./line-item-processor/index.js";

export const processorFactory =
  (module: any) =>
  (driveId: string): ProcessorRecord[] => {
    return [
      {
        processor: new LineItemProcessorProcessor(module.analyticsStore),
        filter: {
          branch: ["main"],
          documentId: ["*"],
          scope: ["*"],
          documentType: ["*"],
        },
      },
    ];
  };
```
This is described in more detail in the [ProcessorFactory](#processorfactory) section, but for our purposes, we only want our processor to run on our document type, so we should update the filter accordingly.

```ts
filter: {
  branch: ["main"],
  documentId: ["*"],
  scope: ["*"],
  documentType: ["powerhouse/billing-statement"],
},
```
### Dimension Design

Before we get into the meat of the processor, we should spend some upfront time designing the data we want to query. One way to do this is to start at the end: what do we want to see? In the case of billing statements, we will want to be able to generate reports that show:

- Total spent on headcount vs non-headcount
- Total spent across all budgets
- Stacked bar chart of total spent each month, grouped by budget
- Stacked bar chart of total spent each month, grouped by expense category
- Total spent each year across all budgets
- Total spent each year, grouped by budget
- Total spent per month, grouped by budget
- Total spent over the last 30 days, grouped by budget

From here, we can deconstruct the different criteria we would like to group data across (sketched as paths below):

- time period
- budget
- category
- contributor
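As a sketch, the three non-time criteria map onto namespaced dimension paths like the ones the queries below will use (the leaf names here are placeholders):

```ts
// candidate dimension paths, one namespace per criterion
"/billing-statement/budget/<budgetName>";
"/billing-statement/category/<categoryName>";
"/billing-statement/contributor/headcount";
"/billing-statement/contributor/non-headcount";
```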
The analytics engine gives us a good way to bucket based on time period, so we can focus on the other criteria, which we will specify as _dimensions_. Let's use these dimensions to stub out some of the queries we would want to run, using the `useAnalyticsQuery` hook.
### Query Design

Let's start with "Total spent on headcount vs non-headcount". First, we need to define the time-based criteria.

> Times and dates can be very confusing. This is why we use the `DateTime` class from `luxon`; see the [luxon docs](https://moment.github.io/luxon/#/math) for a quickstart.

```ts
// easy way to get the start and end of the current year
const start = DateTime.now().startOf("year");
const end = DateTime.now().endOf("year");

// this means we'll aggregate results across the entire time period
const granularity = "total";
```

Next, we want to define the metrics we want to analyze. These are the numerical values we will be aggregating over.

```ts
// the two numerical values we want to analyze are cash and powt, which are declared separately in the document model
const metrics = ["Cash", "Powt"];
```

Now we can define the dimensions we want to group by. We can imagine that we will have a `contributor` dimension, which will tell us whether or not the contributor is headcount: `/billing-statement/contributor/headcount` or `/billing-statement/contributor/non-headcount`.

> It's best practice to namespace dimensions so that we are sure our data is not colliding with other processors. In this case, we prepend the `billing-statement` namespace, which is simply a prefix we made up.
```ts
const totalSpendOnHeadcount = useAnalyticsQuery({
  start, end, granularity, metrics,
  select: {
    contributor: "/billing-statement/contributor"
  },
  lod: {
    contributor: 3,
  },
});
```
It is very important to note that the `lod` parameter is used to specify the level of detail we want to see. In this case, we want to see results grouped by contributor, so we set `lod` to `3`. This means we will get separate metric results for `/billing-statement/contributor/headcount` and `/billing-statement/contributor/non-headcount`.
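Spelled out, the grouping behavior described above looks like this (a sketch, not literal API output):

```ts
// lod controls grouping depth along the selected dimension path:
// lod: { contributor: 0 } -> all results lumped into a single bucket
// lod: { contributor: 3 } -> separate results for
//   /billing-statement/contributor/headcount and
//   /billing-statement/contributor/non-headcount
```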
We can use these same strategies to create queries for the other criteria we want to group by.
```ts
const totalSpend = useAnalyticsQuery({
  start,
  end,
  granularity: "total", // <--- this means we'll get results for the entire time period
  metrics: ["Cash", "Powt"],
  select: {
    budget: "/billing-statement"
  },
  lod: {
    budget: 0, // <--- this means we'll get all results lumped together
  },
});

const monthlySpendByBudget = useAnalyticsQuery({
  start,
  end,
  granularity: "monthly", // <--- this means we'll get results grouped by month
  metrics: ["Cash", "Powt"],
  select: {
    budget: "/billing-statement/budget"
  },
  lod: {
    budget: 3, // <--- this means we'll get results grouped by "/billing-statement/budget/budget1", "/billing-statement/budget/budget2", etc.
  },
});

const monthlySpendByCategory = useAnalyticsQuery({
  start,
  end,
  granularity: "monthly", // <--- this means we'll get results grouped by month
  metrics: ["Cash", "Powt"],
  select: {
    category: "/billing-statement/category"
  },
  lod: {
    category: 3, // <--- this means we'll get results grouped by "/billing-statement/category/category1", "/billing-statement/category/category2", etc.
  },
});

const yearlySpendByBudget = useAnalyticsQuery({
  start: DateTime.fromObject({ year: 2022 }),
  end: DateTime.now().endOf("year"),
  granularity: "yearly", // <--- this means we'll get results grouped by year
  metrics: ["Cash", "Powt"],
  select: {
    budget: "/billing-statement/budget"
  },
  lod: {
    budget: 3, // <--- this means we'll get results grouped by "/billing-statement/budget/budget1", "/billing-statement/budget/budget2", etc.
  },
});

const last30DaysSpendByBudget = useAnalyticsQuery({
  start: DateTime.now().minus({ days: 30 }),
  end: DateTime.now(),
  granularity: "daily", // <--- this means we'll get results grouped by day
  metrics: ["Cash", "Powt"],
  select: {
    budget: "/billing-statement/budget"
  },
  lod: {
    budget: 3, // <--- this means we'll get results grouped by "/billing-statement/budget/budget1", "/billing-statement/budget/budget2", etc.
  },
});
```
### Source Design

The final consideration is source design. While dimensions and sources both use path syntax, _the paths are unrelated_. That is, a path used in an AnalyticsSeries `source` does not affect a path used in a `dimension`, and vice versa. The `source` attribute of an analytics series is a composable mechanism for tracking down _where the data came from_.

This turns out to be an important consideration: when we query data, we will likely also want to subscribe to a set of sources so that the data updates later.

For instance, say we take our monthly spend by category query:

```ts
const monthlySpendByCategory = useAnalyticsQuery({
  start,
  end,
  granularity: "monthly",
  metrics: ["Cash", "Powt"],
  select: {
    category: "/billing-statement/category"
  },
  lod: {
    category: 3,
  },
});
```

This gives us the results we're looking for, but, by design, there may be many different `AnalyticsSeries` objects that affect this query. Thus, the hook does not know what to listen to. This is where our `source` design comes in. Generally, we will want to relate analytics to a drive and/or document.
```ts
// this source will match all analytics updates from any document in the drive
const driveSource = AnalyticsPath.fromString(`billing-statement/${drive.header.id}`);

// this source will match all analytics updates from a specific document in a drive
const documentSource = AnalyticsPath.fromString(`billing-statement/${drive.header.id}/${document.header.id}`);
```

Passing such a source as the second argument ties the query to updates from that drive:
```ts
const { state, data: drive } = useSelectedDrive();

const results = useAnalyticsQuery({
  start, end,
  granularity: "monthly",
  metrics: ["Cash", "Powt"],
  select: {
    category: "/billing-statement/category"
  },
  lod: {
    category: 3,
  },
},
{
  sources: [
    `/billing-statement/${drive.header.id}/`
  ],
});
```
### `IProcessor`

Now that we have designed our data, we can open up `line-item-processor/index.ts` to add the custom logic we're looking for. This will live in the `onStrands` function.
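The generated scaffold leaves `onStrands` empty. Below is a minimal sketch of what an implementation might look like, not the published contributor-billing code: the `ADD_LINE_ITEM` operation type, the line-item field names, and the `addSeriesValues` store call are all assumptions here, and import paths may differ by package version.

```ts
// Sketch only: operation and field names are assumptions about the
// billing-statement document model; import paths may vary by version.
import { AnalyticsPath } from "@powerhousedao/analytics-engine-core";
import { DateTime } from "luxon";
import { IProcessor } from "document-drive/processors/types";

export class LineItemProcessorProcessor implements IProcessor {
  constructor(private readonly analyticsStore: any) {}

  async onStrands(strands: any[]): Promise<void> {
    for (const strand of strands) {
      // one source per document, matching the source design above
      const source = AnalyticsPath.fromString(
        `billing-statement/${strand.driveId}/${strand.documentId}`,
      );

      for (const operation of strand.operations) {
        // hypothetical operation type carrying a single line item
        if (operation.type !== "ADD_LINE_ITEM") continue;

        const item = operation.input; // assumed shape: { cash, powt, budget, category, isHeadcount }
        const contributor = item.isHeadcount ? "headcount" : "non-headcount";

        // write one value per metric, tagged with the dimensions we designed
        await this.analyticsStore.addSeriesValues(
          ["Cash", "Powt"].map((metric) => ({
            start: DateTime.fromISO(operation.timestamp), // assumed ISO timestamp on the operation
            source,
            metric,
            value: metric === "Cash" ? item.cash : item.powt,
            dimensions: {
              budget: AnalyticsPath.fromString(`/billing-statement/budget/${item.budget}`),
              category: AnalyticsPath.fromString(`/billing-statement/category/${item.category}`),
              contributor: AnalyticsPath.fromString(`/billing-statement/contributor/${contributor}`),
            },
          })),
        );
      }
    }
  }

  async onDisconnect() {}
}
```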
@@ -0,0 +1,467 @@
# Drive Analytics

Drive Analytics provides automated monitoring and insights into document drive operations within Powerhouse applications. This system tracks user interactions, document modifications, and drive activity to help developers understand usage patterns and system performance.

## Overview

The Drive Analytics system consists of two specialized processors that automatically collect metrics from document drives:

1. **Drive Analytics Processor**: Tracks file and folder operations (creation, deletion, moves, etc.)
2. **Document Analytics Processor**: Tracks document content changes and state modifications

These processors run in the background, converting operations into structured time-series data that can be queried and visualized in real time.

## Available Metrics in Connect

Connect applications have Drive Analytics enabled by default through the `ReactorAnalyticsProvider`. When enabled, the system automatically tracks:

### Drive Operations Metrics
- **File Creation**: New documents added to drives
- **Folder Creation**: New directories created
- **File Updates**: Document content modifications
- **Node Updates**: Metadata changes
- **File Moves**: Documents relocated between folders
- **File Copies**: Document duplication
- **File Deletions**: Documents removed from drives

### Document Operations Metrics
- **State Changes**: Document model state modifications

## Data Sources and Structure

Drive Analytics organizes data using hierarchical source paths that allow precise querying of different analytics contexts:

### Drive Analytics Sources
Pattern: `ph/drive/{driveId}/{branch}/{scope}`
- **driveId**: Unique identifier for the document drive
- **branch**: Branch name (e.g., "main", "dev")
- **scope**: Operation scope ("global" for shared operations, "local" for device-specific)

Example: `ph/drive/abc123/main/global`

### Document Analytics Sources
Pattern: `ph/doc/{driveId}/{documentId}/{branch}/{scope}`
- **driveId**: Drive containing the document
- **documentId**: Specific document identifier
- **branch**: Branch name
- **scope**: Operation scope

Example: `ph/doc/abc123/doc456/main/global`
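As a quick sketch, these source strings can be parsed into `AnalyticsPath` values for use in the queries and subscriptions shown later in this document (the IDs here are placeholders):

```tsx
import { AnalyticsPath } from '@powerhousedao/reactor-browser/analytics';

// placeholder IDs, for illustration only
const driveSource = AnalyticsPath.fromString("ph/drive/abc123/main/global");
const documentSource = AnalyticsPath.fromString("ph/doc/abc123/doc456/main/global");
```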
## Available Metrics

### DriveOperations
Tracks file system operations within drives:
- **Value**: Always 1 (counter metric)
- **Purpose**: Count drive-level operations like file creation, deletion, moves
- **Source Pattern**: `ph/drive/*`

### DocumentOperations
Tracks document content and state changes:
- **Value**: Always 1 (counter metric)
- **Purpose**: Count document-specific operations like state changes
- **Source Pattern**: `ph/doc/*`
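Because both metrics always record a value of 1, aggregating them over a time window yields operation counts. A minimal sketch, using the hook and imports covered in the query sections below:

```tsx
import { useAnalyticsQuery, AnalyticsGranularity, DateTime } from '@powerhousedao/reactor-browser/analytics';

// total operation counts for the current day, across both metrics
const { data } = useAnalyticsQuery({
  start: DateTime.now().startOf("day"),
  end: DateTime.now(),
  granularity: AnalyticsGranularity.Total,
  metrics: ["DriveOperations", "DocumentOperations"],
});
```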
## Complete Dimensions Reference

### Drive Analytics Dimensions

#### 1. Drive Dimension
**Pattern**: `ph/drive/{driveId}/{branch}/{scope}/{revision}`
**Purpose**: Identifies the drive context with revision information
```tsx
// Examples
"ph/drive/abc123/main/global/42"
"ph/drive/my-drive/feature-branch/local/15"
```

#### 2. Operation Dimension
**Pattern**: `ph/drive/operation/{operationType}/{operationIndex}`
**Purpose**: Identifies specific operation types and their sequence

**Available Operation Types**:
- **ADD_FILE**: Create new file
- **ADD_FOLDER**: Create new folder
- **UPDATE_FILE**: Modify file content
- **UPDATE_NODE**: Modify node metadata
- **MOVE_NODE**: Move file/folder to a different location
- **COPY_NODE**: Duplicate existing file/folder
- **DELETE_NODE**: Remove file/folder

```tsx
// Examples
"ph/drive/operation/ADD_FILE/5"
"ph/drive/operation/DELETE_NODE/23"
"ph/drive/operation/MOVE_NODE/12"
```

#### 3. Target Dimension
**Pattern**: `ph/drive/target/{targetType}/{targetId}`
**Purpose**: Identifies what was targeted by the operation

**Target Types**:
- **DRIVE**: Operation affects the drive itself
- **NODE**: Operation affects a specific file/folder

```tsx
// Examples
"ph/drive/target/DRIVE/abc123"
"ph/drive/target/NODE/file456"
"ph/drive/target/NODE/folder789"
```

#### 4. Action Type Dimension
**Pattern**: `ph/drive/actionType/{actionType}/{targetId}`
**Purpose**: Categorizes operations by their effect

**Action Types**:
- **CREATED**: New items added (ADD_FILE, ADD_FOLDER)
- **DUPLICATED**: Items copied (COPY_NODE)
- **UPDATED**: Existing items modified (UPDATE_FILE, UPDATE_NODE)
- **MOVED**: Items relocated (MOVE_NODE)
- **REMOVED**: Items deleted (DELETE_NODE)

```tsx
// Examples
"ph/drive/actionType/CREATED/file123"
"ph/drive/actionType/MOVED/folder456"
"ph/drive/actionType/REMOVED/doc789"
```

### Document Analytics Dimensions

#### 1. Drive Dimension
**Pattern**: `ph/doc/drive/{driveId}/{branch}/{scope}/{revision}`
**Purpose**: Drive context for document operations
```tsx
// Examples
"ph/doc/drive/abc123/main/global/42"
```

#### 2. Operation Dimension
**Pattern**: `ph/doc/operation/{operationType}/{operationIndex}`
**Purpose**: Document-specific operation identification
```tsx
// Examples (document model operations vary by document type)
"ph/doc/operation/SET_STATE/15"
"ph/doc/operation/ADD_ITEM/8"
"ph/doc/operation/UPDATE_PROPERTY/22"
```

#### 3. Target Dimension
**Pattern**: `ph/doc/target/{driveId}/{targetType}/{documentId}`
**Purpose**: Document target identification

**Target Types**:
- **DRIVE**: Document is the drive document itself (driveId === documentId)
- **NODE**: Document is a regular document within the drive

```tsx
// Examples
"ph/doc/target/abc123/DRIVE/abc123" // Drive document
"ph/doc/target/abc123/NODE/doc456" // Regular document
```
## Query Parameters

### Time Range
- **start**: DateTime object for the query start time
- **end**: DateTime object for the query end time
- **granularity**: Time bucketing (Total, Hourly, Daily, Weekly, Monthly)
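As a sketch, a seven-day window bucketed into daily totals could be set up like this (imports follow the same pattern as the query examples below):

```tsx
import { AnalyticsGranularity, DateTime } from '@powerhousedao/reactor-browser/analytics';

// a seven-day window, bucketed into daily totals
const start = DateTime.now().minus({ days: 7 });
const end = DateTime.now();
const granularity = AnalyticsGranularity.Daily;
```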
### Filtering with Select

Use the `select` parameter to filter by specific dimension values:

```tsx
select: {
  // Filter by specific drives
  drive: [
    AnalyticsPath.fromString("ph/drive/abc123"),
    AnalyticsPath.fromString("ph/drive/xyz789")
  ],

  // Filter by operation types
  operation: [
    AnalyticsPath.fromString("ph/drive/operation/ADD_FILE"),
    AnalyticsPath.fromString("ph/drive/operation/UPDATE_FILE")
  ],

  // Filter by action types
  actionType: [
    AnalyticsPath.fromString("ph/drive/actionType/CREATED"),
    AnalyticsPath.fromString("ph/drive/actionType/UPDATED")
  ],

  // Filter by targets
  target: [
    AnalyticsPath.fromString("ph/drive/target/NODE")
  ]
}
```
### Level of Detail (LOD)

Control how deeply dimensions are grouped:

```tsx
lod: {
  drive: 1, // Group by drive only (ignore branch/scope/revision)
  operation: 1, // Group by operation type only (ignore index)
  actionType: 1, // Group by action type only (ignore target ID)
  target: 1 // Group by target type only (ignore target ID)
}
```
## Querying Analytics Data

### Using the useAnalyticsQuery Hook

The primary way to access drive analytics is through the `useAnalyticsQuery` hook:

```tsx
import { useAnalyticsQuery, AnalyticsGranularity, AnalyticsPath, DateTime } from '@powerhousedao/reactor-browser/analytics';

function DriveUsageChart({ driveId }: { driveId: string }) {
  const { data, isLoading } = useAnalyticsQuery({
    start: DateTime.now().minus({ days: 7 }),
    end: DateTime.now(),
    granularity: AnalyticsGranularity.Daily,
    metrics: ["DriveOperations"],
    select: {
      drive: [AnalyticsPath.fromString(`ph/drive/${driveId}`)],
      actionType: [
        AnalyticsPath.fromString("ph/drive/actionType/CREATED"),
        AnalyticsPath.fromString("ph/drive/actionType/UPDATED"),
        AnalyticsPath.fromString("ph/drive/actionType/REMOVED")
      ]
    },
    lod: {
      drive: 1,
      actionType: 1
    }
  });

  if (isLoading) return <div>Loading analytics...</div>;

  return (
    <div>
      {/* Render your chart using the analytics data */}
      {data?.rows.map(row => (
        <div key={row.metric}>
          {row.metric}: {row.value}
        </div>
      ))}
    </div>
  );
}
```
### Using the useDriveAnalytics Hook

For common drive analytics queries, use the specialized `useDriveAnalytics` hook:

```tsx
import { useDriveAnalytics } from '@powerhousedao/common/drive-analytics';
import { AnalyticsGranularity } from '@powerhousedao/reactor-browser/analytics';

function DriveInsights({ driveIds }: { driveIds: string[] }) {
  const analytics = useDriveAnalytics({
    filters: {
      driveId: driveIds,
      operation: ["ADD_FILE", "UPDATE_FILE", "DELETE_NODE"],
      actionType: ["CREATED", "UPDATED", "REMOVED"]
    },
    from: new Date(Date.now() - 7 * 24 * 60 * 60 * 1000).toISOString(), // 7 days ago
    to: new Date().toISOString(),
    granularity: AnalyticsGranularity.Daily,
    levelOfDetail: { drive: 1, operation: 1 }
  });

  if (analytics.isLoading) return <div>Loading...</div>;

  return (
    <div>
      <h3>Drive Activity Summary</h3>
      {analytics.data?.rows.map((row, index) => (
        <div key={index}>
          <strong>{row.dimensions.find(d => d.name === 'actionType')?.path}</strong>: {row.value}
        </div>
      ))}
    </div>
  );
}
```
### Using the useDocumentAnalytics Hook

For document-specific analytics queries, use the `useDocumentAnalytics` hook:

```tsx
import { useDocumentAnalytics } from '@powerhousedao/common/drive-analytics';
import { AnalyticsGranularity } from '@powerhousedao/reactor-browser/analytics';

function DocumentInsights({ driveId, documentIds }: { driveId: string, documentIds: string[] }) {
  const analytics = useDocumentAnalytics({
    filters: {
      driveId: [driveId],
      documentId: documentIds,
      target: ["NODE"], // Focus on document nodes vs drive documents
      branch: ["main"],
      scope: ["global"]
    },
    from: new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString(), // 24 hours ago
    to: new Date().toISOString(),
    granularity: AnalyticsGranularity.Hourly,
    levelOfDetail: {
      drive: 1,
      operation: 1,
      target: 1
    }
  });

  if (analytics.isLoading) return <div>Loading...</div>;

  return (
    <div>
      <h3>Document Activity Summary</h3>
      {analytics.data?.rows.map((row, index) => (
        <div key={index}>
          Document Operations: {row.value}
        </div>
      ))}
    </div>
  );
}
```
## Advanced Query Examples

### Filter by Multiple Criteria

```tsx
// Get file creations and updates for specific drives in the last 24 hours
const { data } = useAnalyticsQuery({
  start: DateTime.now().minus({ hours: 24 }),
  end: DateTime.now(),
  granularity: AnalyticsGranularity.Hourly,
  metrics: ["DriveOperations"],
  select: {
    drive: [
      AnalyticsPath.fromString("ph/drive/project-a"),
      AnalyticsPath.fromString("ph/drive/project-b")
    ],
    operation: [
      AnalyticsPath.fromString("ph/drive/operation/ADD_FILE"),
      AnalyticsPath.fromString("ph/drive/operation/UPDATE_FILE")
    ],
    target: [
      AnalyticsPath.fromString("ph/drive/target/NODE")
    ]
  },
  lod: {
    drive: 1,
    operation: 1
  }
});
```
### Compare Document vs Drive Operations

```tsx
// Using the specialized hooks for easier comparison
const driveOps = useDriveAnalytics({
  filters: { driveId: [driveId] },
  from: DateTime.now().minus({ days: 1 }).toISO(),
  to: DateTime.now().toISO(),
  granularity: AnalyticsGranularity.Total
});

const docOps = useDocumentAnalytics({
  filters: { driveId: [driveId] },
  from: DateTime.now().minus({ days: 1 }).toISO(),
  to: DateTime.now().toISO(),
  granularity: AnalyticsGranularity.Total
});

// Or using useAnalyticsQuery directly
const driveOpsQuery = useAnalyticsQuery({
  start: DateTime.now().minus({ days: 1 }),
  end: DateTime.now(),
  granularity: AnalyticsGranularity.Total,
  metrics: ["DriveOperations"],
  select: {
    drive: [AnalyticsPath.fromString(`ph/drive/${driveId}`)]
  }
});

const docOpsQuery = useAnalyticsQuery({
  start: DateTime.now().minus({ days: 1 }),
  end: DateTime.now(),
  granularity: AnalyticsGranularity.Total,
  metrics: ["DocumentOperations"],
  select: {
    drive: [AnalyticsPath.fromString(`ph/doc/drive/${driveId}`)]
  }
});
```
### Real-time Activity Monitoring

```tsx
// Monitor a specific drive for real-time updates
const { data } = useAnalyticsQuery(
  {
    start: DateTime.now().minus({ minutes: 10 }),
    end: DateTime.now(),
    granularity: AnalyticsGranularity.Total,
    metrics: ["DriveOperations"],
    select: {
      drive: [AnalyticsPath.fromString(`ph/drive/${driveId}`)]
    }
  },
  {
    sources: [AnalyticsPath.fromString(`ph/drive/${driveId}`)],
    refetchInterval: 5000 // Poll every 5 seconds
  }
);
```
## Real-time Updates

Analytics queries can automatically update when new data is available by specifying sources:

```tsx
const { data } = useAnalyticsQuery(
  {
    start: DateTime.now().minus({ hours: 1 }),
    end: DateTime.now(),
    granularity: AnalyticsGranularity.Total,
    metrics: ["DriveOperations"]
  },
  {
    sources: [AnalyticsPath.fromString(`ph/drive/${driveId}`)]
  }
);

// This query will automatically refetch when new operations occur in the specified drive
```
## Configuration in Connect

Drive Analytics is automatically enabled in Connect applications through feature flags:

```tsx
// In apps/connect/src/context/reactor-analytics.tsx
export function ReactorAnalyticsProvider({ children }: PropsWithChildren) {
  return (
    <AnalyticsProvider options={{ databaseName: "connect:analytics" }}>
      {connectConfig.analytics.driveAnalyticsEnabled && (
        <DriveAnalyticsProcessor />
      )}
      {children}
    </AnalyticsProvider>
  );
}
```