bulltrackers-module 1.0.105 → 1.0.107

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (33)
  1. package/README.MD +222 -222
  2. package/functions/appscript-api/helpers/errors.js +19 -19
  3. package/functions/appscript-api/index.js +58 -58
  4. package/functions/computation-system/helpers/orchestration_helpers.js +667 -113
  5. package/functions/computation-system/utils/data_loader.js +191 -191
  6. package/functions/computation-system/utils/utils.js +149 -254
  7. package/functions/core/utils/firestore_utils.js +433 -433
  8. package/functions/core/utils/pubsub_utils.js +53 -53
  9. package/functions/dispatcher/helpers/dispatch_helpers.js +47 -47
  10. package/functions/dispatcher/index.js +52 -52
  11. package/functions/etoro-price-fetcher/helpers/handler_helpers.js +124 -124
  12. package/functions/fetch-insights/helpers/handler_helpers.js +91 -91
  13. package/functions/generic-api/helpers/api_helpers.js +379 -379
  14. package/functions/generic-api/index.js +150 -150
  15. package/functions/invalid-speculator-handler/helpers/handler_helpers.js +75 -75
  16. package/functions/orchestrator/helpers/discovery_helpers.js +226 -226
  17. package/functions/orchestrator/helpers/update_helpers.js +92 -92
  18. package/functions/orchestrator/index.js +147 -147
  19. package/functions/price-backfill/helpers/handler_helpers.js +116 -123
  20. package/functions/social-orchestrator/helpers/orchestrator_helpers.js +61 -61
  21. package/functions/social-task-handler/helpers/handler_helpers.js +288 -288
  22. package/functions/task-engine/handler_creator.js +78 -78
  23. package/functions/task-engine/helpers/discover_helpers.js +125 -125
  24. package/functions/task-engine/helpers/update_helpers.js +118 -118
  25. package/functions/task-engine/helpers/verify_helpers.js +162 -162
  26. package/functions/task-engine/utils/firestore_batch_manager.js +258 -258
  27. package/index.js +105 -113
  28. package/package.json +45 -45
  29. package/functions/computation-system/computation_dependencies.json +0 -120
  30. package/functions/computation-system/helpers/worker_helpers.js +0 -340
  31. package/functions/computation-system/utils/computation_state_manager.js +0 -178
  32. package/functions/computation-system/utils/dependency_graph.js +0 -191
  33. package/functions/speculator-cleanup-orchestrator/helpers/cleanup_helpers.js +0 -160
package/README.MD CHANGED
@@ -1,223 +1,223 @@
-
- -----
-
- # bulltrackers-module
-
- **Version:** 1.0.53
-
- ## Overview
-
- This package encapsulates the core backend logic for the BullTrackers platform. It is designed as a modular, stateless library to be used within a Google Cloud Functions environment.
-
- It promotes a clean, testable, and maintainable architecture by enforcing a clear separation of concerns and using a consistent dependency injection pattern. All business logic is exposed via a single, nested `pipe` object.
-
- ## Core Concepts
-
- * **eToro User IDs (CIDs):** Sequential integers assigned upon registration. This sequential nature allows for "Block Sampling".
- * **Block Sampling:** The process of randomly sampling users within 1 million ID blocks (e.g., 1M-2M, 2M-3M) to ensure a representative dataset across different user registration cohorts.
- * **User Types:** The system differentiates between two primary user types:
-   * **`normal`**: Standard users.
-   * **`speculator`**: Users identified by specific portfolio activities, such as holding certain high-risk assets.
- * **Dependency Injection (DI):** No function in this module initializes its own clients (like Firestore or Pub/Sub) or accesses global configuration. All logic is exposed as stateless functions that receive two arguments: `(config, dependencies)`. This makes the entire module testable and portable.
-
- ## Architecture: The `pipe` Object
-
- This module exports a single root object named `pipe`. This object organizes all application logic into distinct, self-contained "pipelines," each corresponding to a major microservice or domain.
-
- ```javascript
- // Example of the module's structure
- const { pipe } = require('bulltrackers-module');
-
- // You initialize dependencies ONCE in your main function entry point
- const dependencies = {
-   db: new Firestore(),
-   pubsub: new PubSub(),
-   logger: myLogger,
-   headerManager: new pipe.core.IntelligentHeaderManager(...),
-   proxyManager: new pipe.core.IntelligentProxyManager(...),
-   // ...etc.
- };
-
- // You then pass those dependencies into any pipe function
- async function runDiscovery(config) {
-   await pipe.orchestrator.runDiscoveryOrchestrator(config, dependencies);
- }
- ```
-
- -----
-
- ## Module Structure
-
- ### `pipe.core`
-
- Contains the fundamental, reusable classes and utilities that power the other pipes.
-
- * **`IntelligentHeaderManager`**: A class that manages a pool of browser headers, rotating them based on historical success rates to avoid API blocks. Performance is tracked in Firestore.
- * **`IntelligentProxyManager`**: A class that manages a pool of Google Apps Script proxy URLs. It selects available proxies, handles request fetching, and locks proxies that fail (e.g., due to quota errors).
- * **`FirestoreBatchManager`**: A utility class used by the Task Engine to manage stateful, asynchronous batch writes to Firestore. It handles sharding portfolio data, batching timestamp updates, and flushing processed speculator IDs.
- * **`firestoreUtils`**: A collection of stateless helper functions for common Firestore operations, such as resetting proxy locks, getting block capacities, fetching exclusion IDs, and querying users who need updates.
- * **`pubsubUtils`**: A collection of stateless helper functions for publishing messages to Pub/Sub in batches.
-
- ### `pipe.orchestrator`
-
- The "brain" of the system. This pipe is responsible for deciding *what* work needs to be done, but not *how* to do it. It's typically run on a schedule (e.g., via Cloud Scheduler).
-
- * **`runDiscoveryOrchestrator(config, dependencies)`**: The main entry point for the discovery process. It checks which user blocks are under-capacity, gets candidate CIDs to scan, and dispatches `discover` tasks.
- * **`runUpdateOrchestrator(config, dependencies)`**: The main entry point for the daily update process. It queries Firestore for all verified users who haven't been updated recently and dispatches `update` tasks.
- * **Discovery Helpers** (`checkDiscoveryNeed`, `getDiscoveryCandidates`, `dispatchDiscovery`): Sub-pipes that handle the logic for finding underpopulated blocks, generating new CIDs (while respecting exclusion lists), and publishing discovery tasks.
- * **Update Helpers** (`getUpdateTargets`, `dispatchUpdates`): Sub-pipes that find all `normal` and `speculator` users eligible for an update and publish tasks to the `Dispatcher`.
-
- ### `pipe.dispatcher`
-
- The "throttle" of the system. Its sole purpose is to receive a large number of tasks (typically from the `orchestrator`) and slowly release them in small batches to a Pub/Sub topic. This prevents the `taskEngine` from scaling too quickly and incurring high costs.
-
- * **`handleRequest(message, context, config, dependencies)`**: The main entry point, designed to be triggered by a Pub/Sub message containing an array of `tasks`.
- * **`dispatchTasksInBatches(tasks, dependencies, config)`**: The core logic that loops through tasks, publishing them in batches with a configurable delay.
-
- ### `pipe.taskEngine`
-
- The "factory" or "workhorse" of the system. This pipe is event-driven, triggered by individual Pub/Sub messages from the `orchestrator` or `dispatcher`. It executes the core data-fetching operations.
-
- * **`handleRequest(message, context, config, dependencies)`**: The main entry point. It parses the Pub/Sub message, identifies the task `type`, and routes it to the correct sub-pipe handler.
- * **`handleDiscover(task, taskId, dependencies, config)`**: Fetches public ranking data for a batch of CIDs. It filters out private/inactive users, applies heuristics (for speculators), and chains to the `verify` task for any promising candidates.
- * **`handleVerify(task, taskId, dependencies, config)`**: Fetches the portfolio of a *single* user. It verifies they meet the criteria (e.g., holds a speculator asset) and, if so, saves their data and increments the block count.
- * **`handleUpdate(task, taskId, dependencies, config)`**: Fetches the latest portfolio for an *existing, verified* user and stores the new data in a sharded Firestore document.
-
- ### `pipe.computationSystem`
-
- The "refinery" of the system. This pipe is run on a schedule *after* data collection. It loads all the raw portfolio data, runs a suite of calculations (e.g., risk scores, profit migrations), and saves the results in a separate "unified insights" collection for the API to read.
-
- * **`runOrchestration(config, dependencies)`**: The main entry point. It identifies which dates need processing (or backfilling), loads all data for those dates, initializes all calculation classes, and streams the data through them.
- * **`dataLoader`**: A set of helpers for loading the sharded portfolio data from Firestore efficiently.
- * **`computationUtils`**: A set of helpers for categorizing calculations (historical vs. daily), committing results in chunks, and finding the earliest available data date.
-
- ### `pipe.api`
-
- The "tap" that serves the processed data. This pipe is an Express.js application designed to be run as a single Cloud Function, providing a public-facing API for the frontend.
-
- * **`createApiApp(config, dependencies, unifiedCalculations)`**: The main entry point. It returns a fully configured Express app instance, complete with middleware (CORS, JSON) and all API routes.
- * **API Helpers** (`buildCalculationMap`, `validateRequest`, `fetchUnifiedData`, etc.): Sub-pipes that handle request validation, fetching data from the "unified insights" collection, and generating debug manifests.
-
- ### `pipe.maintenance`
-
- A collection of standalone utility and cleanup functions, typically run on a schedule.
-
- * **`runSpeculatorCleanup(config, dependencies)`**: De-registers stale speculators who haven't held a qualifying asset for a set grace period, and decrements the block counts.
- * **`handleInvalidSpeculator(message, context, config, dependencies)`**: An event-driven function that listens for Pub/Sub messages containing CIDs of users found to be private or invalid, adding them to an exclusion list.
- * **`runFetchInsights(config, dependencies)`**: A scheduled function to fetch general eToro market insights (e.g., buy/sell percentages).
- * **`runFetchPrices(config, dependencies)`**: A scheduled function to fetch daily closing prices for all instruments.
-
- ### `pipe.proxy`
-
- The logic for the Google Apps Script web app that acts as the secure proxy layer. This code is *not* run in the Node.js environment; it's intended to be deployed directly into an Apps Script project.
-
- * **`handlePost(e)`**: The main `doPost(e)` function for the Apps Script. It parses the request from the `IntelligentProxyManager`, executes it using `UrlFetchApp`, and returns the response.
-
- -----
-
- ## Data Flow
-
- The system operates in four distinct, decoupled stages:
-
- 1. **Stage 1: Discovery (Scheduled)**
-
-    * `pipe.orchestrator.runDiscoveryOrchestrator` runs.
-    * It determines it needs 5,000 new `normal` users.
-    * It creates 50 `discover` tasks (100 CIDs each) and publishes them to the `task-engine` topic.
-    * `pipe.taskEngine` functions spin up, `handleDiscover` finds 200 promising users.
-    * It publishes 200 `verify` tasks back to the `task-engine` topic.
-    * `pipe.taskEngine` functions `handleVerify` these users, and 75 are saved to the database.
-
- 2. **Stage 2: Update (Scheduled)**
-
-    * `pipe.orchestrator.runUpdateOrchestrator` runs.
-    * It finds 50,000 existing users that need an update.
-    * It publishes *one* message containing all 50,000 tasks to the `dispatcher` topic.
-    * `pipe.dispatcher.handleRequest` receives the message.
-    * It loops, publishing 500 tasks at a time to the `task-engine` topic, with a 30-second delay between batches.
-    * `pipe.taskEngine` functions `handleUpdate` the portfolios, scaling gradually.
-
- 3. **Stage 3: Computation (Scheduled)**
-
-    * `pipe.computationSystem.runOrchestration` runs (e.g., at 1:00 AM).
-    * It loads all portfolio data collected in Stages 1 & 2.
-    * It runs all calculations and saves the results to the `unified_insights` collection.
-
- 4. **Stage 4: Serving (On-Demand)**
-
-    * A frontend user loads a chart.
-    * The browser sends a request to the `pipe.api` function.
-    * The API validates the request, reads the *pre-computed* data from `unified_insights`, and returns it.
-
- ## Usage
-
- In your Google Cloud Function `index.js` file:
-
- ```javascript
- /**
-  * @fileoverview Unified Cloud Functions entry (Refactored for Pipe Architecture)
-  */
-
- // Import FieldPath here
- const { Firestore, FieldValue, FieldPath } = require('@google-cloud/firestore');
- const { PubSub } = require('@google-cloud/pubsub');
- const { logger } = require('sharedsetup')(__filename);
- const { pipe } = require('bulltrackers-module');
- const { calculations } = require('aiden-shared-calculations-unified');
- const fs = require('fs');
- const path = require('path');
-
- // --- Initialize Clients ---
- const db = new Firestore();
- const pubsub = new PubSub();
-
- // --- Load Configs ---
- const cfg = Object.fromEntries(fs.readdirSync(path.join(__dirname, 'config'))
-   .filter(f => f.endsWith('_config.js'))
-   .map(f => [path.basename(f, '_config.js'), require(path.join(__dirname, 'config', f))])
- );
-
- // --- Instantiate Core Managers ---
- const headerManager = new pipe.core.IntelligentHeaderManager(db, logger, cfg.core);
- const proxyManager  = new pipe.core.IntelligentProxyManager(db, logger, cfg.core);
- const batchManager  = new pipe.core.FirestoreBatchManager(db, headerManager, logger, cfg.taskEngine);
-
- // --- Master Dependencies ---
- const allDependencies = {
-   db             : db,
-   pubsub         : pubsub,
-   logger         : logger,
-   FieldValue     : FieldValue,
-   FieldPath      : FieldPath,
-   headerManager  : headerManager,
-   proxyManager   : proxyManager,
-   batchManager   : batchManager,
-   firestoreUtils : pipe.core.firestoreUtils,
-   pubsubUtils    : pipe.core.pubsubUtils
- };
-
- // --- Export Cloud Function Handlers ---
- const handlers = {
-   discoveryOrchestrator         : () => pipe.orchestrator.runDiscoveryOrchestrator(cfg.orchestrator, allDependencies),
-   updateOrchestrator            : () => pipe.orchestrator.runUpdateOrchestrator(cfg.orchestrator, allDependencies),
-   speculatorCleanupOrchestrator : () => pipe.maintenance.runSpeculatorCleanup(cfg.cleanup, allDependencies),
-   fetchEtoroInsights            : () => pipe.maintenance.runFetchInsights(cfg.fetchInsights, allDependencies),
-   updateClosingPrices           : () => pipe.maintenance.runFetchPrices(cfg.priceFetcher, allDependencies),
-   computationSystem             : () => pipe.computationSystem.runOrchestration(cfg.computationSystem, allDependencies),
-
-   taskEngineHandler             : (m, c) => pipe.taskEngine.handleRequest(m, c, cfg.taskEngine, allDependencies),
-   dispatcher                    : (m, c) => pipe.dispatcher.handleRequest(m, c, cfg.dispatcher, allDependencies),
-   invalidSpeculatorHandler      : (m, c) => pipe.maintenance.handleInvalidSpeculator(m, c, cfg.invalidSpeculator, allDependencies)
- };
-
- Object.entries(handlers).forEach(([name, fn]) => exports[name] = fn);
-
- // --- API Export ---
- exports.genericApiV2 = pipe.api.createApiApp(cfg.genericApiV2, allDependencies, calculations);
-
- // --- Local API Server ---
- if (require.main === module) {
-   const port = process.env.PORT || 8080;
-   exports.genericApiV2.listen(port, () => console.log(`API listening on port ${port}`));
- }
+
+ -----
+
+ # bulltrackers-module
+
+ **Version:** 1.0.53
+
+ ## Overview
+
+ This package encapsulates the core backend logic for the BullTrackers platform. It is designed as a modular, stateless library to be used within a Google Cloud Functions environment.
+
+ It promotes a clean, testable, and maintainable architecture by enforcing a clear separation of concerns and using a consistent dependency injection pattern. All business logic is exposed via a single, nested `pipe` object.
+
+ ## Core Concepts
+
+ * **eToro User IDs (CIDs):** Sequential integers assigned upon registration. This sequential nature allows for "Block Sampling".
+ * **Block Sampling:** The process of randomly sampling users within 1 million ID blocks (e.g., 1M-2M, 2M-3M) to ensure a representative dataset across different user registration cohorts.
+ * **User Types:** The system differentiates between two primary user types:
+   * **`normal`**: Standard users.
+   * **`speculator`**: Users identified by specific portfolio activities, such as holding certain high-risk assets.
+ * **Dependency Injection (DI):** No function in this module initializes its own clients (like Firestore or Pub/Sub) or accesses global configuration. All logic is exposed as stateless functions that receive two arguments: `(config, dependencies)`. This makes the entire module testable and portable.
+
+ ## Architecture: The `pipe` Object
+
+ This module exports a single root object named `pipe`. This object organizes all application logic into distinct, self-contained "pipelines," each corresponding to a major microservice or domain.
+
+ ```javascript
+ // Example of the module's structure
+ const { pipe } = require('bulltrackers-module');
+
+ // You initialize dependencies ONCE in your main function entry point
+ const dependencies = {
+   db: new Firestore(),
+   pubsub: new PubSub(),
+   logger: myLogger,
+   headerManager: new pipe.core.IntelligentHeaderManager(...),
+   proxyManager: new pipe.core.IntelligentProxyManager(...),
+   // ...etc.
+ };
+
+ // You then pass those dependencies into any pipe function
+ async function runDiscovery(config) {
+   await pipe.orchestrator.runDiscoveryOrchestrator(config, dependencies);
+ }
+ ```
+
+ -----
+
+ ## Module Structure
+
+ ### `pipe.core`
+
+ Contains the fundamental, reusable classes and utilities that power the other pipes.
+
+ * **`IntelligentHeaderManager`**: A class that manages a pool of browser headers, rotating them based on historical success rates to avoid API blocks. Performance is tracked in Firestore.
+ * **`IntelligentProxyManager`**: A class that manages a pool of Google Apps Script proxy URLs. It selects available proxies, handles request fetching, and locks proxies that fail (e.g., due to quota errors).
+ * **`FirestoreBatchManager`**: A utility class used by the Task Engine to manage stateful, asynchronous batch writes to Firestore. It handles sharding portfolio data, batching timestamp updates, and flushing processed speculator IDs.
+ * **`firestoreUtils`**: A collection of stateless helper functions for common Firestore operations, such as resetting proxy locks, getting block capacities, fetching exclusion IDs, and querying users who need updates.
+ * **`pubsubUtils`**: A collection of stateless helper functions for publishing messages to Pub/Sub in batches.
+
+ ### `pipe.orchestrator`
+
+ The "brain" of the system. This pipe is responsible for deciding *what* work needs to be done, but not *how* to do it. It's typically run on a schedule (e.g., via Cloud Scheduler).
+
+ * **`runDiscoveryOrchestrator(config, dependencies)`**: The main entry point for the discovery process. It checks which user blocks are under-capacity, gets candidate CIDs to scan, and dispatches `discover` tasks.
+ * **`runUpdateOrchestrator(config, dependencies)`**: The main entry point for the daily update process. It queries Firestore for all verified users who haven't been updated recently and dispatches `update` tasks.
+ * **Discovery Helpers** (`checkDiscoveryNeed`, `getDiscoveryCandidates`, `dispatchDiscovery`): Sub-pipes that handle the logic for finding underpopulated blocks, generating new CIDs (while respecting exclusion lists), and publishing discovery tasks.
+ * **Update Helpers** (`getUpdateTargets`, `dispatchUpdates`): Sub-pipes that find all `normal` and `speculator` users eligible for an update and publish tasks to the `Dispatcher`.
+
+ ### `pipe.dispatcher`
+
+ The "throttle" of the system. Its sole purpose is to receive a large number of tasks (typically from the `orchestrator`) and slowly release them in small batches to a Pub/Sub topic. This prevents the `taskEngine` from scaling too quickly and incurring high costs.
+
+ * **`handleRequest(message, context, config, dependencies)`**: The main entry point, designed to be triggered by a Pub/Sub message containing an array of `tasks`.
+ * **`dispatchTasksInBatches(tasks, dependencies, config)`**: The core logic that loops through tasks, publishing them in batches with a configurable delay.
+
+ ### `pipe.taskEngine`
+
+ The "factory" or "workhorse" of the system. This pipe is event-driven, triggered by individual Pub/Sub messages from the `orchestrator` or `dispatcher`. It executes the core data-fetching operations.
+
+ * **`handleRequest(message, context, config, dependencies)`**: The main entry point. It parses the Pub/Sub message, identifies the task `type`, and routes it to the correct sub-pipe handler.
+ * **`handleDiscover(task, taskId, dependencies, config)`**: Fetches public ranking data for a batch of CIDs. It filters out private/inactive users, applies heuristics (for speculators), and chains to the `verify` task for any promising candidates.
+ * **`handleVerify(task, taskId, dependencies, config)`**: Fetches the portfolio of a *single* user. It verifies they meet the criteria (e.g., holds a speculator asset) and, if so, saves their data and increments the block count.
+ * **`handleUpdate(task, taskId, dependencies, config)`**: Fetches the latest portfolio for an *existing, verified* user and stores the new data in a sharded Firestore document.
+
+ ### `pipe.computationSystem`
+
+ The "refinery" of the system. This pipe is run on a schedule *after* data collection. It loads all the raw portfolio data, runs a suite of calculations (e.g., risk scores, profit migrations), and saves the results in a separate "unified insights" collection for the API to read.
+
+ * **`runOrchestration(config, dependencies)`**: The main entry point. It identifies which dates need processing (or backfilling), loads all data for those dates, initializes all calculation classes, and streams the data through them.
+ * **`dataLoader`**: A set of helpers for loading the sharded portfolio data from Firestore efficiently.
+ * **`computationUtils`**: A set of helpers for categorizing calculations (historical vs. daily), committing results in chunks, and finding the earliest available data date.
+
+ ### `pipe.api`
+
+ The "tap" that serves the processed data. This pipe is an Express.js application designed to be run as a single Cloud Function, providing a public-facing API for the frontend.
+
+ * **`createApiApp(config, dependencies, unifiedCalculations)`**: The main entry point. It returns a fully configured Express app instance, complete with middleware (CORS, JSON) and all API routes.
+ * **API Helpers** (`buildCalculationMap`, `validateRequest`, `fetchUnifiedData`, etc.): Sub-pipes that handle request validation, fetching data from the "unified insights" collection, and generating debug manifests.
+
+ ### `pipe.maintenance`
+
+ A collection of standalone utility and cleanup functions, typically run on a schedule.
+
+ * **`runSpeculatorCleanup(config, dependencies)`**: De-registers stale speculators who haven't held a qualifying asset for a set grace period, and decrements the block counts.
+ * **`handleInvalidSpeculator(message, context, config, dependencies)`**: An event-driven function that listens for Pub/Sub messages containing CIDs of users found to be private or invalid, adding them to an exclusion list.
+ * **`runFetchInsights(config, dependencies)`**: A scheduled function to fetch general eToro market insights (e.g., buy/sell percentages).
+ * **`runFetchPrices(config, dependencies)`**: A scheduled function to fetch daily closing prices for all instruments.
+
+ ### `pipe.proxy`
+
+ The logic for the Google Apps Script web app that acts as the secure proxy layer. This code is *not* run in the Node.js environment; it's intended to be deployed directly into an Apps Script project.
+
+ * **`handlePost(e)`**: The main `doPost(e)` function for the Apps Script. It parses the request from the `IntelligentProxyManager`, executes it using `UrlFetchApp`, and returns the response.
+
+ -----
+
+ ## Data Flow
+
+ The system operates in four distinct, decoupled stages:
+
+ 1. **Stage 1: Discovery (Scheduled)**
+
+    * `pipe.orchestrator.runDiscoveryOrchestrator` runs.
+    * It determines it needs 5,000 new `normal` users.
+    * It creates 50 `discover` tasks (100 CIDs each) and publishes them to the `task-engine` topic.
+    * `pipe.taskEngine` functions spin up, `handleDiscover` finds 200 promising users.
+    * It publishes 200 `verify` tasks back to the `task-engine` topic.
+    * `pipe.taskEngine` functions `handleVerify` these users, and 75 are saved to the database.
+
+ 2. **Stage 2: Update (Scheduled)**
+
+    * `pipe.orchestrator.runUpdateOrchestrator` runs.
+    * It finds 50,000 existing users that need an update.
+    * It publishes *one* message containing all 50,000 tasks to the `dispatcher` topic.
+    * `pipe.dispatcher.handleRequest` receives the message.
+    * It loops, publishing 500 tasks at a time to the `task-engine` topic, with a 30-second delay between batches.
+    * `pipe.taskEngine` functions `handleUpdate` the portfolios, scaling gradually.
+
+ 3. **Stage 3: Computation (Scheduled)**
+
+    * `pipe.computationSystem.runOrchestration` runs (e.g., at 1:00 AM).
+    * It loads all portfolio data collected in Stages 1 & 2.
+    * It runs all calculations and saves the results to the `unified_insights` collection.
+
+ 4. **Stage 4: Serving (On-Demand)**
+
+    * A frontend user loads a chart.
+    * The browser sends a request to the `pipe.api` function.
+    * The API validates the request, reads the *pre-computed* data from `unified_insights`, and returns it.
+
+ ## Usage
+
+ In your Google Cloud Function `index.js` file:
+
+ ```javascript
+ /**
+  * @fileoverview Unified Cloud Functions entry (Refactored for Pipe Architecture)
+  */
+
+ // Import FieldPath here
+ const { Firestore, FieldValue, FieldPath } = require('@google-cloud/firestore');
+ const { PubSub } = require('@google-cloud/pubsub');
+ const { logger } = require('sharedsetup')(__filename);
+ const { pipe } = require('bulltrackers-module');
+ const { calculations } = require('aiden-shared-calculations-unified');
+ const fs = require('fs');
+ const path = require('path');
+
+ // --- Initialize Clients ---
+ const db = new Firestore();
+ const pubsub = new PubSub();
+
+ // --- Load Configs ---
+ const cfg = Object.fromEntries(fs.readdirSync(path.join(__dirname, 'config'))
+   .filter(f => f.endsWith('_config.js'))
+   .map(f => [path.basename(f, '_config.js'), require(path.join(__dirname, 'config', f))])
+ );
+
+ // --- Instantiate Core Managers ---
+ const headerManager = new pipe.core.IntelligentHeaderManager(db, logger, cfg.core);
+ const proxyManager  = new pipe.core.IntelligentProxyManager(db, logger, cfg.core);
+ const batchManager  = new pipe.core.FirestoreBatchManager(db, headerManager, logger, cfg.taskEngine);
+
+ // --- Master Dependencies ---
+ const allDependencies = {
+   db             : db,
+   pubsub         : pubsub,
+   logger         : logger,
+   FieldValue     : FieldValue,
+   FieldPath      : FieldPath,
+   headerManager  : headerManager,
+   proxyManager   : proxyManager,
+   batchManager   : batchManager,
+   firestoreUtils : pipe.core.firestoreUtils,
+   pubsubUtils    : pipe.core.pubsubUtils
+ };
+
+ // --- Export Cloud Function Handlers ---
+ const handlers = {
+   discoveryOrchestrator         : () => pipe.orchestrator.runDiscoveryOrchestrator(cfg.orchestrator, allDependencies),
+   updateOrchestrator            : () => pipe.orchestrator.runUpdateOrchestrator(cfg.orchestrator, allDependencies),
+   speculatorCleanupOrchestrator : () => pipe.maintenance.runSpeculatorCleanup(cfg.cleanup, allDependencies),
+   fetchEtoroInsights            : () => pipe.maintenance.runFetchInsights(cfg.fetchInsights, allDependencies),
+   updateClosingPrices           : () => pipe.maintenance.runFetchPrices(cfg.priceFetcher, allDependencies),
+   computationSystem             : () => pipe.computationSystem.runOrchestration(cfg.computationSystem, allDependencies),
+
+   taskEngineHandler             : (m, c) => pipe.taskEngine.handleRequest(m, c, cfg.taskEngine, allDependencies),
+   dispatcher                    : (m, c) => pipe.dispatcher.handleRequest(m, c, cfg.dispatcher, allDependencies),
+   invalidSpeculatorHandler      : (m, c) => pipe.maintenance.handleInvalidSpeculator(m, c, cfg.invalidSpeculator, allDependencies)
+ };
+
+ Object.entries(handlers).forEach(([name, fn]) => exports[name] = fn);
+
+ // --- API Export ---
+ exports.genericApiV2 = pipe.api.createApiApp(cfg.genericApiV2, allDependencies, calculations);
+
+ // --- Local API Server ---
+ if (require.main === module) {
+   const port = process.env.PORT || 8080;
+   exports.genericApiV2.listen(port, () => console.log(`API listening on port ${port}`));
+ }
  ```
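The README shown in the diff above hinges on the `(config, dependencies)` contract: every pipe function receives its clients rather than creating them. As a rough illustration of why that keeps the library testable, the sketch below calls one documented pipe function with hand-rolled stubs. The stub shapes and the config key are illustrative assumptions, not part of the package; real deployments wire up the Firestore/PubSub clients and managers exactly as in the README's Usage example.

```javascript
// A minimal sketch, assuming the (config, dependencies) signature described in the README.
const { pipe } = require('bulltrackers-module');

const stubDependencies = {
  db: { collection: () => ({ get: async () => ({ docs: [] }) }) },          // stand-in for a Firestore client
  pubsub: { topic: () => ({ publishMessage: async () => 'msg-id' }) },      // stand-in for a PubSub client
  logger: { info: console.log, warn: console.warn, error: console.error },  // any logger-shaped object
  firestoreUtils: pipe.core.firestoreUtils,
  pubsubUtils: pipe.core.pubsubUtils,
};

// Hypothetical config: real keys come from the deployment's *_config.js files.
const stubConfig = { targetBlockCapacity: 1000 };

pipe.orchestrator.runDiscoveryOrchestrator(stubConfig, stubDependencies)
  .catch((err) => console.error('dry run failed:', err));
```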
package/functions/appscript-api/helpers/errors.js CHANGED
@@ -1,19 +1,19 @@
- /**
-  * Creates a structured JSON error response.
-  * @param {string} message
-  * @param {number} statusCode
-  * @returns {GoogleAppsScript.Content.TextOutput}
-  */
- function createErrorResponse(message, statusCode) {
-   const error = {
-     error: {
-       message,
-       code: statusCode,
-     },
-   };
-
-   return ContentService.createTextOutput(JSON.stringify(error))
-     .setMimeType(ContentService.MimeType.JSON);
- }
-
- module.exports = { createErrorResponse };
+ /**
+  * Creates a structured JSON error response.
+  * @param {string} message
+  * @param {number} statusCode
+  * @returns {GoogleAppsScript.Content.TextOutput}
+  */
+ function createErrorResponse(message, statusCode) {
+   const error = {
+     error: {
+       message,
+       code: statusCode,
+     },
+   };
+
+   return ContentService.createTextOutput(JSON.stringify(error))
+     .setMimeType(ContentService.MimeType.JSON);
+ }
+
+ module.exports = { createErrorResponse };
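Because `createErrorResponse` relies on the Apps Script `ContentService` global but is exported via `module.exports`, exercising it outside Apps Script requires stubbing that global first. A rough smoke-test sketch under that assumption (the stub shape and relative path are illustrative, not part of the package):

```javascript
// Hypothetical Node-side check; ContentService is normally provided by the Apps Script runtime.
global.ContentService = {
  MimeType: { JSON: 'application/json' },
  createTextOutput: (text) => ({
    text,
    setMimeType(mime) { this.mime = mime; return this; },
  }),
};

// Path as used from within functions/appscript-api/.
const { createErrorResponse } = require('./helpers/errors');

const out = createErrorResponse("Invalid request: 'url' and 'headers' are required.", 400);
console.log(out.text); // {"error":{"message":"Invalid request: ...","code":400}}
```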
package/functions/appscript-api/index.js CHANGED
@@ -1,58 +1,58 @@
- /**
-  * @fileoverview Google Apps Script proxy logic (v2) as a module.
-  * Can be imported and used in a doPost or other GAS handlers.
-  * NOT intended to be used as a Google Cloud Function or Otherwise, and is included in this package as an example of a proxy implementation.
-  */
-
- const { createErrorResponse } = require('./helpers/errors');
-
- /**
-  * Handles the incoming POST request object from Apps Script.
-  * @param {GoogleAppsScript.Events.DoPost} e
-  * @returns {GoogleAppsScript.Content.TextOutput}
-  */
- function handlePost(e) {
-   try {
-     // Parse the request details from the POST body
-     const requestDetails = JSON.parse(e.postData.contents);
-     const { url, headers, method, body } = requestDetails;
-
-     if (!url || !headers) {
-       return createErrorResponse("Invalid request: 'url' and 'headers' are required.", 400);
-     }
-
-     // Prepare the external request options
-     const options = {
-       headers: headers,
-       muteHttpExceptions: true,
-       method: method || 'GET',
-     };
-
-     if (body) {
-       options.payload = body;
-       if (!headers['Content-Type']) {
-         headers['Content-Type'] = 'application/json';
-       }
-     }
-
-     // Make the external request using UrlFetchApp
-     const response = UrlFetchApp.fetch(url, options);
-
-     // Package the response to send back
-     const responseData = {
-       statusCode: response.getResponseCode(),
-       headers: response.getHeaders(),
-       body: response.getContentText(),
-     };
-
-     return ContentService.createTextOutput(JSON.stringify(responseData))
-       .setMimeType(ContentService.MimeType.JSON);
-
-   } catch (error) {
-     return createErrorResponse(error.toString(), 500);
-   }
- }
-
- module.exports = {
-   handlePost,
- };
+ /**
+  * @fileoverview Google Apps Script proxy logic (v2) as a module.
+  * Can be imported and used in a doPost or other GAS handlers.
+  * NOT intended to be used as a Google Cloud Function or Otherwise, and is included in this package as an example of a proxy implementation.
+  */
+
+ const { createErrorResponse } = require('./helpers/errors');
+
+ /**
+  * Handles the incoming POST request object from Apps Script.
+  * @param {GoogleAppsScript.Events.DoPost} e
+  * @returns {GoogleAppsScript.Content.TextOutput}
+  */
+ function handlePost(e) {
+   try {
+     // Parse the request details from the POST body
+     const requestDetails = JSON.parse(e.postData.contents);
+     const { url, headers, method, body } = requestDetails;
+
+     if (!url || !headers) {
+       return createErrorResponse("Invalid request: 'url' and 'headers' are required.", 400);
+     }
+
+     // Prepare the external request options
+     const options = {
+       headers: headers,
+       muteHttpExceptions: true,
+       method: method || 'GET',
+     };
+
+     if (body) {
+       options.payload = body;
+       if (!headers['Content-Type']) {
+         headers['Content-Type'] = 'application/json';
+       }
+     }
+
+     // Make the external request using UrlFetchApp
+     const response = UrlFetchApp.fetch(url, options);
+
+     // Package the response to send back
+     const responseData = {
+       statusCode: response.getResponseCode(),
+       headers: response.getHeaders(),
+       body: response.getContentText(),
+     };
+
+     return ContentService.createTextOutput(JSON.stringify(responseData))
+       .setMimeType(ContentService.MimeType.JSON);
+
+   } catch (error) {
+     return createErrorResponse(error.toString(), 500);
+   }
+ }
+
+ module.exports = {
+   handlePost,
+ };
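The file header above notes this module is meant to back a `doPost` handler in Apps Script rather than run as a Cloud Function. A minimal sketch of that wiring, assuming the code has been copied or bundled into the Apps Script project so that `handlePost` is available there (Apps Script has no native `require`):

```javascript
// Apps Script entry point — illustrative only, not shipped in this package.
function doPost(e) {
  // Delegate the raw event to the module's handlePost, which performs the
  // proxied UrlFetchApp request and returns a JSON TextOutput.
  return handlePost(e);
}
```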