bulltrackers-module 1.0.751 → 1.0.753

@@ -0,0 +1,115 @@
+ Here is the comprehensive documentation for the **Admin Test Endpoint (`compute-admin-test`)**. This guide details the supported actions, the available overrides, and their specific impact on your infrastructure.
+
+ ### **Overview**
+
+ The Admin Test Endpoint is a privileged tool designed for **safe, isolated testing** of computations in production. It allows you to trigger runs manually, bypass scheduling locks, and, most importantly, **divert execution to test infrastructure** (custom tables or worker pools) so that production data is not polluted.
+
+ ---
+
+ ### **Global Overrides (Infrastructure Testing)**
+
+ These parameters can be applied to the **`run`** and **`run_limited`** actions to redirect the execution flow.
+
+ | Parameter | Type | Description | Impact |
+ | --- | --- | --- | --- |
+ | **`outputTable`** | `string` | The BigQuery table where results will be written. | **Data Diversion**: Instead of writing to `computation_results_v3`, the Orchestrator writes to this table. Useful for validating logic without affecting production dashboards. |
+ | **`workerUrl`** | `string` | The URL of the worker Cloud Function to invoke. | **Traffic Diversion**: If the worker pool is used, the Orchestrator sends HTTP requests to this URL instead of to the production worker. |
+ | **`useWorkerPool`** | `boolean` | Force-enable or disable the remote worker pool. | **Execution Strategy**: <br>`true`: forces remote execution (even for small batches).<br>`false`: forces local execution (inside the Admin function).<br>`undefined`: uses the system default config. |
+
+ ---
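For illustration, a request that combines all three overrides might look like the sketch below. The function URL, token handling, table name, and worker URL are placeholders for your own deployment; the payload fields are the ones documented above.

```bash
# Sketch: call the Admin Test Endpoint with all three global overrides.
# ADMIN_URL and the worker URL are placeholders; adjust for your project/region.
ADMIN_URL="https://REGION-PROJECT.cloudfunctions.net/compute-admin-test"
TOKEN=$(gcloud auth print-identity-token)

curl -s -X POST "$ADMIN_URL" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "run",
    "computation": "UserPortfolioSummary",
    "date": "2024-01-01",
    "outputTable": "computation_results_test",
    "workerUrl": "https://REGION-PROJECT.cloudfunctions.net/worker-test",
    "useWorkerPool": true
  }'
```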
+
+ ### **Supported Actions**
+
+ #### **1. `status`**
+
+ Returns the current system manifest, listing all registered computations and their schedules.
+
+ * **Configurables:** None.
+ * **Data Written:** None.
+ * **Impact:** Read-only. Low impact.
+ * **Cloud Functions:** `compute-admin-test` (Local).
+
+ #### **2. `analyze`**
+
+ Runs the scheduler logic for a specific date to determine what *would* run, without actually running it. It checks dependencies, hash changes, and locks.
+
+ * **Configurables:**
+   * `date` (YYYY-MM-DD): The target date to analyze.
+ * **Data Written:** None.
+ * **Impact:** Read-only. Low impact.
+ * **Cloud Functions:** `compute-admin-test` (Local).
+
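As a minimal sketch, an `analyze` call only needs the action and a date; it reuses `$ADMIN_URL` and `$TOKEN` from the example above and performs no writes.

```bash
# Sketch: ask the scheduler what would run for a given date, without running it.
curl -s -X POST "$ADMIN_URL" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"action": "analyze", "date": "2024-01-01"}'
```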
+ #### **3. `run`**
+
+ Executes a full computation for the specified date. This is the primary tool for manual triggers and backfills.
+
+ * **Configurables:**
+   * `computation` (Required): Name of the computation (e.g., `UserPortfolioSummary`).
+   * `date`: Target date (Default: today).
+   * `entityIds` (Array): Run only for these specific entities.
+   * `force` (Boolean): If `true`, runs even if the code/data hasn't changed (Default: `true` for testing).
+   * `dryRun` (Boolean): If `true`, computes results but **does not write to BigQuery**.
+   * *Plus the Global Overrides (`outputTable`, `workerUrl`, `useWorkerPool`)*.
+ * **Data Written:**
+   * **Default:** Writes to `config.resultStore.table` (Production).
+   * **With `outputTable`:** Writes to the specified custom table.
+   * **With `dryRun`:** No data written.
+ * **Impact:** High. Can trigger heavy BigQuery queries and write large amounts of data.
+ * **Cloud Functions:**
+   * **Local Mode:** `compute-admin-test` processes all logic.
+   * **Worker Pool Mode:** `compute-admin-test` acts as the Orchestrator; `computation-worker` (or `workerUrl`) executes the logic.
+
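A cautious first smoke test of `run` pairs `dryRun` with a handful of `entityIds`, so the full pipeline executes without touching BigQuery. The IDs below are placeholders; `$ADMIN_URL` and `$TOKEN` are as in the first example.

```bash
# Sketch: compute two specific entities end to end, but skip the BigQuery write.
curl -s -X POST "$ADMIN_URL" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "run",
    "computation": "UserPortfolioSummary",
    "date": "2024-01-01",
    "entityIds": ["12345", "67890"],
    "dryRun": true,
    "force": true
  }'
```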
+ #### **4. `run_limited`**
+
+ A safer version of `run` that automatically fetches a small random sample of entities (e.g., 5 random users) and runs the computation only for them.
+
+ * **Configurables:**
+   * `computation` (Required): Name of the computation.
+   * `limit` (Integer): Number of entities to test (Default: 10).
+   * *Plus the Global Overrides (`outputTable`, `workerUrl`, `useWorkerPool`)*.
+ * **Data Written:** Same behavior as `run`, but only for the sampled entities.
+ * **Impact:** Moderate. Executes real logic but on a strictly limited scope.
+ * **Cloud Functions:** Same as `run`.
+
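A common combination is `run_limited` plus `outputTable`: real logic runs on a small sample while the writes stay out of production. The table name is an example; `$ADMIN_URL` and `$TOKEN` are as above.

```bash
# Sketch: sample 5 entities and divert their results to a test table.
curl -s -X POST "$ADMIN_URL" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"action": "run_limited", "computation": "UserPortfolioSummary", "limit": 5, "outputTable": "computation_results_test"}'
```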
+ #### **5. `test_worker`**
+
+ Directly tests the **logic** of a worker execution locally within the Admin function. This bypasses the Orchestrator's batching and storage logic, simulating exactly what happens inside a single worker instance.
+
+ * **Configurables:**
+   * `computation` (Required): Name of the computation.
+   * `entityIds` (Required Array): Must contain at least one ID to test.
+   * `date`: Target date.
+ * **Data Written:** **None**. The result is returned directly in the HTTP response body for inspection.
+ * **Impact:** Low. Fetches data only for the specified entities and runs the logic in memory.
+ * **Cloud Functions:** `compute-admin-test` (Local only).
+
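Because `test_worker` returns the computed result in the response body, it pairs well with a JSON pretty-printer. The entity ID is a placeholder; `$ADMIN_URL` and `$TOKEN` are as above.

```bash
# Sketch: inspect a single entity's computed result; nothing is written anywhere.
curl -s -X POST "$ADMIN_URL" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"action": "test_worker", "computation": "UserPortfolioSummary", "entityIds": ["12345"], "date": "2024-01-01"}' \
  | python3 -m json.tool
```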
+ ---
+
+ ### **Summary of Data Flow**
+
+ | Scenario | Execution Location | Data Destination |
+ | --- | --- | --- |
+ | **Standard Run** | `compute-admin-test` (Local) | `computation_results_v3` (Prod) |
+ | **Standard Run + Worker Pool** | `computation-worker` (Remote) | `computation_results_v3` (Prod) |
+ | **Run + `outputTable`** | `compute-admin-test` (Local) | **`YOUR_CUSTOM_TABLE`** |
+ | **Run + `workerUrl`** | **`worker-test`** (Remote) | `computation_results_v3` (Prod) |
+ | **Run + `dryRun`** | `compute-admin-test` (Local) | *None* |
+ | **Run + `outputTable` + `workerUrl`** | **`worker-test`** (Remote) | **`YOUR_CUSTOM_TABLE`** |
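After a diverted run you can confirm that rows landed in the custom table rather than in production. The sketch below assumes the table lives in a dataset named `computations` and has a date column to filter on; both names are placeholders, so adjust them to the actual schema.

```bash
# Sketch: count rows written to the test table by the diverted run (names are placeholders).
bq query --use_legacy_sql=false \
  'SELECT COUNT(*) AS rows_written
   FROM `computations.computation_results_test`
   WHERE date = "2024-01-01"'
```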
@@ -95,6 +95,7 @@ while true; do
  echo "5) Test Specific Entities (run with entityIds)"
  echo "6) Test Worker Logic Directly (test_worker)"
  echo "7) Test Worker Pool Offloading (run with useWorkerPool)"
+ echo "8) Advanced Infrastructure Test (Custom Table/Worker)"
  echo "q) Quit"

  read -p "Enter choice: " choice
@@ -167,6 +168,45 @@ while true; do
  echo -e "${YELLOW}Running with Worker Pool enabled...${NC}"
  run_test "{\"action\": \"run\", \"computation\": \"$COMP_NAME\", \"date\": \"$TARGET_DATE\", \"useWorkerPool\": true, \"force\": true}"
  ;;
+
+ 8)
+   # Advanced Infrastructure Test (Custom Table & Worker)
+   echo -e "${YELLOW}--- Infrastructure Integration Test ---${NC}"
+   ask_var "Enter Computation Name" "UserPortfolioSummary" "COMP_NAME"
+   ask_var "Enter Date" "$DEFAULT_DATE" "TARGET_DATE"
+   ask_var "Enter Output Table Name" "computation_results_test" "OUT_TABLE"
+   ask_var "Enter Custom Worker URL (optional)" "" "WORKER_URL"
+   ask_var "Use Worker Pool? (true/false)" "true" "USE_POOL"
+   ask_var "Entity Limit (0 for Full Run)" "10" "LIMIT_NUM"
+
+   # Determine the action (run vs run_limited)
+   if [ "$LIMIT_NUM" -gt 0 ]; then
+     ACTION="run_limited"
+     LIMIT_PART=", \"limit\": $LIMIT_NUM"
+   else
+     ACTION="run"
+     LIMIT_PART=", \"force\": true"
+   fi
+
+   # Build optional JSON parts
+   TABLE_PART=""
+   if [ ! -z "$OUT_TABLE" ]; then
+     TABLE_PART=", \"outputTable\": \"$OUT_TABLE\""
+   fi
+
+   WORKER_PART=""
+   if [ ! -z "$WORKER_URL" ]; then
+     WORKER_PART=", \"workerUrl\": \"$WORKER_URL\""
+   fi
+
+   echo -e "${YELLOW}Running ${ACTION} on ${OUT_TABLE}...${NC}"
+
+   # Construct the payload. Loose string concatenation is fine here: the extra
+   # whitespace between fields is ignored when the JSON is parsed.
+   JSON="{\"action\": \"$ACTION\", \"computation\": \"$COMP_NAME\", \"date\": \"$TARGET_DATE\", \"useWorkerPool\": $USE_POOL $LIMIT_PART $TABLE_PART $WORKER_PART}"
+
+   run_test "$JSON"
+   ;;

  q|Q)
  echo "Exiting."
@@ -16,6 +16,9 @@ const QUEUE_NAME = process.env.ORCHESTRATOR_QUEUE || 'task-engine-queue';
  const LOCATION = process.env.GCP_REGION || 'europe-west1';
  const PROJECT = process.env.GCP_PROJECT_ID;

+ // --- FEATURE FLAG: Disable Normal/Speculator Users ---
+ const ENABLE_LEGACY_USERS = process.env.ENABLE_LEGACY_USERS === 'true';
+
  /**
   * ENTRY POINT: HTTP Handler for Workflow Interaction
   */
@@ -33,6 +36,14 @@ async function handleOrchestratorHttp(req, res, dependencies, config) {
    throw new Error("Missing userType or date for PLAN action");
  }

+ // --- NEW: Block Legacy Users if Disabled ---
+ if ((userType === 'normal' || userType === 'speculator') && !ENABLE_LEGACY_USERS) {
+   const msg = `[Orchestrator] SKIPPING PLAN for '${userType}': ENABLE_LEGACY_USERS is false.`;
+   logger.log('WARN', msg);
+   // Return 200 to prevent retry loops in workflows
+   return res.status(200).send({ status: 'skipped', message: msg });
+ }
+
  // Determine self-URL for callback (Cloud Task needs to call this function back)
  // We use the env var passed by GCF (FUNCTION_URI) or construct it manually
  const orchestratorUrl = orchestratorUrlOverride ||
@@ -47,6 +58,14 @@ async function handleOrchestratorHttp(req, res, dependencies, config) {
  if (!planId || !windowId) {
    throw new Error("Missing planId or windowId for EXECUTE_WINDOW action");
  }

+ // --- NEW: Block Legacy Users if Disabled (Double Check) ---
+ if ((userType === 'normal' || userType === 'speculator') && !ENABLE_LEGACY_USERS) {
+   const msg = `[Orchestrator] SKIPPING EXECUTE_WINDOW for '${userType}': ENABLE_LEGACY_USERS is false.`;
+   logger.log('WARN', msg);
+   return res.status(200).send({ status: 'skipped', message: msg });
+ }
+
  const result = await executeUpdateWindow(planId, windowId, userType, config, dependencies);
  res.status(200).send(result);

@@ -227,8 +246,13 @@ async function runDiscoveryOrchestrator(config, deps) {
  const { logger, firestoreUtils } = deps;
  logger.log('INFO', '🚀 Discovery Orchestrator triggered...');
  await firestoreUtils.resetProxyLocks(deps, config);
- if (isUserTypeEnabled('normal', config.enabledUserTypes)) await runDiscovery('normal', config.discoveryConfig.normal, config, deps);
- if (isUserTypeEnabled('speculator', config.enabledUserTypes)) await runDiscovery('speculator', config.discoveryConfig.speculator, config, deps);
+
+ if (ENABLE_LEGACY_USERS) {
+   if (isUserTypeEnabled('normal', config.enabledUserTypes)) await runDiscovery('normal', config.discoveryConfig.normal, config, deps);
+   if (isUserTypeEnabled('speculator', config.enabledUserTypes)) await runDiscovery('speculator', config.discoveryConfig.speculator, config, deps);
+ } else {
+   logger.log('INFO', 'Discovery skipped for legacy users (normal/speculator) because ENABLE_LEGACY_USERS is false.');
+ }
  }

  async function runUpdateOrchestrator(config, deps) {
@@ -237,8 +261,13 @@ async function runUpdateOrchestrator(config, deps) {
  await firestoreUtils.resetProxyLocks(deps, config);
  const enabledTypes = config.enabledUserTypes || [];

- if (isUserTypeEnabled('normal', enabledTypes)) await runUpdates('normal', config.updateConfig, config, deps);
- if (isUserTypeEnabled('speculator', enabledTypes)) await runUpdates('speculator', config.updateConfig, config, deps);
+ if (ENABLE_LEGACY_USERS) {
+   if (isUserTypeEnabled('normal', enabledTypes)) await runUpdates('normal', config.updateConfig, config, deps);
+   if (isUserTypeEnabled('speculator', enabledTypes)) await runUpdates('speculator', config.updateConfig, config, deps);
+ } else {
+   logger.log('INFO', 'Updates skipped for legacy users (normal/speculator) because ENABLE_LEGACY_USERS is false.');
+ }
+
  if (isUserTypeEnabled('popular_investor', enabledTypes)) {
    const piConfig = { ...config.updateConfig, popularInvestorRankingsCollection: config.updateConfig.popularInvestorRankingsCollection || 'popular_investor_rankings' };
    await runUpdates('popular_investor', piConfig, config, deps);
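The flag is read from the environment, so legacy (normal/speculator) processing can be toggled per deployment rather than per request. A sketch of flipping it on an already-deployed function is below; the function name and region match the tester script later in this diff, and the exact deploy flags depend on how the function was originally deployed.

```bash
# Sketch: re-enable legacy user processing on the deployed orchestrator.
gcloud functions deploy orchestrator-http \
  --region=europe-west1 \
  --update-env-vars ENABLE_LEGACY_USERS=true
```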
@@ -0,0 +1,73 @@
+ #!/bin/bash
+
+ # ==============================================================================
+ # BULLTRACKERS TASK ENGINE END-TO-END TESTER
+ # This script triggers the Orchestrator to plan an immediate execution window.
+ # ==============================================================================
+
+ # --- CONFIGURATION ---
+ FUNCTION_NAME="orchestrator-http"
+ REGION="europe-west1"
+ DATE=$(date +%Y-%m-%d)   # Defaults to today
+ USER_TYPE="normal"       # Options: normal, speculator, popular_investor
+ WINDOWS=1                # 1 window = immediate execution (0s delay)
+
+ # --- 1. FETCH URL DYNAMICALLY ---
+ echo "🔍 Fetching URL for function: $FUNCTION_NAME ($REGION)..."
+
+ # Try the Gen 2 (Cloud Run) URL first
+ URL=$(gcloud functions describe $FUNCTION_NAME --region=$REGION --format='value(serviceConfig.uri)' 2>/dev/null)
+
+ # Fall back to Gen 1 if empty
+ if [ -z "$URL" ]; then
+   URL=$(gcloud functions describe $FUNCTION_NAME --region=$REGION --format='value(httpsTrigger.url)' 2>/dev/null)
+ fi
+
+ if [ -z "$URL" ]; then
+   echo "❌ Error: Could not find URL for function '$FUNCTION_NAME'. Check that it is deployed."
+   exit 1
+ fi
+
+ echo "✅ Target URL: $URL"
+
+ # --- 2. GET AUTH TOKEN ---
+ echo "🔑 Generating Identity Token..."
+ TOKEN=$(gcloud auth print-identity-token)
+
+ if [ -z "$TOKEN" ]; then
+   echo "❌ Error: Could not generate token. Run 'gcloud auth login' first."
+   exit 1
+ fi
+
+ # --- 3. SEND REQUEST ---
+ echo "🚀 Triggering Plan for $USER_TYPE on $DATE ($WINDOWS window)..."
+
+ RESPONSE=$(curl -s -w "\n%{http_code}" -X POST "$URL" \
+   -H "Authorization: Bearer $TOKEN" \
+   -H "Content-Type: application/json" \
+   -d "{
+     \"action\": \"PLAN\",
+     \"userType\": \"$USER_TYPE\",
+     \"date\": \"$DATE\",
+     \"windows\": $WINDOWS
+   }")
+
+ # --- 4. PARSE RESPONSE ---
+ HTTP_BODY=$(echo "$RESPONSE" | head -n -1)
+ HTTP_CODE=$(echo "$RESPONSE" | tail -n 1)
+
+ if [ "$HTTP_CODE" -eq 200 ]; then
+   echo ""
+   echo "✅ SUCCESS (HTTP 200)"
+   echo "---------------------------------------------------"
+   echo "$HTTP_BODY" | python3 -m json.tool 2>/dev/null || echo "$HTTP_BODY"
+   echo "---------------------------------------------------"
+   echo "👉 Monitor 'task-engine-queue' in the Cloud Tasks Console."
+   echo "👉 Check Logs Explorer for 'Orchestrator' and 'Dispatcher'."
+ else
+   echo ""
+   echo "❌ FAILED (HTTP $HTTP_CODE)"
+   echo "---------------------------------------------------"
+   echo "$HTTP_BODY"
+   echo "---------------------------------------------------"
+ fi
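Note the interaction with the `ENABLE_LEGACY_USERS` flag introduced above: with the flag unset, a PLAN for the default `USER_TYPE="normal"` is short-circuited by the Orchestrator and this tester still reports success. A sketch of what to expect, and one way around it:

```bash
# With ENABLE_LEGACY_USERS unset, the tester prints SUCCESS (HTTP 200) but the body
# is roughly:
#   {"status": "skipped", "message": "[Orchestrator] SKIPPING PLAN for 'normal': ENABLE_LEGACY_USERS is false."}
# To exercise a full plan, either set the flag on the function or test a user type
# that is not gated by it, e.g. in the CONFIGURATION block above:
USER_TYPE="popular_investor"
```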
package/index.js CHANGED
@@ -1,6 +1,7 @@
  /**
   * @fileoverview Main entry point for the Bulltrackers shared module.
   * CLEANED: Removed legacy V1 Dispatcher/Computation references.
+  * UPDATED: Re-integrated Dispatcher (Task Throttler).
   */

  // Core utilities
@@ -21,7 +22,6 @@ const { checkDiscoveryNeed, getDiscoveryCandidates, dispatchDiscovery } = requir
  const { getUpdateTargets, dispatchUpdates } = require('./functions/orchestrator/helpers/update_helpers');

  // --- COMPUTATION SYSTEM V2 (The new standard) ---
- // We import the WHOLE package now, which includes handlers AND ManifestBuilder
  const computationSystemV2 = require('./functions/computation-system-v2/index');

  // Task Engine
@@ -30,6 +30,9 @@ const { handleDiscover } = require('./functions
  const { handleVerify } = require('./functions/task-engine/helpers/verify_helpers');
  const { handleUpdate } = require('./functions/task-engine/helpers/update_helpers');

+ // Dispatcher (Task Throttler)
+ const { handleRequest: dispatcherRequest } = require('./functions/dispatcher/index');
+
  const { createApiV2App } = require('./functions/api-v2/index');

  // Maintenance & Backfills
@@ -76,7 +79,9 @@ const orchestrator = {
    dispatchUpdates,
  };

- // [REMOVED] Legacy 'dispatcher' pipe. It is replaced by computationSystemV2.
+ const dispatcher = {
+   handleRequest: dispatcherRequest
+ };

  const taskEngine = {
    handleRequest: taskRequest,
@@ -85,7 +90,7 @@ const taskEngine = {
    handleUpdate,
  };

- const computationSystem = computationSystemV2; // Direct mapping
+ const computationSystem = computationSystemV2;

  const api = {
    createApiV2App,
@@ -119,5 +124,5 @@ const alertSystem = {
  };

  module.exports = {
- pipe: { core, orchestrator, taskEngine, computationSystem, api, maintenance, proxy, alertSystem },
+ pipe: { core, orchestrator, dispatcher, taskEngine, computationSystem, api, maintenance, proxy, alertSystem },
  };
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "bulltrackers-module",
-   "version": "1.0.751",
+   "version": "1.0.753",
    "description": "Helper Functions for Bulltrackers.",
    "main": "index.js",
    "files": [