bulltrackers-module 1.0.53 → 1.0.54

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.MD CHANGED
@@ -1,153 +1,227 @@
- # BullTrackers Module (`bulltrackers-module`)

- **Version:** 1.0.27
+ -----

- ## Overview
-
- This package encapsulates the core logic and shared utilities for the BullTrackers cloud function architecture. It promotes code reuse and separation of concerns by providing abstracted handlers and helper functions for various backend services.
+ # bulltrackers-module

- ## Core Concepts
+ **Version:** 1.0.53

- * **eToro User IDs (CIDs):** Sequential integers assigned upon registration.
-
-   eToro assigns a hidden UID to each user's account so that its backend can identify users without needing to reference a username.
-   This is common across most modern sites, but eToro is slightly unusual in that its UIDs are not randomised; they are sequential.
-   This means that UID 1 is the first account ever made, and UID 1 million is the one-millionth account made.
+ ## Overview

- * **Block Sampling:** Randomly sampling users within 1M ID blocks to ensure representative data across different registration cohorts.
+ This package encapsulates the core backend logic for the BullTrackers platform. It is designed as a modular, stateless library to be used within a Google Cloud Functions environment.

-   Given how UIDs are assigned on eToro, we can group users by UID into 1M block ranges, and from there sample users randomly to produce data across each block.
-   There is a caveat to the current implementation: it is designed to fill a set capacity of users, per user type, per block range.
-   This is statistically biased, because some blocks contain more public users than others.
-   The future resolution is to weight each block's capacity by that block's ratio of public to private users, ensuring representative sampling.
-   For now, this is not implemented.
+ It promotes a clean, testable, and maintainable architecture by enforcing a clear separation of concerns and using a consistent dependency injection pattern. All business logic is exposed via a single, nested `pipe` object.

- * **User Types:** Differentiating between 'normal' users and 'speculators' based on portfolio activity and specific asset holdings.
+ ## Core Concepts

-   This system attempts to categorise user portfolios, and the users themselves, by type and (in the future) sub-type.
-   These are based on the response of the rankings API call, which returns enough detail to determine whether a user is speculative, normal, inactive, or private.
-   In the future the plan is to create sub-categories that state the user's risk group alongside their main category.
-   Private is not a category; it is rather the lack of a category. Users who are private are implicitly removed from the API response, auto-filtering them for the system.
-   Inactive users are a future user type, allowing us to track users who have not logged into the platform recently but still hold positions.
+ * **eToro User IDs (CIDs):** Sequential integers assigned upon registration. This sequential nature allows for "Block Sampling".
+ * **Block Sampling:** The process of randomly sampling users within 1 million ID blocks (e.g., 1M-2M, 2M-3M) to ensure a representative dataset across different user registration cohorts.
+ * **User Types:** The system differentiates between two primary user types:
+   * **`normal`**: Standard users.
+   * **`speculator`**: Users identified by specific portfolio activities, such as holding certain high-risk assets.
+ * **Dependency Injection (DI):** No function in this module initializes its own clients (like Firestore or Pub/Sub) or accesses global configuration. All logic is exposed as stateless functions that receive two arguments: `(config, dependencies)`. This makes the entire module testable and portable.

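The block arithmetic behind these concepts is simple enough to sketch. The helpers below are illustrative only (they are not part of the module's API), assuming sequential CIDs grouped into 1M blocks:

```javascript
// Illustrative helpers (not the module's API): map a sequential CID to
// its 1M block, and draw a random CID from a given block for sampling.
function getBlockIndex(cid) {
  return Math.floor(cid / 1000000);
}

// Label used in the "1M"-style block format
function getBlockLabel(cid) {
  return `${getBlockIndex(cid)}M`;
}

// Pick a random CID inside a block — the core of "Block Sampling"
function sampleCidFromBlock(blockIndex) {
  return blockIndex * 1000000 + Math.floor(Math.random() * 1000000);
}
```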
- * **Data Flow:** An architecture primarily using Google Cloud Pub/Sub and Cloud Functions.
+ ## Architecture: The `pipe` Object

-   The flow of data is partly event-driven and partly scheduler-driven.
-   The system begins by searching for users to fill the capacity of the block counts for normal and speculator users. This is the discovery process.
-   These users are then verified, and if they meet the criteria of a normal or speculator user, their UID is stored.
-   The update task triggers once a day and finds all the users whose data needs updating; this includes the users the discovery process located and then verified.
-   The update task inspects their portfolios and stores the contents.
+ This module exports a single root object named `pipe`. This object organizes all application logic into distinct, self-contained "pipelines," each corresponding to a major microservice or domain.

-   The submitting of the task type and UIDs is done by the Orchestrators, either the update or discovery orchestrator.
-   In discovery mode, tasks are submitted directly to the task engine through a Pub/Sub topic.
-   In update mode, tasks are passed to the dispatcher, which slowly releases them to the task engine — an intentional choice to ensure the task engine scales gradually to handle the incoming requests.
-   The task engine is event-driven, triggered by Pub/Sub messages.
-   The portfolio data is then stored, and once a day the computation system, run on a cron schedule, fetches all the portfolio data, applies computations, and stores the results.
-   The API is then able to read these results.
+ ```javascript
+ // Example of the module's structure
+ const { pipe } = require('bulltrackers-module');
+
+ // You initialize dependencies ONCE in your main function entry point
+ const dependencies = {
+   db: new Firestore(),
+   pubsub: new PubSub(),
+   logger: myLogger,
+   headerManager: new pipe.core.IntelligentHeaderManager(...),
+   proxyManager: new pipe.core.IntelligentProxyManager(...),
+   // ...etc.
+ };
+
+ // You then pass those dependencies into any pipe function
+ async function runDiscovery(config) {
+   await pipe.orchestrator.runDiscoveryOrchestrator(config, dependencies);
+ }
+ ```
+
+ -----

  ## Module Structure

- The module is organized by function, mirroring the cloud function deployment structure.
+ ### `pipe.core`

- ### `/core`
+ Contains the fundamental, reusable classes and utilities that power the other pipes.

- Contains utilities shared across multiple functions within the module.
+ * **`IntelligentHeaderManager`**: A class that manages a pool of browser headers, rotating them based on historical success rates to avoid API blocks. Performance is tracked in Firestore.
+ * **`IntelligentProxyManager`**: A class that manages a pool of Google Apps Script proxy URLs. It selects available proxies, handles request fetching, and locks proxies that fail (e.g., due to quota errors).
+ * **`FirestoreBatchManager`**: A utility class used by the Task Engine to manage stateful, asynchronous batch writes to Firestore. It handles sharding portfolio data, batching timestamp updates, and flushing processed speculator IDs.
+ * **`firestoreUtils`**: A collection of stateless helper functions for common Firestore operations, such as resetting proxy locks, getting block capacities, fetching exclusion IDs, and querying users who need updates.
+ * **`pubsubUtils`**: A collection of stateless helper functions for publishing messages to Pub/Sub in batches.

- * **`/utils`:**
-   * `firestore_utils.js`: Helpers for common Firestore operations (batching, querying blocks, resetting proxies, getting IDs).
-   * `pubsub_utils.js`: Helpers for publishing messages to Pub/Sub topics in batches.
-   * `intelligent_header_manager.js`: Manages a pool of browser headers, selecting them based on past success rates to mimic real user traffic. Stores performance data in Firestore.
-   * `intelligent_proxy_manager.js`: Manages a pool of Google Apps Script proxies, selecting available ones and handling failures/locking.
+ ### `pipe.orchestrator`

- ### `/orchestrator`
+ The "brain" of the system. This pipe is responsible for deciding *what* work needs to be done, but not *how* to do it. It's typically run on a schedule (e.g., via Cloud Scheduler).

- Handles the triggering and coordination of data fetching tasks (Discovery and Update).
+ * **`runDiscoveryOrchestrator(config, dependencies)`**: The main entry point for the discovery process. It checks which user blocks are under-capacity, gets candidate CIDs to scan, and dispatches `discover` tasks.
+ * **`runUpdateOrchestrator(config, dependencies)`**: The main entry point for the daily update process. It queries Firestore for all verified users who haven't been updated recently and dispatches `update` tasks.
+ * **Discovery Helpers** (`checkDiscoveryNeed`, `getDiscoveryCandidates`, `dispatchDiscovery`): Sub-pipes that handle the logic for finding underpopulated blocks, generating new CIDs (while respecting exclusion lists), and publishing discovery tasks.
+ * **Update Helpers** (`getUpdateTargets`, `dispatchUpdates`): Sub-pipes that find all `normal` and `speculator` users eligible for an update and publish tasks to the `Dispatcher`.

- * **`index.js`:** Exports factory functions (`createDiscoveryOrchestrator`, `createUpdateOrchestrator`).
- * **`/helpers`:**
-   * `discovery_helpers.js`: Logic for checking if discovery is needed, finding candidate users (prioritizing known users, then random generation with exclusions), managing pending lists, and dispatching tasks.
-   * `update_helpers.js`: Logic for identifying users needing updates based on timestamps and dispatching update tasks.
+ ### `pipe.dispatcher`

- ### `/dispatcher`
+ The "throttle" of the system. Its sole purpose is to receive a large number of tasks (typically from the `orchestrator`) and slowly release them in small batches to a Pub/Sub topic. This prevents the `taskEngine` from scaling too quickly and incurring high costs.

- Acts as an intermediary buffer, receiving tasks from the Orchestrator and slowly releasing them to the Task Engine to prevent overwhelming it.
+ * **`handleRequest(message, context, config, dependencies)`**: The main entry point, designed to be triggered by a Pub/Sub message containing an array of `tasks`.
+ * **`dispatchTasksInBatches(tasks, dependencies, config)`**: The core logic that loops through tasks, publishing them in batches with a configurable delay.

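The throttling behaviour described for `pipe.dispatcher` can be sketched as a plain batching loop. This is an illustration of the pattern only, not the module's actual implementation; the function name and defaults are assumptions:

```javascript
// Sketch of the dispatcher's throttle: publish tasks in fixed-size
// batches with a delay between batches (names/defaults are assumptions).
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function dispatchInBatches(tasks, publishBatch, { batchSize = 500, delayMs = 30000 } = {}) {
  const batches = [];
  for (let i = 0; i < tasks.length; i += batchSize) {
    batches.push(tasks.slice(i, i + batchSize));
  }
  for (let i = 0; i < batches.length; i++) {
    await publishBatch(batches[i]);
    // Delay between batches, but not after the last one
    if (i < batches.length - 1) await sleep(delayMs);
  }
  return batches.length;
}
```

The delay between batches is what lets the downstream consumer scale gradually instead of receiving the full task list at once.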
- * **`index.js`:** Exports the factory function `createDispatcherHandler`.
- * **`/helpers/dispatch_helpers.js`:** Contains the core logic for batching and delaying task publishing.
+ ### `pipe.taskEngine`

- ### `/task-engine`
+ The "factory" or "workhorse" of the system. This pipe is event-driven, triggered by individual Pub/Sub messages from the `orchestrator` or `dispatcher`. It executes the core data-fetching operations.

- The core processing unit. Receives tasks (discover, verify, update) for different user types (normal, speculator) and executes the corresponding logic using eToro APIs via the proxy manager.
+ * **`handleRequest(message, context, config, dependencies)`**: The main entry point. It parses the Pub/Sub message, identifies the task `type`, and routes it to the correct sub-pipe handler.
+ * **`handleDiscover(task, taskId, dependencies, config)`**: Fetches public ranking data for a batch of CIDs. It filters out private/inactive users, applies heuristics (for speculators), and chains to the `verify` task for any promising candidates.
+ * **`handleVerify(task, taskId, dependencies, config)`**: Fetches the portfolio of a *single* user. It verifies they meet the criteria (e.g., holds a speculator asset) and, if so, saves their data and increments the block count.
+ * **`handleUpdate(task, taskId, dependencies, config)`**: Fetches the latest portfolio for an *existing, verified* user and stores the new data in a sharded Firestore document.

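The parse-and-route step that `handleRequest` performs can be illustrated with a minimal sketch (the `decodeTask`/`routeTask` helpers here are assumptions for illustration; the real handler also receives `config` and `dependencies`):

```javascript
// Minimal sketch of Pub/Sub task routing (helper names are assumptions).
function decodeTask(message) {
  // Cloud Functions delivers the Pub/Sub payload base64-encoded in message.data
  const json = Buffer.from(message.data, 'base64').toString('utf8');
  return JSON.parse(json);
}

function routeTask(task, handlers) {
  const handler = handlers[task.type];
  if (!handler) throw new Error(`Unknown task type: ${task.type}`);
  return handler(task);
}
```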
- * **`index.js`:** Exports the factory function `createTaskEngineHandler`.
- * **`handler_creator.js`:** Sets up the handler, initializes clients (Firestore, PubSub, managers), and defines the main processing flow.
- * **`/helpers`:**
-   * `discover_helpers.js`: Handles fetching user ranking/activity data, filtering private/inactive users, applying heuristics (especially for speculators), and chaining to the 'verify' task. Also handles logging invalid speculator IDs.
-   * `verify_helpers.js`: Handles fetching portfolio details, checking for specific assets (for speculators) or any positions (for normal users), storing bronze state, updating block counts, and storing verified users.
-   * `update_helpers.js`: Handles fetching the latest portfolio or position data for already verified users, handling private users, and storing the updated data.
- * **`/utils`:**
-   * `firestore_batch_manager.js`: Manages batch writes to Firestore for portfolios (sharded), timestamps, and removing processed speculators from the pending list.
- ---
+ ### `pipe.computationSystem`

- ### `/computation-system`
+ The "refinery" of the system. This pipe is run on a schedule *after* data collection. It loads all the raw portfolio data, runs a suite of calculations (e.g., risk scores, profit migrations), and saves the results in a separate "unified insights" collection for the API to read.

- Handles the orchestration and execution of the **unified computation system**. This system runs scheduled calculations (both daily and historical comparisons) using calculation logic imported from the `aiden-shared-calculations-unified` package. It identifies date ranges needing processing or backfilling, loads the necessary portfolio data, runs the calculations, and stores the results.
+ * **`runOrchestration(config, dependencies)`**: The main entry point. It identifies which dates need processing (or backfilling), loads all data for those dates, initializes all calculation classes, and streams the data through them.
+ * **`dataLoader`**: A set of helpers for loading the sharded portfolio data from Firestore efficiently.
+ * **`computationUtils`**: A set of helpers for categorizing calculations (historical vs. daily), committing results in chunks, and finding the earliest available data date.

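As a rough illustration of the backfill decision (`datesNeedingProcessing` is a hypothetical helper, not part of `computationUtils`), identifying which dates still need computing might look like:

```javascript
// Hypothetical sketch: enumerate UTC dates from the earliest available
// data date up to today, and keep those not yet computed.
function datesNeedingProcessing(earliest, today, computed) {
  const done = new Set(computed);
  const missing = [];
  for (let d = new Date(`${earliest}T00:00:00Z`); ; d.setUTCDate(d.getUTCDate() + 1)) {
    const day = d.toISOString().slice(0, 10);
    if (day > today) break; // ISO date strings compare chronologically
    if (!done.has(day)) missing.push(day);
  }
  return missing;
}
```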
- * **`index.js`:** Exports the factory function `createComputationSystemHandler` and the main orchestration logic `runComputationOrchestrator`.
- * **`handler_creator.js`:** Creates the HTTP-triggered Cloud Function handler that initiates the computation orchestration.
- * **`/helpers/orchestration_helpers.js`:** Contains the core logic for determining dates to process, identifying missing computations, loading data (using `data_loader.js`), initializing calculators from the unified package, processing data across dates, and committing results to Firestore.
- * **`/utils`:**
-   * `data_loader.js`: Functions specifically for loading portfolio data in parts/shards from Firestore for the computation system.
-   * `utils.js`: Helper functions for categorizing calculations (historical vs. daily), managing Firestore batches, handling date ranges, parallel processing, and finding the earliest data dates.
+ ### `pipe.api`

+ The "tap" that serves the processed data. This pipe is an Express.js application designed to be run as a single Cloud Function, providing a public-facing API for the frontend.

- ### `/generic-api`
+ * **`createApiApp(config, dependencies, unifiedCalculations)`**: The main entry point. It returns a fully configured Express app instance, complete with middleware (CORS, JSON) and all API routes.
+ * **API Helpers** (`buildCalculationMap`, `validateRequest`, `fetchUnifiedData`, etc.): Sub-pipes that handle request validation, fetching data from the "unified insights" collection, and generating debug manifests.

- Provides a flexible, cached API endpoint (intended for frontend use) to query computed data stored in Firestore.
+ ### `pipe.maintenance`

- * **`index.js`:** Exports the factory function `createApiApp` to set up the Express app.
- * **`/helpers/api_helpers.js`:** Contains logic for request validation, date range generation, data fetching from the unified insights collection, and building the calculation map.
+ A collection of standalone utility and cleanup functions, typically run on a schedule.

- ### `/invalid-speculator-handler`
+ * **`runSpeculatorCleanup(config, dependencies)`**: De-registers stale speculators who haven't held a qualifying asset for a set grace period, and decrements the block counts.
+ * **`handleInvalidSpeculator(message, context, config, dependencies)`**: An event-driven function that listens for Pub/Sub messages containing CIDs of users found to be private or invalid, adding them to an exclusion list.
+ * **`runFetchInsights(config, dependencies)`**: A scheduled function to fetch general eToro market insights (e.g., buy/sell percentages).
+ * **`runFetchPrices(config, dependencies)`**: A scheduled function to fetch daily closing prices for all instruments.

- Listens to a Pub/Sub topic for messages containing IDs of users identified as private or inactive speculators and stores them in Firestore for future exclusion during discovery.
+ ### `pipe.proxy`

- * **`index.js`:** Exports helpers.
- * **`/helpers/handler_helpers.js`:** Core logic for finding/creating a Firestore document and adding the invalid IDs, managing document size limits.
+ The logic for the Google Apps Script web app that acts as the secure proxy layer. This code is *not* run in the Node.js environment; it's intended to be deployed directly into an Apps Script project.

- ### `/speculator-cleanup-orchestrator`
+ * **`handlePost(e)`**: The main `doPost(e)` function for the Apps Script. It parses the request from the `IntelligentProxyManager`, executes it using `UrlFetchApp`, and returns the response.

- A scheduled function to remove stale speculators from the active blocks based on inactivity and clean up old entries from the pending speculator list.
+ -----

- * **`index.js`:** Exports the factory function `createSpeculatorCleanupHandler`.
- * **`/helpers/cleanup_helpers.js`:** Contains the core logic for querying blocks/pending lists based on grace periods and batching deletions/count updates.
+ ## Data Flow

- ### `/fetch-insights`
+ The system operates in four distinct, decoupled stages:

- Scheduled function to fetch daily instrument ownership insights (buy/sell percentages, total owners) from the eToro API.
+ 1. **Stage 1: Discovery (Scheduled)**

- * **`index.js`:** Exports the factory function `createFetchInsightsHandler`.
- * **`/helpers/handler_helpers.js`:** Core logic for making the API call (via proxy manager) and storing the results in Firestore.
+    * `pipe.orchestrator.runDiscoveryOrchestrator` runs.
+    * It determines it needs 5,000 new `normal` users.
+    * It creates 50 `discover` tasks (100 CIDs each) and publishes them to the `task-engine` topic.
+    * `pipe.taskEngine` functions spin up, and `handleDiscover` finds 200 promising users.
+    * It publishes 200 `verify` tasks back to the `task-engine` topic.
+    * `pipe.taskEngine` functions `handleVerify` these users, and 75 are saved to the database.

- ### `/etoro-price-fetcher`
+ 2. **Stage 2: Update (Scheduled)**

- Scheduled function to fetch daily closing prices for instruments from the eToro API.
+    * `pipe.orchestrator.runUpdateOrchestrator` runs.
+    * It finds 50,000 existing users that need an update.
+    * It publishes *one* message containing all 50,000 tasks to the `dispatcher` topic.
+    * `pipe.dispatcher.handleRequest` receives the message.
+    * It loops, publishing 500 tasks at a time to the `task-engine` topic, with a 30-second delay between batches.
+    * `pipe.taskEngine` functions `handleUpdate` the portfolios, scaling gradually.

- * **`index.js`:** Exports the factory function `createPriceFetcherHandler`.
- * **`/helpers/handler_helpers.js`:** Core logic for making the API call (via proxy manager) and batch-writing price updates to Firestore.
+ 3. **Stage 3: Computation (Scheduled)**

- ### `/appscript-api`
+    * `pipe.computationSystem.runOrchestration` runs (e.g., at 1:00 AM).
+    * It loads all portfolio data collected in Stages 1 & 2.
+    * It runs all calculations and saves the results to the `unified_insights` collection.

- Contains the helper logic intended to be deployed as a Google Apps Script web app, acting as the HTTP proxy layer.
+ 4. **Stage 4: Serving (On-Demand)**

- * **`index.js`:** Exports the `handlePost` function.
- * **`/helpers/errors.js`:** Utility for creating standardized error responses.
+    * A frontend user loads a chart.
+    * The browser sends a request to the `pipe.api` function.
+    * The API validates the request, reads the *pre-computed* data from `unified_insights`, and returns it.

- ## Setup & Usage
+ ## Usage

- ```javascript
- // Example
- const { Orchestrator } = require('bulltrackers-module');
- const cfg = require('./config/orchestrator_config'); // Load specific config
+ In your Google Cloud Function `index.js` file:

- exports.discoveryOrchestrator = Orchestrator.createDiscoveryOrchestrator(cfg);
+ ```javascript
+ const { initializeApp } = require('firebase-admin/app');
+ const { getFirestore } = require('firebase-admin/firestore');
+ const { PubSub } = require('@google-cloud/pubsub');
+ const { pipe } = require('bulltrackers-module');
+ const { createLogger } = require('sharedsetup'); // (Or your logger)
+
+ // --- 1. Initialize Clients ---
+ initializeApp();
+ const db = getFirestore();
+ const pubsub = new PubSub();
+ const logger = createLogger();
+
+ // --- 2. Load Function-Specific Config ---
+ const orchestratorConfig = require('./config/orchestrator_config.js');
+ const taskEngineConfig = require('./config/task_engine_config.js');
+
+ // --- 3. Initialize Core Dependencies ---
+ // The header manager is created first so the batch manager can reference it
+ const headerManager = new pipe.core.IntelligentHeaderManager(
+   db,
+   logger,
+   taskEngineConfig.headerManager
+ );
+
+ // These are shared across all functions that need them
+ const dependencies = {
+   db,
+   pubsub,
+   logger,
+   headerManager,
+   proxyManager: new pipe.core.IntelligentProxyManager(
+     db,
+     logger,
+     taskEngineConfig.proxyManager
+   ),
+   // ... add batchManager if needed for task-engine
+   batchManager: new pipe.core.FirestoreBatchManager(
+     db,
+     // The batch manager needs the header manager to flush performance
+     headerManager,
+     logger,
+     taskEngineConfig.batchManager
+   ),
+   // ... add core utils
+   firestoreUtils: pipe.core.firestoreUtils,
+   pubsubUtils: pipe.core.pubsubUtils
+ };
+
+ // --- 4. Export Cloud Functions ---
+
+ // Example: Discovery Orchestrator (HTTP Trigger)
+ exports.discoveryOrchestrator = async (req, res) => {
+   try {
+     await pipe.orchestrator.runDiscoveryOrchestrator(
+       orchestratorConfig,
+       dependencies
+     );
+     res.status(200).send('Discovery orchestration complete.');
+   } catch (error) {
+     logger.log('ERROR', 'Discovery failed', { error: error.message });
+     res.status(500).send('Error');
+   }
+ };
+
+ // Example: Task Engine (Pub/Sub Trigger)
+ exports.taskEngine = async (message, context) => {
+   await pipe.taskEngine.handleRequest(
+     message,
+     context,
+     taskEngineConfig,
+     dependencies
+   );
+ };
+ ```
@@ -42,16 +42,25 @@ async function handleUpdate(task, taskId, dependencies, config) {
  // Use batchManager from dependencies
  batchManager.deleteFromTimestampBatch(userId, userType, instrumentId);

- const blockId = `${Math.floor(parseInt(userId) / 1000000)}M`;
+ // <<< START FIX for private user blockId format >>>
+ // The blockId format for Speculator Blocks is numeric (e.g., "1000000"),
+ // while the blockId for Normal Blocks is a string label (e.g., "1M").
+ let blockId;
+ let incrementField;
+
+ if (userType === 'speculator') {
+   blockId = String(Math.floor(parseInt(userId) / 1000000) * 1000000);
+   incrementField = `counts.${instrumentId}_${blockId}`;
+ } else {
+   blockId = `${Math.floor(parseInt(userId) / 1000000)}M`;
+   incrementField = `counts.${blockId}`;
+ }

  // Use db from dependencies
  const blockCountsRef = db.doc(userType === 'speculator'
    ? config.FIRESTORE_DOC_SPECULATOR_BLOCK_COUNTS
    : config.FIRESTORE_DOC_BLOCK_COUNTS);
-
- const incrementField = userType === 'speculator'
-   ? `counts.${instrumentId}_${blockId}`
-   : `counts.${blockId}`;
+ // <<< END FIX for private user blockId format >>>

  await blockCountsRef.set({ [incrementField]: FieldValue.increment(-1) }, { merge: true });

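For a concrete sense of the two formats this fix distinguishes, here is the same arithmetic as standalone functions (a worked sketch, not code from the package):

```javascript
// The two blockId formats: "1M"-style labels for Normal Blocks, and
// plain numeric strings for Speculator Blocks.
function normalBlockId(userId) {
  return `${Math.floor(parseInt(userId) / 1000000)}M`;
}

function speculatorBlockId(userId) {
  return String(Math.floor(parseInt(userId) / 1000000) * 1000000);
}

// e.g. userId "3456789" falls in normal block "3M" / speculator block "3000000"
```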
@@ -65,15 +74,48 @@ async function handleUpdate(task, taskId, dependencies, config) {
  const portfolioData = JSON.parse(responseBody);

  const today = new Date().toISOString().slice(0, 10);
- const blockId = `${Math.floor(parseInt(userId) / 1000000)}M`;
+
+ // <<< START FULL CODE FIX >>>

- // Use batchManager from dependencies
- await batchManager.addToPortfolioBatch(userId, blockId, today, portfolioData, userType, instrumentId);
+ // BUG 1: The blockId format is different for portfolio storage ('1M')
+ // vs. orchestrator logic ('1000000').
+ const portfolioStorageBlockId = `${Math.floor(parseInt(userId) / 1000000)}M`;
+
+ // This call is correct for storing portfolio data (which uses the 'M' format)
+ await batchManager.addToPortfolioBatch(userId, portfolioStorageBlockId, today, portfolioData, userType, instrumentId);
+
+ // This call is correct for 'normal' users, but for 'speculator' users
+ // it writes to a timestamp doc that the orchestrator *does not read*.
+ // We still call it to update 'normal' users correctly.
  await batchManager.updateUserTimestamp(userId, userType, instrumentId);

+ // BUG 2 (The Main Problem): We must *also* update the 'SpeculatorBlocks'
+ // collection, which is what the orchestrator *does* read.
+ if (userType === 'speculator') {
+   const orchestratorBlockId = String(Math.floor(parseInt(userId) / 1000000) * 1000000);
+
+   logger.log('INFO', `[UPDATE] Applying speculator timestamp fix for user ${userId} in block ${orchestratorBlockId}`);
+
+   // Create a new, separate batch to commit this timestamp *immediately*.
+   // This mirrors the logic in 'verify_helpers.js'.
+   const fixBatch = db.batch();
+   const speculatorBlockRef = db.collection(config.FIRESTORE_COLLECTION_SPECULATOR_BLOCKS).doc(orchestratorBlockId);
+
+   // These are the two fields the orchestrator and cleanup functions read
+   const updates = {
+     [`users.${userId}.lastVerified`]: new Date(),
+     [`users.${userId}.lastHeldSpeculatorAsset`]: new Date()
+   };
+
+   fixBatch.set(speculatorBlockRef, updates, { merge: true });
+   await fixBatch.commit();
+   logger.log('INFO', `[UPDATE] Speculator timestamp fix for user ${userId} committed.`);
+ }
+ // <<< END FULL CODE FIX >>>
+
  } finally {
    if (selectedHeader) headerManager.updatePerformance(selectedHeader.id, wasSuccess);
  }
  }

- module.exports = { handleUpdate };
+ module.exports = { handleUpdate };
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "bulltrackers-module",
-   "version": "1.0.53",
+   "version": "1.0.54",
    "description": "Helper Functions for Bulltrackers.",
    "main": "index.js",
    "files": [