bulltrackers-module 1.0.155 → 1.0.157

package/README.MD CHANGED
@@ -1,223 +1,179 @@
+ [ MAIN README ]
 
- -----
+ Project Overview
 
- # bulltrackers-module
+ Welcome to the Bulltrackers Project
 
- **Version:** 1.0.53
+ This project is a wide-ranging ingestion pipeline for the analysis of an effectively unbounded set of retail portfolios, focusing specifically on public eToro accounts. The project covers the ingestion of eToro metrics across user portfolios, social data, asset pricing and trading data; builds in-depth computations on that data; and then iteratively produces further computations from the results of prior ones. All of this is then served through a customised, dynamically generated, type-safe frontend integration.
 
- ## Overview
+ The objective is simple: transform what is possible for the everyday Joe by giving them the same level of access hedge funds get, without the millions of dollars in upfront costs. For too long have hedge funds held the upper hand, with pipelines connected directly to brokerages, where every trade you make is immediately sent to a dozen firms trading against you, feeding their decision-making algorithms.
 
- This package encapsulates the core backend logic for the BullTrackers platform. It is designed as a modular, stateless library to be used within a Google Cloud Functions environment.
+ This project turns the tables, gives you the upper hand, and hands you the data they pay millions for.
 
- It promotes a clean, testable, and maintainable architecture by enforcing a clear separation of concerns and using a consistent dependency injection pattern. All business logic is exposed via a single, nested `pipe` object.
+ What if you could build a system to monitor exactly what the best - and worst - are doing? Copy their trades, or inverse them.
+ This is just one possibility this project offers, out of an effectively infinite range.
+ If you can think of it, you can build it.
 
- ## Core Concepts
 
- * **eToro User IDs (CIDs):** Sequential integers assigned upon registration. This sequential nature allows for "Block Sampling".
- * **Block Sampling:** The process of randomly sampling users within 1 million ID blocks (e.g., 1M-2M, 2M-3M) to ensure a representative dataset across different user registration cohorts.
- * **User Types:** The system differentiates between two primary user types:
- * **`normal`**: Standard users.
- * **`speculator`**: Users identified by specific portfolio activities, such as holding certain high-risk assets.
- * **Dependency Injection (DI):** No function in this module initializes its own clients (like Firestore or Pub/Sub) or accesses global configuration. All logic is exposed as stateless functions that receive two arguments: `(config, dependencies)`. This makes the entire module testable and portable.
+ [ Architecture ]
 
- ## Architecture: The `pipe` Object
+ This project consists of three primary backend packages.
 
- This module exports a single root object named `pipe`. This object organizes all application logic into distinct, self-contained "pipelines," each corresponding to a major microservice or domain.
+ Bulltrackers - The core package, consisting of all the cloud functions used in the project, built on a heavily customised, efficient and robust pipeline for every piece of functionality. It is designed so that costs are aggressively minimised, processing is fast, and creating and deploying an additional cloud function is as simple as adding a new file, plugging it into the pipeline and setting up its config.
 
- ```javascript
- // Example of the module's structure
- const { pipe } = require('bulltrackers-module');
+ Calculations - The companion package for the computations module of the Bulltrackers package. It provides a way to simply drop a new computation into its relevant category, or introduce a new category, without any additional code changes. Create a computation, list any dependencies, and the computation system handles the rest.
 
- // You initialize dependencies ONCE in your main function entry point
- const dependencies = {
- db: new Firestore(),
- pubsub: new PubSub(),
- logger: myLogger,
- headerManager: new pipe.core.IntelligentHeaderManager(...),
- proxyManager: new pipe.core.IntelligentProxyManager(...),
- // ...etc.
- };
+ web-components - The frontend-as-a-backend integration. Rather than building a complex frontend that must be modified every time we introduce a new computation, this system produces a fully automated, type-safe schema for new computations directly from the computation system. We simply write a new computation and give it a schema; once it has run, web-components pulls the new schema in. The result is that we avoid TypeScript fiddling entirely: we pass a chart component the data to render, set the position and the page path, and the job is done.
 
- // You then pass those dependencies into any pipe function
- async function runDiscovery(config) {
- await pipe.orchestrator.runDiscoveryOrchestrator(config, dependencies);
- }
- ```
+ [ Things you MUST understand before attempting to work with this project ]
+
+ 1.
+ This is NOT a project you can simply "npm install"; in fact, doing so is a very bad idea. You will rapidly find that it won't work: I have not provided the configurations for API routes or the core parameters for its inner workings, and much of the logic will not make sense to you, regardless of your development experience. This is intentional. I will never provide the routes for finding the data processed here; it would be irresponsible of me. Furthermore, if you attempt to find the configuration values yourself, through whatever means, you are likely to violate your own country's laws. Scraping can be - and has been - treated as a criminal act in some jurisdictions, and you MUST seek permission from eToro and, depending on your objectives, from the targeted user(s). Without this permission, I heavily caution against any interaction with this project, and warn that you risk both civil and criminal charges that can ruin your life.
 
- -----
+ Do not take this risk.
 
- ## Module Structure
+ 2.
+ This project revolves around a few key facts about the eToro system: how it works, and the fundamental inner workings of what is perhaps a rather unique brokerage. Indeed, were eToro built differently, this project would not be possible. Beyond eToro being my primary brokerage, this is a key reason for choosing it as the data provider: its architectural choices and website configuration make this project possible. Not easy, but possible.
 
- ### `pipe.core`
+ One of these key facts is what happens when a user creates their account. Unlike most sites, eToro does not assign randomised IDs upon account creation; IDs are simple integers ordered by account creation date. Account ID 1 is the first account, account ID 35M is the 35-millionth account, and so on. This is a key fact to understand when reviewing the task engine and orchestrators, and it is what makes "block sampling" possible, as sketched below.
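
To make block sampling concrete, here is a minimal, hypothetical sketch of generating candidate CIDs from one 1M-ID registration block; `sampleBlock` and `exclusionSet` are illustrative names, not the project's actual helpers:

```js
// Hypothetical sketch of block sampling over sequential eToro CIDs.
// Because IDs are assigned in registration order, each 1M block is a cohort.
const BLOCK_SIZE = 1_000_000;

function sampleBlock(blockIndex, count, exclusionSet = new Set()) {
  const start = blockIndex * BLOCK_SIZE; // block 35 covers CIDs 35,000,000-35,999,999
  const candidates = new Set();
  while (candidates.size < count) {
    const cid = start + Math.floor(Math.random() * BLOCK_SIZE);
    if (!exclusionSet.has(cid)) candidates.add(cid); // skip known private/invalid users
  }
  return [...candidates];
}

// Five candidate CIDs from the 35M-36M registration cohort:
console.log(sampleBlock(35, 5));
```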
 
- Contains the fundamental, reusable classes and utilities that power the other pipes.
+ Another fact is how eToro processes requests for user data. It is not possible to fetch most of a private account's data (there are some exceptions). For the most part, when using an API route that allows user batching - submitting multiple user IDs in one request - if any of the users passed to that API are private, eToro returns nothing for that user: not null, but literally nothing; the user is simply absent from the response. This is a very useful fact. As you will find later, I use this built-in filter to my advantage, and it saves a significant amount of wall-time in processing. It also forms a mathematically powerful positive feedback loop: any user we discover to be private is marked as such, ensuring we never try to process them again. A sketch of that filter follows.
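
A hypothetical sketch of the filter (field names such as `cid` are assumptions for illustration, not eToro's actual response schema):

```js
// Users missing from a batched response are presumed private and fed into the
// exclusion list, so they are never requested again.
function splitBatchResponse(requestedCids, responseUsers) {
  const returned = new Set(responseUsers.map(u => u.cid)); // 'cid' is an assumed field name
  return {
    publicUsers: responseUsers,
    privateCids: requestedCids.filter(cid => !returned.has(cid)), // destined for the exclusion list
  };
}
```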
 
- * **`IntelligentHeaderManager`**: A class that manages a pool of browser headers, rotating them based on historical success rates to avoid API blocks. Performance is tracked in Firestore.
- * **`IntelligentProxyManager`**: A class that manages a pool of Google Apps Script proxy URLs. It selects available proxies, handles request fetching, and locks proxies that fail (e.g., due to quota errors).
- * **`FirestoreBatchManager`**: A utility class used by the Task Engine to manage stateful, asynchronous batch writes to Firestore. It handles sharding portfolio data, batching timestamp updates, and flushing processed speculator IDs.
- * **`firestoreUtils`**: A collection of stateless helper functions for common Firestore operations, such as resetting proxy locks, getting block capacities, fetching exclusion IDs, and querying users who need updates.
- * **`pubsubUtils`**: A collection of stateless helper functions for publishing messages to Pub/Sub in batches.
+ 3.
+ It is not possible to discover how much an account is worth. This was briefly possible in early January 2025, through what was a serious oversight in an update eToro performed, which exposed a parameter called LotCount. This parameter showed the number of shares each public user owned for a given asset, and by extrapolation you could infer portfolio values. The developer of this project was solely responsible for uncovering this bug and reporting it, and as a result holds considerable favour within the eToro public - and staff - community. What is possible, however, is discovering whether a user is of Bronze tier or not: while all other tier information is private, eToro intentionally exposes whether a given public account is Bronze tier, meaning a portfolio value under 5k USD. This metric is used in some computations to highlight the difference - or lack thereof - between Bronze and non-Bronze portfolios. To repeat: this is a public metric; I access no user data beyond what is publicly available, and other tier information is not accessible. It has been clarified with eToro that exposing whether a user is of Bronze tier is indeed the intended mechanism.
 
- ### `pipe.orchestrator`
+ 4.
+ Because of the above restriction - the exact dollar value of a single position cannot be identified - no computations revolve around USD values, but a fair number revolve around valuing positions based on their P&L. The main portfolio API used in the task engine returns ONLY the top-level positioning of each user's portfolio: for example, it will show an exposure of 9% invested in NVDA, but not that this is nine NVDA positions of 1% each. We cannot see the individual position breakdown. It is possible to get this through another endpoint, but that requires iterating over every single position, one at a time; this is not worthwhile for normal user task types, but is used for speculator task types, where we know which position is worth inspecting. Of note, the main portfolio API does not return details like leverage or entry point; it simply returns averages for the whole position. The individual position API used by the speculator task type returns this more granular data. On the flip side, the history API, used for all task types, returns closed trade values over a YTD period, and each day's snapshot can be used to compute individual position changes. This concept is covered in more depth in the task engine documentation, and the two data shapes are illustrated below.
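
As a rough illustration of the two shapes described above (all field names are assumptions for illustration, not eToro's actual schema):

```js
// Top-level entry from the batched portfolio API: one aggregated row per instrument.
const topLevelPosition = { instrumentId: 1001, exposurePct: 9.0, avgNetProfitPct: 12.3 };

// Entry from the per-position endpoint (speculator task types only): one row per open
// trade, including the granular fields the aggregated API omits.
const granularPosition = {
  positionId: 987654,
  instrumentId: 1001,
  leverage: 2,
  openRate: 121.9,
  openDateTime: '2025-03-14T09:30:00Z',
};
```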
 
- The "brain" of the system. This pipe is responsible for deciding *what* work needs to be done, but not *how* to do it. It's typically run on a schedule (e.g., via Cloud Scheduler).
+ [ Conventions ]
 
- * **`runDiscoveryOrchestrator(config, dependencies)`**: The main entry point for the discovery process. It checks which user blocks are under-capacity, gets candidate CIDs to scan, and dispatches `discover` tasks.
- * **`runUpdateOrchestrator(config, dependencies)`**: The main entry point for the daily update process. It queries Firestore for all verified users who haven't been updated recently and dispatches `update` tasks.
- * **Discovery Helpers** (`checkDiscoveryNeed`, `getDiscoveryCandidates`, `dispatchDiscovery`): Sub-pipes that handle the logic for finding underpopulated blocks, generating new CIDs (while respecting exclusion lists), and publishing discovery tasks.
- * **Update Helpers** (`getUpdateTargets`, `dispatchUpdates`): Sub-pipes that find all `normal` and `speculator` users eligible for an update and publish tasks to the `Dispatcher`.
+ 1.
+ The Pipe architecture used in the core bulltrackers package is a MUST. It is a fundamental feature of the wider system: it enables dependency injection, making every cloud function reusable, testable and standalone. A single cloud function can break - which is inevitable - and the wider system is unaffected. All cloud functions must use dependency injection and never define their own require statements within their packages; we simply pass their dependencies down. A minimal sketch of the pattern follows.
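
A minimal sketch of the convention, using a hypothetical pipe function; the function owns no clients, so a test can inject stubs in place of real Firestore and logger instances:

```js
// Every pipe function receives (config, dependencies) and creates nothing itself.
async function countRecentUsers(config, { db, logger }) {
  const snap = await db.collection(config.collection).limit(config.limit).get();
  logger.log('INFO', `Fetched ${snap.size} docs`);
  return snap.size;
}

// In a test, the same function runs against fakes with zero changes:
const fakeDeps = {
  db: { collection: () => ({ limit: () => ({ get: async () => ({ size: 2 }) }) }) },
  logger: { log: () => {} },
};
countRecentUsers({ collection: 'users', limit: 10 }, fakeDeps).then(console.log); // 2
```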
 
- ### `pipe.dispatcher`
+ 2.
+ As with all professional systems, DRY applies. Wherever it is feasible, sensible and logical to do so, cloud function functionality should be abstracted into helper modules or - if used across multiple cloud functions - moved into the core pipe, which is passed to all cloud functions.
 
- The "throttle" of the system. Its sole purpose is to receive a large number of tasks (typically from the `orchestrator`) and slowly release them in small batches to a Pub/Sub topic. This prevents the `taskEngine` from scaling too quickly and incurring high costs.
+ 3.
+ Firestore usage is huge within this project, and it is therefore both a primary cost driver and a significant driver of wall-time. It is an absolute rule to never design Firestore queries, reads, writes or deletes in an inefficient way. Great lengths should be taken to reduce, simplify and redesign systems that use Firestore inefficiently; this is obligatory for any cost-efficient system. One typical pattern is sketched below.
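
One typical pattern, sketched under assumed collection names: replacing N sequential document reads with a single batched `getAll` round-trip:

```js
// Inefficient: N awaited round-trips, one per user document.
// Efficient: one batched RPC fetching every document at once.
async function readUsersBatched(db, cids) {
  const refs  = cids.map(cid => db.collection('users').doc(String(cid)));
  const snaps = await db.getAll(...refs); // single round-trip
  return snaps.filter(s => s.exists).map(s => s.data());
}
```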
 
- * **`handleRequest(message, context, config, dependencies)`**: The main entry point, designed to be triggered by a Pub/Sub message containing an array of `tasks`.
- * **`dispatchTasksInBatches(tasks, dependencies, config)`**: The core logic that loops through tasks, publishing them in batches with a configurable delay.
+ 4.
+ Computations in the computation system are automatically forced into a strict naming convention: underscores are removed and replaced with hyphens. This is necessary to ensure we do not accidentally name new computations in a way the system does not expect and trigger a string of errors. The pass architecture of the computation system naturally produces a whole array of errors when a single computation fails (its dependents cannot proceed), which can take significant time to debug, so resolving common mistakes early is key. An illustrative normaliser follows.
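
An illustrative normaliser for that convention (the real system's implementation may differ):

```js
// Underscores become hyphens so on-disk names and manifest names always agree.
const normaliseComputationName = (name) => name.trim().toLowerCase().replace(/_/g, '-');

normaliseComputationName('User_Profitability_Tracker'); // 'user-profitability-tracker'
```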
 
- ### `pipe.taskEngine`
+ 5.
+ Newly added computations must be integrated into an appropriate category. It is fine to introduce a new category name - this is dynamically auto-handled by the computation system and requires no new code changes - but the name must make clear what its computations represent. Furthermore, if a computation requires "yesterday" data, it must be added to a sub-folder named exactly "historical" within the relevant category; otherwise the computation will fail, as it will not receive the data it requires (see the layout sketch below). Please review the computation system documentation to understand this concept and why it matters.
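
A hypothetical category layout illustrating the rule (every name other than "historical" is invented for the example):

```
calculations/
  portfolio-risk/            <- category; any clear, descriptive name is auto-registered
    exposure-spread.js       <- daily-only computation
    historical/              <- must be named exactly "historical"
      profit-migration.js    <- also receives "yesterday" data
```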
 
- The "factory" or "workhorse" of the system. This pipe is event-driven, triggered by individual Pub/Sub messages from the `orchestrator` or `dispatcher`. It executes the core data-fetching operations.
+ 6.
+ New computations MUST - I repeat, MUST - define a static getDependencies() within the computation class IF they use the output of another computation. This allows the computation manifest to produce the exact order in which computations must run within the wider dependency architecture, as it uses Kahn's algorithm for topological sorting of dependencies. It MUST be given in the computation class in a format like:
 
- * **`handleRequest(message, context, config, dependencies)`**: The main entry point. It parses the Pub/Sub message, identifies the task `type`, and routes it to the correct sub-pipe handler.
- * **`handleDiscover(task, taskId, dependencies, config)`**: Fetches public ranking data for a batch of CIDs. It filters out private/inactive users, applies heuristics (for speculators), and chains to the `verify` task for any promising candidates.
- * **`handleVerify(task, taskId, dependencies, config)`**: Fetches the portfolio of a *single* user. It verifies they meet the criteria (e.g., holds a speculator asset) and, if so, saves their data and increments the block count.
- * **`handleUpdate(task, taskId, dependencies, config)`**: Fetches the latest portfolio for an *existing, verified* user and stores the new data in a sharded Firestore document.
+ ```js
+ static getDependencies() {
+   return ['user_profitability_tracker'];
+ }
+ ```
 
- ### `pipe.computationSystem`
+ Any computation that does not define a dependency is automatically assigned to pass 1; if that is incorrect, the computation will fail.
 
- The "refinery" of the system. This pipe is run on a schedule *after* data collection. It loads all the raw portfolio data, runs a suite of calculations (e.g., risk scores, profit migrations), and saves the results in a separate "unified insights" collection for the API to read.
+ Further to this, we must ensure that a computation does not create a dependency structure that results in circularity. It may well be the case that five passes are not enough, in which case a sixth pass would have to be developed; this is not a significant change and is easily implemented, but as-is, five passes are enough. A compact sketch of the pass-assignment logic follows.
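
A compact, Kahn-style layering sketch of how passes can be derived and circularity detected; this is illustrative only, and assumes every dependency also appears as a key:

```js
// Assign each computation a pass equal to its depth in the dependency DAG.
// If no computation is ready in an iteration, the remaining ones form a cycle.
function assignPasses(deps) { // deps: { name: [dependencyNames] }
  const passes = {};
  let remaining = Object.keys(deps);
  let pass = 1;
  while (remaining.length > 0) {
    const ready = remaining.filter(n => deps[n].every(d => passes[d] !== undefined));
    if (ready.length === 0) throw new Error(`Circular dependency among: ${remaining.join(', ')}`);
    ready.forEach(n => { passes[n] = pass; });
    remaining = remaining.filter(n => !ready.includes(n));
    pass += 1;
  }
  return passes;
}

assignPasses({
  'user-profitability-tracker': [],
  'profit-migration': ['user-profitability-tracker'],
}); // { 'user-profitability-tracker': 1, 'profit-migration': 2 }
```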
 
- * **`runOrchestration(config, dependencies)`**: The main entry point. It identifies which dates need processing (or backfilling), loads all data for those dates, initializes all calculation classes, and streams the data through them.
- * **`dataLoader`**: A set of helpers for loading the sharded portfolio data from Firestore efficiently.
- * **`computationUtils`**: A set of helpers for categorizing calculations (historical vs. daily), committing results in chunks, and finding the earliest available data date.
+ The manifest is almost entirely dynamic, and it is likely that you will not need to change manifest code to handle a newly introduced computation. There are some exceptions where a computation requires special handling; the intent is to resolve this by listing the conditions for how the manifest handles special computations within the computation code itself, for the manifest to read. A future update will cover this in more detail, and the results will be listed in the computation documentation; please refer to that, should you need it.
 
- ### `pipe.api`
+ 7.
+ There are two entry points for this system: one in the Bulltrackers module, which defines the exported pipes, and another you would build yourself to complete the architecture. The second is not included in this project, as it would expose the configurations used in the deployed cloud functions, but the index.js itself includes the following.
 
- The "tap" that serves the processed data. This pipe is an Express.js application designed to be run as a single Cloud Function, providing a public-facing API for the frontend.
+ ```JS
+ const { Firestore, FieldValue, FieldPath } = require('@google-cloud/firestore');
+ const { PubSub } = require('@google-cloud/pubsub');
+ const { VertexAI } = require('@google-cloud/vertexai');
+ const { logger } = require('sharedsetup')(__filename);
+ const { pipe } = require('bulltrackers-module');
+ const { calculations, utils } = require('aiden-shared-calculations-unified');
+
+ const fs = require('fs');
+ const path = require('path');
+ const db = new Firestore();
+ const pubsub = new PubSub();
+
+ const vertexAI = new VertexAI({ project: 'some_project', location: 'europe-west1' });
+ const geminiModel = vertexAI.getGenerativeModel({ model: 'gemini-2.5-flash-lite' });
+ const configDir = path.join(__dirname, 'config');
+ const configFiles = fs.readdirSync(configDir).filter(f => /_(config|manifest)\.js$/.test(f));
+ const cfg = Object.fromEntries(configFiles.map(f => [f.replace(/_(config|manifest)\.js$/, ''), require(path.join(configDir, f))]));
+
+ const headerManager = new pipe.core.IntelligentHeaderManager(db, logger, cfg.core);
+ const proxyManager = new pipe.core.IntelligentProxyManager (db, logger, cfg.core);
+ const batchManager = new pipe.core.FirestoreBatchManager (db, headerManager, logger, cfg.taskEngine);
+
+ const allDependencies = { db, pubsub, logger, FieldValue, FieldPath, headerManager, proxyManager, batchManager, firestoreUtils: pipe.core.firestoreUtils, pubsubUtils: pipe.core.pubsubUtils, geminiModel, calculationUtils: utils };
+ const handlers = {
+ discoveryOrchestrator         : () => pipe.orchestrator     .runDiscoveryOrchestrator (cfg.orchestrator     , allDependencies),
+ updateOrchestrator            : () => pipe.orchestrator     .runUpdateOrchestrator    (cfg.orchestrator     , allDependencies),
+ speculatorCleanupOrchestrator : () => pipe.maintenance      .runSpeculatorCleanup     (cfg.cleanup          , allDependencies),
+ fetchEtoroInsights            : () => pipe.maintenance      .runFetchInsights         (cfg.fetchInsights    , allDependencies),
+ updateClosingPrices           : () => pipe.maintenance      .runFetchPrices           (cfg.priceFetcher     , allDependencies),
+ socialOrchestrator            : () => pipe.maintenance      .runSocialOrchestrator    (cfg.social           , allDependencies),
+ priceBackfill                 : () => pipe.maintenance      .runBackfillAssetPrices   (cfg.priceBackfill    , allDependencies),
+
+ taskEngineHandler             : (m, c) => pipe.taskEngine   .handleRequest            (m, c, cfg.taskEngine        , allDependencies),
+ dispatcher                    : (m, c) => pipe.dispatcher   .handleRequest            (m, c, cfg.dispatcher        , allDependencies),
+ invalidSpeculatorHandler      : (m, c) => pipe.maintenance  .handleInvalidSpeculator  (m, c, cfg.invalidSpeculator , allDependencies),
+ socialTaskHandler             : (m, c) => pipe.maintenance  .handleSocialTask         (m, c, cfg.social            , allDependencies),
+
+ computationSystemPass1        : () => pipe.computationSystem.runComputationPass(cfg.computationSystem, allDependencies, cfg.computation),
+ computationSystemPass2        : () => pipe.computationSystem.runComputationPass(cfg.computationSystem, allDependencies, cfg.computation),
+ computationSystemPass3        : () => pipe.computationSystem.runComputationPass(cfg.computationSystem, allDependencies, cfg.computation),
+ computationSystemPass4        : () => pipe.computationSystem.runComputationPass(cfg.computationSystem, allDependencies, cfg.computation),
+ computationSystemPass5        : () => pipe.computationSystem.runComputationPass(cfg.computationSystem, allDependencies, cfg.computation),
+ genericApiV2                  : pipe.api.createApiApp(cfg.genericApiV2, allDependencies, calculations)
+ };
+ Object.assign(exports, handlers);
+ if (require.main === module) {
+   const port = process.env.PORT || 8080;
+   exports.genericApiV2.listen(port, () => console.log(`API listening on port ${port}`));
+ }
+ ```
 
- * **`createApiApp(config, dependencies, unifiedCalculations)`**: The main entry point. It returns a fully configured Express app instance, complete with middleware (CORS, JSON) and all API routes.
- * **API Helpers** (`buildCalculationMap`, `validateRequest`, `fetchUnifiedData`, etc.): Sub-pipes that handle request validation, fetching data from the "unified insights" collection, and generating debug manifests.
+ This index.js exposes the entry point for each cloud function, linked to its relevant pipeline, which allows GCP, my chosen cloud computing provider, to deploy each function. I personally use a custom deployment system to handle the commands for each pipeline; this system will be included in a future, separate npm package. It also handles passing the configurations, .env values and all other key aspects.
 
- ### `pipe.maintenance`
+ The key point is that both of these core index.js entry points are clean and easy to read: they provide clear names for their intentions, and it is easy to follow the pipes to each individual cloud function. The inner workings of each pipeline are hidden behind its own internal modules, so a change to one pipe has no direct effect on any other pipe, with the sole exception of modifications to the core pipeline, which is used across all functions but rarely needs modifying.
 
- A collection of standalone utility and cleanup functions, typically run on a schedule.
+ All cloud functions have their own pipe, with one exception: the orchestrators and handlers for both portfolio and social tasks pass through a single pipe that internally splits into two sub-pipes, which handle the orchestration and the processing of tasks as two separate cloud functions. In the case of portfolio processing, it actually uses two cloud functions for orchestration and one for task processing. More on this architecture, and the reasoning behind these choices, can be found in the task engine and social processing documentation.
 
- * **`runSpeculatorCleanup(config, dependencies)`**: De-registers stale speculators who haven't held a qualifying asset for a set grace period, and decrements the block counts.
- * **`handleInvalidSpeculator(message, context, config, dependencies)`**: An event-driven function that listens for Pub/Sub messages containing CIDs of users found to be private or invalid, adding them to an exclusion list.
- * **`runFetchInsights(config, dependencies)`**: A scheduled function to fetch general eToro market insights (e.g., buy/sell percentages).
- * **`runFetchPrices(config, dependencies)`**: A scheduled function to fetch daily closing prices for all instruments.
+ It is OK for a main pipeline, exposed in the outer index.js, to handle multiple cloud functions so long as their core functionality is similar. Two cloud functions with only slightly differing objectives can share the same core pipeline and split into sub-pipes at the inner layers. However, a pipeline should not contain functionality dissimilar to the rest of that same pipeline. A future development may change this, further simplifying the outer index.js down to solely the core features: the discovery and update orchestrators would merge into a main pipeline with the task engine and dispatcher, as they all work together towards one end objective - the processing of user portfolios - exposing a single primary portfolioProcessor pipeline.
 
- ### `pipe.proxy`
+ [ Objectives ]
 
- The logic for the Google Apps Script web app that acts as the secure proxy layer. This code is *not* run in the Node.js environment; it's intended to be deployed directly into an Apps Script project.
+ There are some clear, high-level objectives that this project - encompassing the dozens of npm packages it will likely require for an end result - attempts to achieve.
 
- * **`handlePost(e)`**: The main `doPost(e)` function for the Apps Script. It parses the request from the `IntelligentProxyManager`, executes it using `UrlFetchApp`, and returns the response.
+ These are:
 
- -----
+ 1. A world-class demonstration of how to build, from the ground up, a scalable, highly efficient data ingestion pipeline that does far more than simply ingest.
 
- ## Data Flow
+ 2. A comprehensive data computation layer that allows extremely advanced, highly complex computations to be integrated seamlessly and dynamically, with the ability to build data upon data and a limitless range of possible computations, all scalable and processed not in seconds, but milliseconds.
 
- The system operates in four distinct, decoupled stages:
+ 3. A dynamically built, completely automated frontend integration that exposes easy-to-use displays for all the computed data, along with advanced customisation options and a no-code layer allowing an end user to produce their own computations on top of raw or pre-computed data.
 
- 1. **Stage 1: Discovery (Scheduled)**
+ 4. A demonstration of what is possible for the eToro community and their own developers, of particular interest to the Popular Investors fortunate enough to be handed access to the API layer. I must add the caveat that you will likely not be able to build anything near the scale of this project, as you will face rate limits and far less data capability, but it should give you some idea of what you can build yourself. Please check out the separate computation module for some inspiration.
 
- * `pipe.orchestrator.runDiscoveryOrchestrator` runs.
- * It determines it needs 5,000 new `normal` users.
- * It creates 50 `discover` tasks (100 CIDs each) and publishes them to the `task-engine` topic.
- * `pipe.taskEngine` functions spin up, `handleDiscover` finds 200 promising users.
- * It publishes 200 `verify` tasks back to the `task-engine` topic.
- * `pipe.taskEngine` functions `handleVerify` these users, and 75 are saved to the database.
+ 5. A personal test for myself. As the sole developer of this project, and fully expecting very few contributions to the wider system, this is a significant undertaking. I am not a professional developer, so this is quite the learning curve. I believe in the hidden, unused value that data analysis can offer, and eToro offers - arguably unknowingly - a rare opportunity to build something of immense value.
 
- 2. **Stage 2: Update (Scheduled)**
+ 6. To produce a series of computations of such value that they are provably predictive in their insight, and thus have financial value to offer a front-end user.
 
- * `pipe.orchestrator.runUpdateOrchestrator` runs.
- * It finds 50,000 existing users that need an update.
- * It publishes *one* message containing all 50,000 tasks to the `dispatcher` topic.
- * `pipe.dispatcher.handleRequest` receives the message.
- * It loops, publishing 500 tasks at a time to the `task-engine` topic, with a 30-second delay between batches.
- * `pipe.taskEngine` functions `handleUpdate` the portfolios, scaling gradually.
+ [ Future Updates ]
 
- 3. **Stage 3: Computation (Scheduled)**
+ 1. Producing and exposing unit tests for each computation; these will be dynamically produced based on the contents of the computation.
+ 2. Refining logging, and building a custom tool for detecting problems across the codebase by creating a log sink that reads the results of the custom logger.log() function across the project.
+ 3. Reviewing existing computations, refining their results and optimising them. Then, developing pass-5 computations to prove that the signals of each signal-generating computation are alpha-generative.
+ 4. General solidification of the codebase: marking aspects as TODO, abstracting remaining magic numbers into configurable variables, and applying dependency injection to the few remaining imports.
+ 5. Building up the frontend integration, mapping data to charts, and stress-testing edge cases in schema formats.
 
- * `pipe.computationSystem.runOrchestration` runs (e.g., at 1:00 AM).
- * It loads all portfolio data collected in Stages 1 & 2.
- * It runs all calculations and saves the results to the `unified_insights` collection.
+ For coverage of each cloud function, please refer to its individual documentation, linked below.
 
- 4. **Stage 4: Serving (On-Demand)**
+ [View Computation System Documentation](./docs/ComputationSystem.MD)
+ [View Task Engine Documentation](./docs/TaskEngine.MD)
 
- * A frontend user loads a chart.
- * The browser sends a request to the `pipe.api` function.
- * The API validates the request, reads the *pre-computed* data from `unified_insights`, and returns it.
 
- ## Usage
 
- In your Google Cloud Function `index.js` file:
 
- ```javascript
- /**
-  * @fileoverview Unified Cloud Functions entry (Refactored for Pipe Architecture)
-  */
 
- // Import FieldPath here
- const { Firestore, FieldValue, FieldPath } = require('@google-cloud/firestore');
- const { PubSub } = require('@google-cloud/pubsub');
- const { logger } = require('sharedsetup')(__filename);
- const { pipe } = require('bulltrackers-module');
- const { calculations } = require('aiden-shared-calculations-unified');
- const fs = require('fs');
- const path = require('path');
-
- // --- Initialize Clients ---
- const db = new Firestore();
- const pubsub = new PubSub();
-
- // --- Load Configs ---
- const cfg = Object.fromEntries(fs.readdirSync(path.join(__dirname, 'config'))
- .filter(f => f.endsWith('_config.js'))
- .map(f => [path.basename(f, '_config.js'), require(path.join(__dirname, 'config', f))])
- );
-
- // --- Instantiate Core Managers ---
- const headerManager = new pipe.core.IntelligentHeaderManager(db, logger, cfg.core);
- const proxyManager = new pipe.core.IntelligentProxyManager (db, logger, cfg.core);
- const batchManager = new pipe.core.FirestoreBatchManager (db, headerManager, logger, cfg.taskEngine);
-
- // --- Master Dependencies ---
- const allDependencies = {
- db : db,
- pubsub : pubsub,
- logger : logger,
- FieldValue : FieldValue,
- FieldPath : FieldPath,
- headerManager : headerManager,
- proxyManager : proxyManager,
- batchManager : batchManager,
- firestoreUtils : pipe.core.firestoreUtils,
- pubsubUtils : pipe.core.pubsubUtils
- };
 
- // --- Export Cloud Function Handlers ---
- const handlers = {
- discoveryOrchestrator : () => pipe.orchestrator .runDiscoveryOrchestrator (cfg.orchestrator , allDependencies),
- updateOrchestrator : () => pipe.orchestrator .runUpdateOrchestrator (cfg.orchestrator , allDependencies),
- speculatorCleanupOrchestrator: () => pipe.maintenance .runSpeculatorCleanup (cfg.cleanup , allDependencies),
- fetchEtoroInsights : () => pipe.maintenance .runFetchInsights (cfg.fetchInsights , allDependencies),
- updateClosingPrices : () => pipe.maintenance .runFetchPrices (cfg.priceFetcher , allDependencies),
- computationSystem : () => pipe.computationSystem.runOrchestration (cfg.computationSystem, allDependencies),
-
- taskEngineHandler : (m, c) => pipe.taskEngine .handleRequest (m, c, cfg.taskEngine , allDependencies),
- dispatcher : (m, c) => pipe.dispatcher .handleRequest (m, c, cfg.dispatcher , allDependencies),
- invalidSpeculatorHandler : (m, c) => pipe.maintenance .handleInvalidSpeculator(m, c, cfg.invalidSpeculator, allDependencies)
- };
 
- Object.entries(handlers).forEach(([name, fn]) => exports[name] = fn);
 
- // --- API Export ---
- exports.genericApiV2 = pipe.api.createApiApp(cfg.genericApiV2, allDependencies, calculations);
 
- // --- Local API Server ---
- if (require.main === module) {
- const port = process.env.PORT || 8080;
- exports.genericApiV2.listen(port, () => console.log(`API listening on port ${port}`));
- }
- ```
package/functions/core/utils/intelligent_proxy_manager.js CHANGED
@@ -6,8 +6,6 @@
  * --- MODIFIED: Now includes exponential backoff and retries specifically for rate-limit errors. ---
  */
 const { FieldValue } = require('@google-cloud/firestore');
-
- // --- NEW: Added sleep utility ---
 const sleep = (ms) => new Promise(resolve => setTimeout(resolve, ms));
 
 class IntelligentProxyManager {
@@ -33,11 +31,8 @@ class IntelligentProxyManager {
 this.proxyLockingEnabled = config.proxyLockingEnabled !== false;
 this.proxies = {};
 this.configLastLoaded = 0;
-
- // --- NEW: Retry configuration ---
 this.MAX_RETRIES = 3;
 this.INITIAL_BACKOFF_MS = 1000;
-
 if (this.proxyUrls.length === 0) { this.logger.log('WARN', '[ProxyManager] No proxy URLs provided in config.');
 } else { const lockingStatus = this.proxyLockingEnabled ? "Locking Mechanism Enabled" : "Locking Mechanism DISABLED"; this.logger.log('INFO', `[ProxyManager] Initialized with ${this.proxyUrls.length} proxies and ${lockingStatus}.`); }
 }
@@ -68,10 +63,8 @@ class IntelligentProxyManager {
  */
 async _selectProxy() {
 await this._loadConfig();
-
 const availableProxies = this.proxyLockingEnabled ? Object.values(this.proxies).filter(p => p.status === 'unlocked') : Object.values(this.proxies);
- if (availableProxies.length === 0) { const errorMsg = this.proxyLockingEnabled ? "All proxies are locked. No proxy available." : "No proxies are loaded. Cannot make request.";
- this.logger.log('ERROR', `[ProxyManager] ${errorMsg}`); throw new Error(errorMsg); }
+ if (availableProxies.length === 0) { const errorMsg = this.proxyLockingEnabled ? "All proxies are locked. No proxy available." : "No proxies are loaded. Cannot make request."; this.logger.log('ERROR', `[ProxyManager] ${errorMsg}`); throw new Error(errorMsg); }
 const selected = availableProxies[Math.floor(Math.random() * availableProxies.length)];
 return { owner: selected.owner, url: selected.url };
 }
@@ -84,8 +77,7 @@ class IntelligentProxyManager {
 if (!this.proxyLockingEnabled) { this.logger.log('TRACE', `[ProxyManager] Locking skipped for ${owner} (locking is disabled).`); return; }
 if (this.proxies[owner]) { this.proxies[owner].status = 'locked'; }
 this.logger.log('WARN', `[ProxyManager] Locking proxy: ${owner}`);
- try { const docRef = this.firestore.doc(this.PERFORMANCE_DOC_PATH);
- await docRef.set({ locks: { [owner]: { locked: true, lastLocked: FieldValue.serverTimestamp() } } }, { merge: true });
+ try { const docRef = this.firestore.doc(this.PERFORMANCE_DOC_PATH); await docRef.set({ locks: { [owner]: { locked: true, lastLocked: FieldValue.serverTimestamp() } } }, { merge: true });
 } catch (error) { this.logger.log('ERROR', `[ProxyManager] Failed to write lock for ${owner} to Firestore.`, { errorMessage: error.message }); }
 }
 
@@ -97,85 +89,114 @@
  */
 async fetch(targetUrl, options = {}) {
 let proxy = null;
- try {
- proxy = await this._selectProxy();
- } catch (error) {
- // This happens if *all* proxies are locked.
- return { ok: false, status: 503, error: { message: error.message }, headers: new Headers() };
- }
-
+ try { proxy = await this._selectProxy(); } catch (error) { return { ok: false, status: 503, error: { message: error.message }, headers: new Headers() }; }
 let backoff = this.INITIAL_BACKOFF_MS;
 let lastResponse = null;
-
 for (let attempt = 1; attempt <= this.MAX_RETRIES; attempt++) {
 const response = await this._fetchViaAppsScript(proxy.url, targetUrl, options);
- lastResponse = response; // Always store the last response
-
+ lastResponse = response;
 // 1. Success
- if (response.ok) {
- return response;
- }
-
+ if (response.ok) { return response; }
 // 2. Rate Limit Error (Retryable)
- if (response.isRateLimitError) {
- this.logger.log('WARN', `[ProxyManager] Rate limit hit on proxy ${proxy.owner} (Attempt ${attempt}/${this.MAX_RETRIES}). Backing off for ${backoff}ms...`, { url: targetUrl });
- await sleep(backoff);
- backoff *= 2; // Exponential backoff
- // Continue to the next attempt
- continue;
- }
-
+ if (response.isRateLimitError) { this.logger.log('WARN', `[ProxyManager] Rate limit hit on proxy ${proxy.owner} (Attempt ${attempt}/${this.MAX_RETRIES}). Backing off for ${backoff}ms...`, { url: targetUrl }); await sleep(backoff); backoff *= 2; continue; }
 // 3. Other Fetch Error (Non-Retryable, Lock Proxy)
- if (response.isUrlFetchError) {
- this.logger.log('ERROR', `[ProxyManager] Proxy ${proxy.owner} failed (non-rate-limit). Locking proxy.`, { url: targetUrl, status: response.status });
- await this.lockProxy(proxy.owner);
- return response; // Fail fast and return
- }
-
+ if (response.isUrlFetchError) { this.logger.log('ERROR', `[ProxyManager] Proxy ${proxy.owner} failed (non-rate-limit). Locking proxy.`, { url: targetUrl, status: response.status }); await this.lockProxy(proxy.owner); return response; }
 // 4. Standard Error (e.g., 404, 500 from *target* URL, not proxy)
- // This was a "successful" proxy fetch of a failing URL. Not retryable.
- return response;
- }
-
+ return response; }
 // If loop finishes, all retries failed (likely all were rate-limit errors)
 this.logger.log('ERROR', `[ProxyManager] Request failed after ${this.MAX_RETRIES} rate-limit retries.`, { url: targetUrl });
 return lastResponse;
 }
 
 
+ // Inside backend_npm_pkgs/bulltrackers-module/functions/core/utils/intelligent_proxy_manager.js
+
 /**
  * Internal function to call the Google AppScript proxy.
- * --- MODIFIED: Now adds `isRateLimitError` flag to response ---
+ * --- MODIFIED: Now checks Content-Type for HTML to robustly detect rate limits ---
  * @private
  */
 async _fetchViaAppsScript(proxyUrl, targetUrl, options) {
 const payload = { url: targetUrl, ...options };
+ let response; // Declare response here to access in catch block
+
 try {
- const response = await fetch(proxyUrl, { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify(payload) });
-
- // This is an error with the *proxy function itself* (e.g., 500, 429)
+ response = await fetch(proxyUrl, {
+ method: 'POST',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify(payload)
+ });
+
+ // --- THIS IS THE DOCTYPE CHECK ---
+ // Check the response headers from the proxy itself.
+ const contentType = response.headers.get('content-type') || '';
+ if (contentType.includes('text/html')) {
+ // This is Google's HTML error page. This is a rate-limit error.
+ const errorText = await response.text();
+ this.logger.log('WARN', `[ProxyManager] Proxy returned HTML error page (rate limit).`, {
+ status: response.status,
+ proxy: proxyUrl,
+ errorSnippet: errorText.substring(0, 150) // Log a snippet
+ });
+
+ return {
+ ok: false,
+ status: response.status, // Will be 500, 503, etc.
+ isUrlFetchError: true,
+ isRateLimitError: true, // <--- This is the key change
+ error: { message: `Proxy returned HTML error page (likely rate limit).` },
+ headers: response.headers,
+ text: () => Promise.resolve(errorText)
+ };
+ }
+ // --- END DOCTYPE CHECK ---
+
+ // If it's not HTML, but still not OK (e.g., 400 Bad Request),
+ // it's a non-rate-limit proxy error.
 if (!response.ok) {
 const errorText = await response.text();
- this.logger.log('WARN', `[ProxyManager] Proxy infrastructure itself failed.`, { status: response.status, proxy: proxyUrl, error: errorText });
+ this.logger.log('WARN', `[ProxyManager] Proxy infrastructure itself failed (non-HTML).`, {
+ status: response.status,
+ proxy: proxyUrl,
+ error: errorText
+ });
+
+ // We can still check 429 here, just in case Google sends one.
 const isRateLimit = response.status === 429;
- return { ok: false, status: response.status, isUrlFetchError: true, isRateLimitError: isRateLimit, error: { message: `Proxy infrastructure failed with status ${response.status}` }, headers: response.headers, text: () => Promise.resolve(errorText) };
+
+ return {
+ ok: false,
+ status: response.status,
+ isUrlFetchError: true,
+ isRateLimitError: isRateLimit,
+ error: { message: `Proxy infrastructure failed with status ${response.status}` },
+ headers: response.headers,
+ text: () => Promise.resolve(errorText)
+ };
 }
-
+
+ // If we are here, Content-Type was application/json and status was OK.
 const proxyResponse = await response.json();
 
- // This is an error *returned by the proxy* (e.g., UrlFetchApp failed)
+ // Now we check for errors *inside* the JSON
+ // (e.g., the Apps Script caught an error and reported it).
 if (proxyResponse.error) {
 const errorMsg = proxyResponse.error.message || '';
- // --- NEW: Check for AppScript's rate limit error text ---
- if (errorMsg.toLowerCase().includes('service invoked too many times')) {
- this.logger.log('WARN', `[ProxyManager] Proxy quota error: ${proxyUrl}`, { error: proxyResponse.error });
- return { ok: false, status: 500, error: proxyResponse.error, isUrlFetchError: true, isRateLimitError: true, headers: new Headers() }; // <-- Set flag
+
+ // Fallback check for "invoked too many times" *inside* the JSON error,
+ // just in case. The HTML check is now our primary defense.
+ const isRateLimit = errorMsg.toLowerCase().includes('service invoked too many times');
+
+ if (isRateLimit) {
+ this.logger.log('WARN', `[ProxyManager] Proxy quota error (JSON): ${proxyUrl}`, { error: proxyResponse.error });
+ return { ok: false, status: 500, error: proxyResponse.error, isUrlFetchError: true, isRateLimitError: true, headers: new Headers() };
 }
- // Other UrlFetchApp error
- return { ok: false, status: 500, error: proxyResponse.error, isUrlFetchError: true, isRateLimitError: false, headers: new Headers(), text: () => Promise.resolve(errorMsg) };
+
+ // Other non-rate-limit error caught by the script
+ return { ok: false, status: 500, error: proxyResponse.error, isUrlFetchError: true, isRateLimitError: false, headers: new Headers(), text: () => Promise.resolve(errorMsg) };
 }
 
- // Success. The proxy fetched the target URL.
+ // Success!
 return {
 ok: proxyResponse.statusCode >= 200 && proxyResponse.statusCode < 300,
 status: proxyResponse.statusCode,
@@ -183,11 +204,19 @@ class IntelligentProxyManager {
 json: () => Promise.resolve(JSON.parse(proxyResponse.body)),
 text: () => Promise.resolve(proxyResponse.body),
 isUrlFetchError: false,
- isRateLimitError: false
+ isRateLimitError: false
+ };
+ } catch (networkError) {
+ // This catches fetch() failures (e.g., DNS, network down)
+ this.logger.log('ERROR', `[ProxyManager] Network error calling proxy: ${proxyUrl}`, { errorMessage: networkError.message });
+ return {
+ ok: false,
+ status: 0,
+ isUrlFetchError: true,
+ isRateLimitError: false, // Not a rate limit, a network failure
+ error: { message: `Network error: ${networkError.message}` },
+ headers: new Headers()
 };
- } catch (networkError) {
- this.logger.log('ERROR', `[ProxyManager] Network error calling proxy: ${proxyUrl}`, { errorMessage: networkError.message });
- return { ok: false, status: 0, isUrlFetchError: true, isRateLimitError: false, error: { message: `Network error: ${networkError.message}` }, headers: new Headers() };
 }
 }
 }
@@ -154,6 +154,7 @@ async function handleUpdate(task, taskId, { logger, headerManager, proxyManager,
 await batchManager.addToPortfolioBatch(userId, portfolioBlockId, today, JSON.parse(body), userType, requestInfo.instrumentId);
 }
 logger.log('DEBUG', 'Processing portfolio for user', { userId, portfolioUrl: requestInfo.url });
+ logger.log('DEBUG', 'Response returned for user', { userId, body });
 } else {
 logger.log('WARN', `Failed to fetch portfolio`, { userId, url: requestInfo.url, error: portfolioRes.reason || `status ${portfolioRes.value?.status}` });
 }
@@ -63,7 +63,7 @@ async function executeTasks(tasksToRun, otherTasks, dependencies, config, taskId
 // REMOVED: const historyFetchedForUser = new Set();
 
 // Create one unified parallel pool
- const limit = pLimit(config.TASK_ENGINE_CONCURRENCY || 10);
+ const limit = pLimit(config.TASK_ENGINE_CONCURRENCY || 3); // TODO Work out what the optimal concurrency is
 const allTaskPromises = [];
 let taskCounters = { update: 0, discover: 0, verify: 0, unknown: 0, failed: 0 };
 
package/index.js CHANGED
@@ -1,101 +1,53 @@
 /**
  * @fileoverview Main entry point for the Bulltrackers shared module.
- * This module consolidates all core logic into a single 'pipe' object
- * to enforce a clear naming convention and dependency injection pattern.
+ * Export the pipes!
  */
 
- // --- Core Utilities (Classes and Stateless Helpers) ---
-
- const core = {
- IntelligentHeaderManager : require('./functions/core/utils/intelligent_header_manager') .IntelligentHeaderManager,
- IntelligentProxyManager : require('./functions/core/utils/intelligent_proxy_manager') .IntelligentProxyManager,
- FirestoreBatchManager : require('./functions/task-engine/utils/firestore_batch_manager').FirestoreBatchManager,
- firestoreUtils : require('./functions/core/utils/firestore_utils'),
- pubsubUtils : require('./functions/core/utils/pubsub_utils'),
- };
-
- // --- Pipe 1: Orchestrator ---
-
- const orchestrator = {
- // Main Pipes (Entry points for Cloud Functions)
- runDiscoveryOrchestrator : require('./functions/orchestrator/index').runDiscoveryOrchestrator,
- runUpdateOrchestrator : require('./functions/orchestrator/index').runUpdateOrchestrator,
-
- // Sub-Pipes (Discovery)
- checkDiscoveryNeed : require('./functions/orchestrator/helpers/discovery_helpers').checkDiscoveryNeed,
- getDiscoveryCandidates : require('./functions/orchestrator/helpers/discovery_helpers').getDiscoveryCandidates,
- dispatchDiscovery : require('./functions/orchestrator/helpers/discovery_helpers').dispatchDiscovery,
-
- // Sub-Pipes (Updates)
- getUpdateTargets : require('./functions/orchestrator/helpers/update_helpers').getUpdateTargets,
- dispatchUpdates : require('./functions/orchestrator/helpers/update_helpers').dispatchUpdates,
- };
-
-
- // --- Pipe 2: Dispatcher ---
-
- const dispatcher = {
- handleRequest : require('./functions/dispatcher/index').handleRequest,
- dispatchTasksInBatches : require('./functions/dispatcher/helpers/dispatch_helpers').dispatchTasksInBatches,
- };
-
-
- // --- Pipe 3: Task Engine ---
-
- const taskEngine = {
- handleRequest : require('./functions/task-engine/handler_creator').handleRequest,
- handleDiscover : require('./functions/task-engine/helpers/discover_helpers').handleDiscover,
- handleVerify : require('./functions/task-engine/helpers/verify_helpers').handleVerify,
- handleUpdate : require('./functions/task-engine/helpers/update_helpers').handleUpdate,
- };
-
-
- // --- Pipe 4: Computation System ---
-
- const computationSystem = {
- runComputationPass : require('./functions/computation-system/helpers/computation_pass_runner').runComputationPass,
- dataLoader : require('./functions/computation-system/utils/data_loader'),
- computationUtils : require('./functions/computation-system/utils/utils'),
- };
-
-
- // --- Pipe 5: API ---
-
- const api = {
- createApiApp : require('./functions/generic-api/index').createApiApp,
- helpers : require('./functions/generic-api/helpers/api_helpers'),
- };
-
-
- // --- Pipe 6: Maintenance ---
-
- const maintenance = {
- runSpeculatorCleanup : require('./functions/speculator-cleanup-orchestrator/helpers/cleanup_helpers') .runCleanup,
- handleInvalidSpeculator : require('./functions/invalid-speculator-handler/helpers/handler_helpers') .handleInvalidSpeculator,
- runFetchInsights : require('./functions/fetch-insights/helpers/handler_helpers').fetchAndStoreInsights,
- runFetchPrices : require('./functions/etoro-price-fetcher/helpers/handler_helpers').fetchAndStorePrices,
- runSocialOrchestrator : require('./functions/social-orchestrator/helpers/orchestrator_helpers') .runSocialOrchestrator,
- handleSocialTask : require('./functions/social-task-handler/helpers/handler_helpers') .handleSocialTask,
- runBackfillAssetPrices : require('./functions/price-backfill/helpers/handler_helpers') .runBackfillAssetPrices,
- };
-
-
- // --- Pipe 7: Proxy ---
-
- const proxy = {
- handlePost : require('./functions/appscript-api/index').handlePost,
- };
-
-
- module.exports = {
- pipe: {
- core,
- orchestrator,
- dispatcher,
- taskEngine,
- computationSystem,
- api,
- maintenance,
- proxy,
- }
- };
+ // Core
+ const core = { IntelligentHeaderManager : require('./functions/core/utils/intelligent_header_manager') .IntelligentHeaderManager,
+ IntelligentProxyManager : require('./functions/core/utils/intelligent_proxy_manager') .IntelligentProxyManager,
+ FirestoreBatchManager : require('./functions/task-engine/utils/firestore_batch_manager') .FirestoreBatchManager,
+ firestoreUtils : require('./functions/core/utils/firestore_utils'),
+ pubsubUtils : require('./functions/core/utils/pubsub_utils') };
+
+ // Orchestrator
+ const orchestrator = { runDiscoveryOrchestrator : require('./functions/orchestrator/index') .runDiscoveryOrchestrator,
+ runUpdateOrchestrator : require('./functions/orchestrator/index') .runUpdateOrchestrator,
+ checkDiscoveryNeed : require('./functions/orchestrator/helpers/discovery_helpers') .checkDiscoveryNeed,
+ getDiscoveryCandidates : require('./functions/orchestrator/helpers/discovery_helpers') .getDiscoveryCandidates,
+ dispatchDiscovery : require('./functions/orchestrator/helpers/discovery_helpers') .dispatchDiscovery,
+ getUpdateTargets : require('./functions/orchestrator/helpers/update_helpers') .getUpdateTargets,
+ dispatchUpdates : require('./functions/orchestrator/helpers/update_helpers') .dispatchUpdates };
+
+ // Dispatcher
+ const dispatcher = { handleRequest : require('./functions/dispatcher/index') .handleRequest ,
+ dispatchTasksInBatches : require('./functions/dispatcher/helpers/dispatch_helpers') .dispatchTasksInBatches };
+
+ // Task Engine
+ const taskEngine = { handleRequest : require('./functions/task-engine/handler_creator') .handleRequest ,
+ handleDiscover : require('./functions/task-engine/helpers/discover_helpers') .handleDiscover,
+ handleVerify : require('./functions/task-engine/helpers/verify_helpers') .handleVerify ,
+ handleUpdate : require('./functions/task-engine/helpers/update_helpers') .handleUpdate };
+
+ // Computation System
+ const computationSystem = { runComputationPass : require('./functions/computation-system/helpers/computation_pass_runner') .runComputationPass,
+ dataLoader : require('./functions/computation-system/utils/data_loader'),
+ computationUtils : require('./functions/computation-system/utils/utils') };
+
+ // API
+ const api = { createApiApp : require('./functions/generic-api/index') .createApiApp,
+ helpers : require('./functions/generic-api/helpers/api_helpers') };
+
+ // Maintenance
+ const maintenance = { runSpeculatorCleanup : require('./functions/speculator-cleanup-orchestrator/helpers/cleanup_helpers') .runCleanup,
+ handleInvalidSpeculator : require('./functions/invalid-speculator-handler/helpers/handler_helpers') .handleInvalidSpeculator,
+ runFetchInsights : require('./functions/fetch-insights/helpers/handler_helpers') .fetchAndStoreInsights,
+ runFetchPrices : require('./functions/etoro-price-fetcher/helpers/handler_helpers') .fetchAndStorePrices,
+ runSocialOrchestrator : require('./functions/social-orchestrator/helpers/orchestrator_helpers') .runSocialOrchestrator,
+ handleSocialTask : require('./functions/social-task-handler/helpers/handler_helpers') .handleSocialTask,
+ runBackfillAssetPrices : require('./functions/price-backfill/helpers/handler_helpers') .runBackfillAssetPrices };
+
+ // Proxy
+ const proxy = { handlePost : require('./functions/appscript-api/index') .handlePost };
+
+ module.exports = { pipe: { core, orchestrator, dispatcher, taskEngine, computationSystem, api, maintenance, proxy } };
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
 "name": "bulltrackers-module",
- "version": "1.0.155",
+ "version": "1.0.157",
 "description": "Helper Functions for Bulltrackers.",
 "main": "index.js",
 "files": [