bulltrackers-module 1.0.172 → 1.0.173

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.MD CHANGED
@@ -1,179 +1,3 @@
1
- [ MAIN README ]
2
-
3
- Project Overview
4
-
5
- Welcome to the Bulltrackers Project
6
-
7
- This project is a wide-ranging ingestion pipeline for the analysis of an infinitely scalable set of retail portfolios, focusing specifically on public eToro accounts. It ingests eToro metrics across user portfolios, social data, asset pricing and trading data, builds in-depth computations on that data, and then iteratively produces further computations from the results of prior computations. All of this is then exposed through a customised, dynamically generated, type-safe frontend integration.
8
-
9
- The objective is simple: transform what is possible for the everyday Joe by giving them the same level of access hedge funds enjoy, without the millions of dollars in upfront costs. For too long, hedge funds have held the upper hand, with pipelines connected directly to brokerages, where every trade you make is immediately sent to a dozen firms trading against you, feeding the algorithms they use to make decisions.
10
-
11
- This project turns the tables, gives you the upper hand, and hands you the data they pay millions for.
12
-
13
- What if you could build a system to monitor exactly what the best - and worst - are doing? Copy their trades, or inverse them.
14
- This is just one single possibility that this project offers, one of an infinite range.
15
- If you can think of it, you can build it.
16
-
17
-
18
- [ Architecture ]
19
-
20
- This project consists of 3 primary backend packages.
21
-
22
- Bulltrackers - This is the core package, containing all of the cloud functions used in the project and a heavily customised, highly efficient and robust pipeline for every piece of functionality. It is designed so that costs are aggressively minimised, processing is lightning fast, and creating and deploying an additional cloud function is as simple as adding a new file, plugging it into the pipeline and setting up its config.
23
-
24
- Calculations - This is the companion package for the computations module of the Bulltrackers package. It provides a way to insert a new computation into its relevant category, or introduce a new category, without any additional code changes. Simply create a computation, list any dependencies, and the computation system handles the rest.
25
-
26
- web-components - This is the frontend-as-a-backend integration. Rather than building a complex frontend that must be modified every time we introduce a new computation, this system produces a completely automated, type-safe schema for new computations directly from the computation system. We simply write a new computation and give it a schema, and once it has run, the web-components pull the new schema in. The result is that we avoid the annoyance of TypeScript fiddling: we pass a chart component the data to render, set the position and the page path, and the job is done.
27
-
28
- [ Things you MUST understand before attempting to understand this project ]
29
-
30
- 1.
31
- This is NOT a project you can simply "npm install"; in fact, doing so is a very bad idea. You will rapidly find that it won't work: I have not provided the configurations for API routes or the core parameters for its inner workings, and much of the logic will not make sense to you, regardless of your experience in developing anything. This is intentional; I will never provide the routes for finding the data processed here, as it would be irresponsible of me. Furthermore, if you attempt to find the configuration values yourself through whatever means, you are likely to violate your own country's laws. Scraping can be - and has been - treated as a criminal act in some jurisdictions, so you MUST seek permission from both eToro and, depending on your objectives, the targeted user(s). Without this permission, I heavily caution against any interaction with this project, and warn you that you risk both civil and criminal charges that can ruin your life.
32
-
33
- Do not take this risk.
34
-
35
- 2.
36
- This project revolves around a few key facts about the eToro system: how it works, and the fundamental inner workings of what is perhaps a rather unique brokerage. Indeed, without much of how eToro works being the way it is, this project would not be possible. Beyond eToro being my primary brokerage, this is a key reason for choosing it as the data provider: its architectural choices and its own website configuration make this project possible. Not easy, but possible.
37
-
38
- One of these key facts concerns what happens when a user creates their account. Unlike most sites, eToro does not assign randomised IDs to each user upon account creation; IDs are simple integers ordered by account creation date. Account ID 1 is the first account, Account ID 35m is the 35 millionth account, and so on. This is a key fact to understand when reviewing the task engine and orchestrators.
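A minimal sketch of why this matters for discovery: because IDs are sequential integers, candidate users can be enumerated as numeric ranges rather than crawled. The generator below is purely illustrative; the batch size and bounds are assumptions, not the task engine's actual parameters.

```js
// Illustrative only: enumerate candidate user IDs in fixed-size batches.
function* idBatches(startId, endId, batchSize = 100) {
  for (let id = startId; id <= endId; id += batchSize) {
    const count = Math.min(batchSize, endId - id + 1);
    yield Array.from({ length: count }, (_, i) => id + i);
  }
}

// e.g. [...idBatches(1, 250, 100)] -> [1..100], [101..200], [201..250]
```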
39
-
40
- Another fact is how eToro processes requests for user data. It is not possible to fetch most of a private account's data; there are some exceptions, but for the most part, when using an API route that allows user batching - submitting multiple user IDs in one request - any private user passed to that API is simply omitted from the response: not null, literally absent. This is a very useful fact. As you will find later, I use this built-in filter to my advantage, and it saves a significant amount of wall-time in processing. It also forms a mathematically powerful positive feedback loop, because we can mark any user we find to be private and ensure we never try to process them again.
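A minimal sketch of that filter in action, using hypothetical helper names (fetchPortfolioBatch, markUserPrivate) rather than the project's real API: users missing from a batched response are treated as private and flagged so they are never queued again.

```js
// Illustrative only: the batched endpoint returns entries solely for public users,
// so anyone absent from the response is marked private and skipped in future runs.
async function processBatch(userIds, { fetchPortfolioBatch, markUserPrivate }) {
  const response    = await fetchPortfolioBatch(userIds);       // e.g. [{ userId, portfolio }, ...]
  const returnedIds = new Set(response.map(entry => entry.userId));
  const privateIds  = userIds.filter(id => !returnedIds.has(id));
  await Promise.all(privateIds.map(id => markUserPrivate(id)));
  return response;                                              // public users only
}
```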
41
-
42
- 3.
43
- It is not possible to discover how much an account is worth. This was briefly possible in early January 2025, through a serious oversight in an update eToro performed which exposed a parameter called LotCount. That parameter showed the number of shares each public user owned for a given asset, and by extrapolation you could infer their portfolio values. The developer of this project was solely responsible for uncovering this bug and reporting it, and as a result holds considerable favour within both the eToro user and staff communities. What is possible, however, is determining whether a user is of Bronze tier. While all other tier information is private, eToro intentionally exposes whether a given public account is Bronze tier, meaning a portfolio value under 5k USD. This metric is used in some computations to highlight the difference - or lack thereof - between Bronze and non-Bronze portfolios. To repeat: this is a public metric. I access no user data beyond what is publicly available, other tier information is not accessible, and it has been clarified with eToro that exposing whether a user is Bronze tier is indeed the intended mechanism.
44
-
45
- 4.
46
- Because of the above restriction - the exact dollar value of a single position cannot be identified - no computations revolve around USD values, but a fair number revolve around valuing positions based on their P&L. The main portfolio API used in the task engine returns ONLY the top-level positioning of each user's portfolio: for example, it will show a 9% exposure to NVDA, but not that this is actually 9 positions of NVDA at 1% each. We cannot see the individual position breakdown. It is possible to get this through another endpoint, but that requires iterating over every single position one at a time; this is not worthwhile for normal user task types, but is used for speculator task types, where we know which position is worth looking at. Of note, the main portfolio API does not return details like leverage or entry point; it simply returns the averages of the whole position, while the individual position API used by the speculator task type returns this more granular data. On the flip side, the history API used for all task types returns closed trade values over a YTD period, and each day's snapshot can be used to compute the individual position changes, as sketched below. This concept is covered in more depth in the task engine documentation.
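A minimal sketch of the snapshot-diff idea, under the assumption (not the project's actual schema) that each daily history snapshot maps an instrument ID to a cumulative YTD closed P&L figure; differencing two snapshots yields what was realised on that day per instrument.

```js
// Illustrative only: derive per-instrument closed P&L for a single day
// from two cumulative YTD history snapshots.
function dailyClosedPnlDelta(todayHistory, yesterdayHistory) {
  const delta = {};
  for (const [instrumentId, ytdPnl] of Object.entries(todayHistory)) {
    const prior = yesterdayHistory?.[instrumentId] ?? 0;
    delta[instrumentId] = ytdPnl - prior; // realised today for this instrument
  }
  return delta;
}
```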
47
-
48
- [ Conventions ]
49
-
50
- 1.
51
- The Pipe architecture used in the core bulltrackers package is a MUST. This pipe architecture is a fundamental feature of the wider system and enables dependency injection, making every cloud function reusable, testable and standalone. A single cloud function can break - which is inevitable - and the wider system is unaffected. All cloud functions must use dependency injection and never define their own require statements within their packages; we simply pass their dependencies down.
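A minimal sketch of the convention, not an actual cloud function from the project: the module requires nothing itself, and every external service arrives through the injected dependencies object (the shape loosely mirrors the allDependencies object in the index.js shown further down, but the body and config.targetCollection are illustrative).

```js
// Illustrative only: a pipe-style cloud function that receives its dependencies
// rather than require()-ing them locally.
async function runExampleFunction(config, dependencies) {
  const { db, logger } = dependencies;                  // injected, never required here
  logger.log('INFO', 'Starting example function');
  const snapshot = await db.collection(config.targetCollection).limit(10).get();
  return { processed: snapshot.size };
}

module.exports = { runExampleFunction };
```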
52
-
53
- 2.
54
- As with all professional systems, DRY applies. Wherever it is feasible, sensible and logical to do so, cloud function functionality should be abstracted into helper modules or - if used across multiple cloud functions - moved into the core pipe, which is passed to all cloud functions.
55
-
56
- 3.
57
- Firestore usage is huge within this project, and as such it is a primary cost driver as well as a significant driver of wall-time. It is therefore an absolute rule never to design Firestore queries, reads, writes or deletes in a way that is inefficient. Go to great lengths to reduce, simplify and redesign anything that uses Firestore inefficiently; this is obligatory for any cost-efficient system.
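A minimal sketch of what that rule looks like in practice, using the Firestore client's batched getAll() lookup instead of N sequential reads; the collection and document names here are assumptions for illustration only.

```js
// Illustrative only: fetch a batch of user documents in one round trip
// rather than issuing one read per user.
async function loadUserDocs(db, userIds) {
  const refs      = userIds.map(id => db.collection('users').doc(String(id)));
  const snapshots = await db.getAll(...refs);   // single batched read
  return snapshots.filter(snap => snap.exists).map(snap => snap.data());
}
```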
58
-
59
- 4.
60
- Computations in the computation system are automatically forced into a strict naming convention: underscores are removed and replaced with hyphens. This is necessary to ensure we do not accidentally name new computations in a way the system doesn't expect and trigger a string of errors. Under the pass architecture of the computation system, a single failed computation blocks all of its dependents, producing a whole array of errors that can take significant time to debug, so catching common mistakes early is key.
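A minimal, illustrative equivalent of that normalisation (the manifest uses its own normalizeName helper; this is not the actual implementation):

```js
// Illustrative only: force computation names into the hyphenated convention.
const normalizeName = name => name.trim().toLowerCase().replace(/_/g, '-');

normalizeName('User_Profitability_Tracker'); // -> 'user-profitability-tracker'
```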
61
-
62
- 5.
63
- Newly added computations must be placed in an appropriate category. It is fine to introduce a new category name - this is handled dynamically by the computation system and requires no new code changes - but the name must make clear what the computations in it represent. Furthermore, if a computation requires "yesterday" data, it must be added to a sub-folder named exactly "historical" within the relevant category, otherwise the computation will fail because it will not receive the data it requires. Please review the computation system documentation to understand this concept and why it matters.
64
-
65
- 6.
66
- New computations MUST - I repeat - MUST define a static getDependencies() within the computation class IF they depend on another computation. By adding this, you allow the computation manifest to determine the exact order in which the computation must run within the wider dependency architecture, as it uses Kahn's algorithm to topologically sort dependencies. This MUST be given in the computation class in a format like:
67
-
68
- ```js
69
- static getDependencies() {
70
- return ['user_profitability_tracker'];
71
- }
72
- ```
73
-
74
- Any computation that does not define a dependency is automatically assigned to pass 1; if this is incorrect, the computation will fail.
75
-
76
- Furthermore, we must ensure that a computation does not create a dependency structure that results in circularity. It may also be the case that 5 passes are not enough for a deeper dependency chain, in which case a 6th pass would have to be developed; this is not a significant change and is easily implementable, but as-is, 5 passes is enough. A sketch of how passes are assigned follows.
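A minimal sketch of how passes fall out of Kahn's algorithm, with illustrative names rather than the manifest's real API: computations with no dependencies land on pass 1, every dependent computation lands on (max pass of its dependencies) + 1, and anything left unassigned indicates a cycle.

```js
// Illustrative only: assign pass numbers via Kahn's algorithm.
// deps maps each calculation name to the names it depends on.
function assignPasses(deps) {
  const names     = Object.keys(deps);
  const pass      = {};
  const inDegree  = Object.fromEntries(names.map(n => [n, deps[n].length]));
  const consumers = {};                                 // dependency -> calcs consuming it
  for (const n of names) {
    for (const d of deps[n]) {
      if (!consumers[d]) consumers[d] = [];
      consumers[d].push(n);
    }
  }

  const queue = names.filter(n => inDegree[n] === 0);   // no dependencies -> pass 1
  queue.forEach(n => (pass[n] = 1));
  while (queue.length) {
    const current = queue.shift();
    for (const consumer of consumers[current] || []) {
      pass[consumer] = Math.max(pass[consumer] || 1, pass[current] + 1);
      if (--inDegree[consumer] === 0) queue.push(consumer);
    }
  }
  if (Object.keys(pass).length !== names.length) throw new Error('Circular dependency detected');
  return pass; // e.g. { 'base-metric': 1, 'derived-signal': 2 }
}
```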
77
-
78
- The manifest is almost entirely dynamic, and it is likely you will not need to change manifest code to handle a newly introduced computation. There are some exceptions where a computation requires special handling; the intent is to resolve these by listing the conditions for how the manifest handles special computations within the computation code itself, for the manifest to then read. A future update will cover this in more detail, and its results will be listed in the computation documentation, so please refer to that should you need it.
79
-
80
- 7.
81
- There are 2 entry points for this system: one in the Bulltrackers module, which defines the exported pipes, and another you would build yourself to complete the architecture. The 2nd is not included in this project, as it would expose the configurations used in the deployed cloud functions, but the index.js itself includes the following.
82
-
83
- ```js
84
- const { Firestore, FieldValue, FieldPath } = require('@google-cloud/firestore');
85
- const { PubSub } = require('@google-cloud/pubsub');
86
- const { VertexAI } = require('@google-cloud/vertexai');
87
- const { logger } = require('sharedsetup')(__filename);
88
- const { pipe } = require('bulltrackers-module');
89
- const { calculations, utils } = require('aiden-shared-calculations-unified');
90
-
91
- const fs = require('fs');
92
- const path = require('path');
93
- const db = new Firestore();
94
- const pubsub = new PubSub();
95
-
96
- const vertexAI = new VertexAI({project: 'some_project', location: 'europe-west1'});
97
- const geminiModel = vertexAI.getGenerativeModel({ model: 'gemini-2.5-flash-lite',});
98
- const configDir = path.join(__dirname, 'config');
99
- const configFiles = fs.readdirSync(configDir).filter(f => /_(config|manifest)\.js$/.test(f));
100
- const cfg = Object.fromEntries(configFiles.map(f => [f.replace(/_(config|manifest)\.js$/, ''),require(path.join(configDir, f))]));
101
-
102
- const headerManager = new pipe.core.IntelligentHeaderManager(db , logger, cfg.core);
103
- const proxyManager = new pipe.core.IntelligentProxyManager (db , logger, cfg.core);
104
- const batchManager = new pipe.core.FirestoreBatchManager (db, headerManager, logger, cfg.taskEngine);
105
-
106
- const allDependencies = { db, pubsub, logger, FieldValue, FieldPath, headerManager, proxyManager, batchManager, firestoreUtils: pipe.core.firestoreUtils, pubsubUtils: pipe.core.pubsubUtils, geminiModel, calculationUtils: utils };
107
- const handlers = {
108
- discoveryOrchestrator : () => pipe.orchestrator .runDiscoveryOrchestrator (cfg.orchestrator , allDependencies),
109
- updateOrchestrator : () => pipe.orchestrator .runUpdateOrchestrator (cfg.orchestrator , allDependencies),
110
- speculatorCleanupOrchestrator : () => pipe.maintenance .runSpeculatorCleanup (cfg.cleanup , allDependencies),
111
- fetchEtoroInsights : () => pipe.maintenance .runFetchInsights (cfg.fetchInsights , allDependencies),
112
- updateClosingPrices : () => pipe.maintenance .runFetchPrices (cfg.priceFetcher , allDependencies),
113
- socialOrchestrator : () => pipe.maintenance .runSocialOrchestrator (cfg.social , allDependencies),
114
- priceBackfill : () => pipe.maintenance .runBackfillAssetPrices (cfg.priceBackfill , allDependencies),
115
-
116
- taskEngineHandler : (m, c) => pipe.taskEngine .handleRequest (m, c, cfg.taskEngine , allDependencies),
117
- dispatcher : (m, c) => pipe.dispatcher .handleRequest (m, c, cfg.dispatcher , allDependencies),
118
- invalidSpeculatorHandler : (m, c) => pipe.maintenance .handleInvalidSpeculator (m, c, cfg.invalidSpeculator , allDependencies),
119
- socialTaskHandler : (m, c) => pipe.maintenance .handleSocialTask (m, c, cfg.social , allDependencies),
120
-
121
- computationSystemPass1 : () => pipe.computationSystem.runComputationPass(cfg.computationSystem, allDependencies, cfg.computation),
122
- computationSystemPass2 : () => pipe.computationSystem.runComputationPass(cfg.computationSystem, allDependencies, cfg.computation),
123
- computationSystemPass3 : () => pipe.computationSystem.runComputationPass(cfg.computationSystem, allDependencies, cfg.computation),
124
- computationSystemPass4 : () => pipe.computationSystem.runComputationPass(cfg.computationSystem, allDependencies, cfg.computation),
125
- computationSystemPass5 : () => pipe.computationSystem.runComputationPass(cfg.computationSystem, allDependencies, cfg.computation),
126
- genericApiV2 : pipe.api.createApiApp(cfg.genericApiV2, allDependencies, calculations)
127
- };
128
- Object.assign(exports, handlers);
129
- if (require.main === module) {const port = process.env.PORT || 8080;
130
- exports.genericApiV2.listen(port, () => console.log(`API listening on port ${port}`));}
131
- ```
132
-
133
- This index.js exposes the entry point for each cloud function, each of which is linked to its relevant pipeline, allowing GCP, my chosen cloud computing provider, to deploy each function. I personally use a custom deployment system to run the commands for each pipeline, and this system will be included in a future separate npm package; it also handles passing the configurations, .env values and all other key aspects.
134
-
135
- The key takeaway is that both of these core index.js entry points are clean and easy to read: they provide clear names for their intentions, and it is easy to follow the pipes to each individual cloud function. The inner workings of each pipeline are hidden behind its own internal modules, so a change to one pipe has no direct effect on any other pipe, with the sole exception of modifications to the core pipeline, which is used across all functions but rarely needs modifying.
136
-
137
- All cloud functions have their own pipe, with the exception that the orchestrators and handlers for both portfolio and social tasks are passed through a single pipe which internally splits into 2 sub-pipes; these handle the orchestration of the tasks and the handling of the tasks as 2 separate cloud functions - or, in the case of portfolio processing, 2 cloud functions for orchestration and 1 cloud function for task processing. More on this architecture and the reasoning behind these choices can be found in the task engine and social processing documentation.
138
-
139
- It is OK for a main pipeline, exposed in the outer index.js, to handle multiple cloud functions so long as their core functionality is similar. 2 cloud functions that exist to process only slightly different objectives can be given the same core pipeline and then split into sub-pipes at the inner layers. However, a pipeline should not contain functionality that is dissimilar to other components of that same pipeline. A future development may change this, to further simplify the outer index.js into solely the core features. This would result in the discovery & update orchestrators being merged into a main pipeline with the task engine and dispatcher, as they all work together to achieve one end objective - the processing of user portfolios. This would then expose a single primary portfolioProcessor pipeline.
140
-
141
- [ Objectives ]
142
-
143
- There are some clear, high-level objectives that this project - encompassing the likely dozens of npm packages it will require for an end result - attempts to achieve.
144
-
145
- These are:
146
-
147
- 1. A world-class demonstration of how to build, from the ground up, a scalable, highly efficient data ingestion pipeline that does more than just ingest.
148
-
149
- 2. A comprehensive data computation layer which allows for extremely advanced, highly complex computations to be integrated seamlessly and dynamically, with the ability to build data upon data, and have a limitless range of possible computations, all of which are scalable and processed in not seconds, but milliseconds.
150
-
151
- 3. A dynamically built, completely automated frontend integration that exposes easy-to-use displays for all the computed data, along with advanced customisation options and a no-code layer allowing an end-user to produce their own computations on top of raw data or pre-computed data.
152
-
153
- 4. A demonstration of what is possible for the eToro community and their own developers, of particular interest to the Popular Investors fortunate enough to have been handed access to the API layer. I must add the caveat that you will likely not be able to build anything near the scale of this project, as you will face rate limits and far less data capability than this project allows, but it should give you some idea of what you can build yourself. Please check out the separate computation module for some inspiration.
154
-
155
- 5. A personal test for myself. As the sole developer of this project, and fully expecting very few contributions to the wider system, this is a significant undertaking. I am not a professional developer, so this is quite the learning curve. I believe in the hidden and unused value that data analysis can offer, and eToro offers - arguably unknowingly - a rare opportunity to build something of immense value.
156
-
157
- 6. To produce a series of computations that are of such value that they are provably predictive in their insight, and thus have financial value to offer a front-end user.
158
-
159
- [ Future Updates ]
160
-
161
- 1. Producing and exposing unit tests for each computation; these will be dynamically produced based on the contents of the computation.
162
- 2. Refining logging and building a custom tool for detecting problems across the codebase by creating a log sink that reads the results of the custom logging.log() function across the project.
163
- 3. Reviewing existing computations, refining the results and optimising computations, then developing pass 5 computations to prove that the signals of each signal-generating computation are alpha-generative.
164
- 4. General solidification of the codebase: marking aspects as TODO, abstracting remaining magic numbers into configurable variables, and applying dependency injection to the few remaining imports.
165
- 5. Building up the frontend integration, mapping data to charts and stress-testing edge cases in schema formats.
166
-
167
- For coverage of each cloud function, please refer to its individual documentation, linked below.
168
-
169
- [View Computation System Documentation](./docs/ComputationSystem.MD)
170
- [View Task Engine Documentation](./docs/TaskEngine.MD)
171
-
172
-
173
-
174
-
175
-
176
-
177
-
178
-
179
-
1
+ This is the directory for the cloud function entry points in the Bulltrackers Platform
2
+
3
+ Please refer to BullTrackers\Backend\documentation for the full documentation
@@ -1,53 +1,42 @@
1
1
  /**
2
- * @fileoverview Control Layer - Orchestrates Computation Execution
3
- * UPDATE: Imported and exposed HistoryExtractor in the math context.
4
- * UPDATE: Implemented strict User Type segregation. 'All' now defaults to 'Normal'.
2
+ * FIXED: computation_controller.js
3
+ * V3.2: Supports Streaming Trading History (todayHistory) separate from Yesterday's Portfolio.
5
4
  */
6
5
 
7
- const {
8
- DataExtractor,
9
- HistoryExtractor,
10
- MathPrimitives,
11
- Aggregators,
12
- Validators,
13
- SCHEMAS,
14
- SignalPrimitives
15
- } = require('../layers/math_primitives');
16
-
17
- const {
18
- loadDailyInsights,
19
- loadDailySocialPostInsights,
20
- getPortfolioPartRefs,
21
- getHistoryPartRefs,
22
- streamPortfolioData,
23
- streamHistoryData
6
+ const { DataExtractor,
7
+ HistoryExtractor,
8
+ MathPrimitives,
9
+ Aggregators,
10
+ Validators,
11
+ SCHEMAS,
12
+ SignalPrimitives,
13
+ DistributionAnalytics,
14
+ TimeSeries,
15
+ priceExtractor }
16
+ = require('../layers/math_primitives');
17
+
18
+ const { loadDailyInsights,
19
+ loadDailySocialPostInsights,
24
20
  } = require('../utils/data_loader');
25
21
 
26
- // ============================================================================
27
- // DATA LOADER WRAPPER
28
- // ============================================================================
29
-
30
22
  class DataLoader {
31
23
  constructor(config, dependencies) {
32
24
  this.config = config;
33
25
  this.deps = dependencies;
34
26
  this.cache = { mappings: null, insights: new Map(), social: new Map() };
35
27
  }
36
-
37
28
  async loadMappings() {
38
29
  if (this.cache.mappings) return this.cache.mappings;
39
30
  const { calculationUtils } = this.deps;
40
31
  this.cache.mappings = await calculationUtils.loadInstrumentMappings();
41
32
  return this.cache.mappings;
42
33
  }
43
-
44
34
  async loadInsights(dateStr) {
45
35
  if (this.cache.insights.has(dateStr)) return this.cache.insights.get(dateStr);
46
36
  const insights = await loadDailyInsights(this.config, this.deps, dateStr);
47
37
  this.cache.insights.set(dateStr, insights);
48
38
  return insights;
49
39
  }
50
-
51
40
  async loadSocial(dateStr) {
52
41
  if (this.cache.social.has(dateStr)) return this.cache.social.get(dateStr);
53
42
  const social = await loadDailySocialPostInsights(this.config, this.deps, dateStr);
@@ -56,42 +45,51 @@ class DataLoader {
56
45
  }
57
46
  }
58
47
 
59
- // ============================================================================
60
- // CONTEXT BUILDER
61
- // ============================================================================
62
-
63
48
  class ContextBuilder {
64
49
  static buildPerUserContext(options) {
65
- const {
66
- todayPortfolio, yesterdayPortfolio, todayHistory, yesterdayHistory,
67
- userId, userType, dateStr, metadata, mappings, insights, socialData,
68
- computedDependencies, config, deps
50
+ const {
51
+ todayPortfolio,
52
+ yesterdayPortfolio,
53
+ todayHistory,
54
+ yesterdayHistory,
55
+ userId,
56
+ userType,
57
+ dateStr,
58
+ metadata,
59
+ mappings,
60
+ insights,
61
+ socialData,
62
+ computedDependencies,
63
+ previousComputedDependencies,
64
+ config,
65
+ deps
69
66
  } = options;
70
67
 
71
68
  return {
72
- // User Identity & Data
73
69
  user: {
74
70
  id: userId,
75
71
  type: userType,
76
72
  portfolio: { today: todayPortfolio, yesterday: yesterdayPortfolio },
77
73
  history: { today: todayHistory, yesterday: yesterdayHistory }
78
74
  },
79
- // Global Time & Data
80
75
  date: { today: dateStr },
81
76
  insights: { today: insights?.today, yesterday: insights?.yesterday },
82
77
  social: { today: socialData?.today, yesterday: socialData?.yesterday },
83
- // Helpers
84
78
  mappings: mappings || {},
85
- math: {
79
+ math: { // Introduced a new computation class in the math primitives? Add it here. Then add it to meta context a little further down.
86
80
  extract: DataExtractor,
87
81
  history: HistoryExtractor,
88
82
  compute: MathPrimitives,
89
83
  aggregate: Aggregators,
90
84
  validate: Validators,
91
85
  signals: SignalPrimitives,
92
- schemas: SCHEMAS
86
+ schemas: SCHEMAS,
87
+ distribution : DistributionAnalytics,
88
+ TimeSeries: TimeSeries,
89
+ priceExtractor : priceExtractor
93
90
  },
94
91
  computed: computedDependencies || {},
92
+ previousComputed: previousComputedDependencies || {},
95
93
  meta: metadata,
96
94
  config,
97
95
  deps
@@ -100,8 +98,15 @@ class ContextBuilder {
100
98
 
101
99
  static buildMetaContext(options) {
102
100
  const {
103
- dateStr, metadata, mappings, insights, socialData,
104
- computedDependencies, config, deps
101
+ dateStr,
102
+ metadata,
103
+ mappings,
104
+ insights,
105
+ socialData,
106
+ computedDependencies,
107
+ previousComputedDependencies,
108
+ config,
109
+ deps
105
110
  } = options;
106
111
 
107
112
  return {
@@ -109,16 +114,20 @@ class ContextBuilder {
109
114
  insights: { today: insights?.today, yesterday: insights?.yesterday },
110
115
  social: { today: socialData?.today, yesterday: socialData?.yesterday },
111
116
  mappings: mappings || {},
112
- math: {
117
+ math: { // Introduced a new computation class in the math primitives? Add it here.
113
118
  extract: DataExtractor,
114
119
  history: HistoryExtractor,
115
120
  compute: MathPrimitives,
116
121
  aggregate: Aggregators,
117
122
  validate: Validators,
118
123
  signals: SignalPrimitives,
119
- schemas: SCHEMAS
124
+ schemas: SCHEMAS,
125
+ distribution: DistributionAnalytics,
126
+ TimeSeries: TimeSeries,
127
+ priceExtractor : priceExtractor
120
128
  },
121
129
  computed: computedDependencies || {},
130
+ previousComputed: previousComputedDependencies || {},
122
131
  meta: metadata,
123
132
  config,
124
133
  deps
@@ -126,10 +135,6 @@ class ContextBuilder {
126
135
  }
127
136
  }
128
137
 
129
- // ============================================================================
130
- // EXECUTOR
131
- // ============================================================================
132
-
133
138
  class ComputationExecutor {
134
139
  constructor(config, dependencies, dataLoader) {
135
140
  this.config = config;
@@ -137,66 +142,57 @@ class ComputationExecutor {
137
142
  this.loader = dataLoader;
138
143
  }
139
144
 
140
- async executePerUser(calcInstance, metadata, dateStr, portfolioData, historyData, computedDeps) {
145
+ /**
146
+ * UPDATED: Accepts yesterdayPortfolioData AND historyData separately.
147
+ */
148
+ async executePerUser(calcInstance, metadata, dateStr, portfolioData, yesterdayPortfolioData, historyData, computedDeps, prevDeps) {
141
149
  const { logger } = this.deps;
142
150
 
143
- // --------------------------------------------------------------------
144
- // 1. DETERMINE TARGET SCHEMA
145
- // --------------------------------------------------------------------
146
- // We strictly enforce separation here.
147
- // Unless 'speculator' is explicitly requested, we default to 'normal'.
148
- // This effectively deprecates 'all' by treating it as 'normal' for safety.
149
- const targetUserType = (metadata.userType === 'speculator')
150
- ? SCHEMAS.USER_TYPES.SPECULATOR
151
- : SCHEMAS.USER_TYPES.NORMAL;
152
-
151
+ // Fix for the 'all' userType discrepancy:
152
+ const targetUserType = metadata.userType; // 'all', 'normal', or 'speculator'
153
+
153
154
  const mappings = await this.loader.loadMappings();
154
- const insights = metadata.rootDataDependencies?.includes('insights')
155
- ? { today: await this.loader.loadInsights(dateStr) } : null;
155
+ const insights = metadata.rootDataDependencies?.includes('insights') ? { today: await this.loader.loadInsights(dateStr) } : null;
156
156
 
157
- // Loop through user batch
158
157
  for (const [userId, todayPortfolio] of Object.entries(portfolioData)) {
159
- const yesterdayPortfolio = historyData ? historyData[userId] : null;
158
+ // 1. Get Yesterday's Portfolio (if available)
159
+ const yesterdayPortfolio = yesterdayPortfolioData ? yesterdayPortfolioData[userId] : null;
160
160
 
161
- // ----------------------------------------------------------------
162
- // 2. IDENTIFY ACTUAL DATA TYPE
163
- // ----------------------------------------------------------------
164
- // We inspect the data structure to know what we are holding.
165
- const actualUserType = todayPortfolio.PublicPositions
166
- ? SCHEMAS.USER_TYPES.SPECULATOR
167
- : SCHEMAS.USER_TYPES.NORMAL;
168
-
169
- // ----------------------------------------------------------------
170
- // 3. STRICT GATEKEEPING
171
- // ----------------------------------------------------------------
172
- // If the computation asked for 'normal' (or 'all'), but we have a 'speculator', SKIP.
173
- // If the computation asked for 'speculator', but we have a 'normal', SKIP.
174
- if (targetUserType !== actualUserType) continue;
161
+ // 2. Get Today's Trading History (if available)
162
+ const todayHistory = historyData ? historyData[userId] : null;
163
+
164
+ const actualUserType = todayPortfolio.PublicPositions ? SCHEMAS.USER_TYPES.SPECULATOR : SCHEMAS.USER_TYPES.NORMAL;
165
+
166
+ // Filtering Logic
167
+ if (targetUserType !== 'all') {
168
+ const mappedTarget = (targetUserType === 'speculator') ? SCHEMAS.USER_TYPES.SPECULATOR : SCHEMAS.USER_TYPES.NORMAL;
169
+ if (mappedTarget !== actualUserType) continue;
170
+ }
175
171
 
176
172
  const context = ContextBuilder.buildPerUserContext({
177
173
  todayPortfolio, yesterdayPortfolio,
174
+ todayHistory, // Passed to context
178
175
  userId, userType: actualUserType, dateStr, metadata, mappings, insights,
179
- computedDependencies: computedDeps, config: this.config, deps: this.deps
176
+ computedDependencies: computedDeps,
177
+ previousComputedDependencies: prevDeps,
178
+ config: this.config, deps: this.deps
180
179
  });
181
180
 
182
- try {
183
- await calcInstance.process(context);
184
- } catch (e) {
185
- logger.log('WARN', `Calc ${metadata.name} failed for user ${userId}: ${e.message}`);
186
- }
181
+ try { await calcInstance.process(context); }
182
+ catch (e) { logger.log('WARN', `Calc ${metadata.name} failed for user ${userId}: ${e.message}`); }
187
183
  }
188
184
  }
189
185
 
190
- async executeOncePerDay(calcInstance, metadata, dateStr, computedDeps) {
186
+ async executeOncePerDay(calcInstance, metadata, dateStr, computedDeps, prevDeps) {
191
187
  const mappings = await this.loader.loadMappings();
192
- const insights = metadata.rootDataDependencies?.includes('insights')
193
- ? { today: await this.loader.loadInsights(dateStr) } : null;
194
- const social = metadata.rootDataDependencies?.includes('social')
195
- ? { today: await this.loader.loadSocial(dateStr) } : null;
188
+ const insights = metadata.rootDataDependencies?.includes('insights') ? { today: await this.loader.loadInsights(dateStr) } : null;
189
+ const social = metadata.rootDataDependencies?.includes('social') ? { today: await this.loader.loadSocial(dateStr) } : null;
196
190
 
197
191
  const context = ContextBuilder.buildMetaContext({
198
192
  dateStr, metadata, mappings, insights, socialData: social,
199
- computedDependencies: computedDeps, config: this.config, deps: this.deps
193
+ computedDependencies: computedDeps,
194
+ previousComputedDependencies: prevDeps,
195
+ config: this.config, deps: this.deps
200
196
  });
201
197
 
202
198
  return await calcInstance.process(context);
@@ -80,7 +80,7 @@ function findCycles(manifestMap, adjacencyList) {
80
80
  */
81
81
  function getDependencySet(endpoints, adjacencyList) {
82
82
  const required = new Set(endpoints);
83
- const queue = [...endpoints];
83
+ const queue = [...endpoints];
84
84
  while (queue.length > 0) { const calcName = queue.shift(); const dependencies = adjacencyList.get(calcName);
85
85
  if (dependencies) { for (const dep of dependencies) { if (!required.has(dep)) { required.add(dep); queue.push(dep); } } } }
86
86
  return required;
@@ -104,6 +104,7 @@ function buildManifest(productLinesToRun = [], calculations) {
104
104
  const reverseAdjacency = new Map();
105
105
  const inDegree = new Map();
106
106
  let hasFatalError = false;
107
+
107
108
  /* ---------------- 1. Load All Calculations ---------------- */
108
109
  log.step('Loading and validating all calculation classes…');
109
110
  const allCalculationClasses = new Map();
@@ -121,10 +122,12 @@ function buildManifest(productLinesToRun = [], calculations) {
121
122
  if (typeof Class.getDependencies !== 'function') { log.fatal(`Calculation "${normalizedName}" is missing the static getDependencies() method. Build FAILED.`); hasFatalError = true;return; }
122
123
  // --- RULE 3: Check for static getSchema() ---
123
124
  if (typeof Class.getSchema !== 'function') {log.warn(`Calculation "${normalizedName}" is missing the static getSchema() method. (Recommended)`); }
124
- const metadata = Class.getMetadata();
125
+ const metadata = Class.getMetadata();
125
126
  const dependencies = Class.getDependencies().map(normalizeName);
126
127
  // --- RULE 4: Check for isHistorical mismatch ---
127
- if (metadata.isHistorical === true && !Class.toString().includes('yesterdayPortfolio')) { log.warn(`Calculation "${normalizedName}" is marked 'isHistorical: true' but does not seem to use 'yesterdayPortfolio'.`); }
128
+ if (metadata.isHistorical === true && !Class.toString().includes('yesterday')) { // UPDATED FOR MATH LAYER, THIS IS A LITTLE BRITTLE, BUT FOR NOW FINE...
129
+ log.warn(`Calculation "${normalizedName}" is marked 'isHistorical: true' but does not seem to reference 'yesterday' data.`);
130
+ }
128
131
  const manifestEntry = {
129
132
  name: normalizedName,
130
133
  class: Class,
@@ -157,6 +160,7 @@ function buildManifest(productLinesToRun = [], calculations) {
157
160
 
158
161
  if (hasFatalError) { throw new Error('Manifest build failed due to missing static methods in calculations.'); }
159
162
  log.success(`Loaded and validated ${manifestMap.size} total calculations.`);
163
+
160
164
  /* ---------------- 2. Validate Dependency Links ---------------- */
161
165
  log.divider('Validating Dependency Links');
162
166
  const allNames = new Set(manifestMap.keys());
@@ -177,6 +181,7 @@ function buildManifest(productLinesToRun = [], calculations) {
177
181
  }
178
182
  if (invalidLinks) { throw new Error('Manifest validation failed. Fix missing or self-referencing dependencies.'); }
179
183
  log.success('All dependency links are valid.');
184
+
180
185
  /* ---------------- 3. Filter for Product Lines ---------------- */
181
186
  log.divider('Filtering by Product Line');
182
187
  // 1. Find all "endpoint" calculations (the final signals) in the target product lines.
@@ -189,17 +194,18 @@ function buildManifest(productLinesToRun = [], calculations) {
189
194
  log.info(`Identified ${productLineEndpoints.length} endpoint/core calculations.`);
190
195
  log.info(`Traced dependencies: ${requiredCalcs.size} total calculations are required.`);
191
196
  // 4. Create the final, filtered maps for sorting.
192
- const filteredManifestMap = new Map();
193
- const filteredInDegree = new Map();
197
+ const filteredManifestMap = new Map();
198
+ const filteredInDegree = new Map();
194
199
  const filteredReverseAdjacency = new Map();
195
200
  for (const name of requiredCalcs) { filteredManifestMap.set(name, manifestMap.get(name)); filteredInDegree.set(name, inDegree.get(name));
196
201
  const consumers = (reverseAdjacency.get(name) || []).filter(consumer => requiredCalcs.has(consumer)); filteredReverseAdjacency.set(name, consumers); }
197
202
  log.success(`Filtered manifest to ${filteredManifestMap.size} calculations.`);
203
+
198
204
  /* ---------------- 4. Topological Sort (Kahn's Algorithm) ---------------- */
199
205
  log.divider('Topological Sorting (Kahn\'s Algorithm)');
200
206
  const sortedManifest = [];
201
- const queue = [];
202
- let maxPass = 0;
207
+ const queue = [];
208
+ let maxPass = 0;
203
209
  for (const [name, degree] of filteredInDegree) { if (degree === 0) { queue.push(name); filteredManifestMap.get(name).pass = 1; maxPass = 1; } }
204
210
  queue.sort();
205
211
  while (queue.length) {
@@ -216,6 +222,7 @@ function buildManifest(productLinesToRun = [], calculations) {
216
222
  const cycles = findCycles(filteredManifestMap, adjacency);
217
223
  for (const c of cycles) log.error('Cycle: ' + c.join(' → '));
218
224
  throw new Error('Circular dependency detected. Manifest build failed.'); }
225
+
219
226
  /* ---------------- 5. Summary ---------------- */
220
227
  log.divider('Manifest Summary');
221
228
  log.success(`Total calculations in build: ${sortedManifest.length}`);
@@ -251,10 +258,6 @@ async function generateSvgGraph(manifest, filename = 'dependency-tree.svg') {
251
258
  dot += '}\n';
252
259
  try {
253
260
  const svg = await viz.renderString(dot, { format: 'svg' });
254
-
255
- // --- MODIFIED PATH ---
256
- // Writes to the root of the 'bulltrackers-module' package directory,
257
- // which is a safer, more predictable location.
258
261
  const out = path.join('/tmp', filename);
259
262
  fs.writeFileSync(out, svg);
260
263
  log.success(`Dependency tree generated at ${out}`);
@@ -270,21 +273,13 @@ async function generateSvgGraph(manifest, filename = 'dependency-tree.svg') {
270
273
  */
271
274
  function build(productLinesToRun, calculations) {
272
275
  try {
273
- // This function MUST NOT fail or generate SVGs.
274
- // It has one job: build and return the manifest array.
275
276
  const manifest = buildManifest(productLinesToRun, calculations);
276
277
  return manifest; // Returns the array
277
278
  } catch (error) {
278
279
  log.error(error.message);
279
- // If buildManifest fails, it will throw.
280
- // We return null so the caller can handle the failure.
281
280
  return null;
282
281
  }
283
282
  }
284
283
 
285
- // --- REMOVED ---
286
- // The entire 'if (require.main === module)' block has been removed
287
- // to prevent any local 'require' conflicts. This file is now a pure
288
- // module and must be called by another script.
289
284
 
290
- module.exports = { build, generateSvgGraph }; // <-- Also exporting generateSvgGraph
285
+ module.exports = { build, generateSvgGraph };