solana-traderclaw 1.0.90 → 1.0.92

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,7 +1,7 @@
  // OpenClaw Gateway Configuration — TraderClaw V1-Upgraded (Single Agent)
- // Single "main" agent with 5-minute heartbeat + cron jobs for autonomous operation.
- // V1-Upgraded: All cron jobs use sessionTarget:"isolated" + delivery:{mode:"none"}.
- // New cron jobs: intelligence_lab_eval (12h), source_trust_refresh (6h).
+ // Single "main" agent with 5-minute heartbeat + 10 consolidated cron jobs.
+ // Consolidated from 13 → 10: portfolio-health (dead-money+whale+risk-audit),
+ // trust-refresh (source-reputation+deployer-trust). Shortened prompts.
  {
  agents: {
  list: [
@@ -22,6 +22,9 @@
  keepLines: 2000
  },
  jobs: [
+ <<<<<<< feat/cron-jobs-upgrade
+ // ── Alpha Scanning ──────────────────────────────────────────────
+ =======
  // ── Strategy & Learning ───────────────────────────────────────
  {
  id: "strategy-evolution",
@@ -32,119 +35,113 @@
  message: "CRON_JOB: strategy_evolution\n\nStep 1: Call solana_journal_summary to get aggregate performance stats (win rate, avg PnL, trade count). If fewer than 10 closed trades since the last strategy evolution, skip weight updates but still run pattern detection.\n\nStep 2: Call solana_memory_search for 'strategy_evolution' — find last 3 evolution cycle results. Call solana_memory_search for 'strategy_drift_warning' — find drift warnings since last evolution. Call solana_memory_search for 'pre_trade_rationale' — recent decision patterns.\n\nStep 3: Run Recurring Pattern Detection — search for learning_entry tags, group by area, check linked chains (3+ = confirmed pattern), investigate drift warnings.\n\nStep 4: Call solana_strategy_state to read current feature weights.\n\nStep 5: Call solana_trades to get recent closed trades. Apply ADL checks (direction consistency, weight velocity, reversion check).\n\nStep 6: Compute proposed weight changes. Score each with VFM (Frequency + Failure Reduction + Self-Cost). Only apply changes scoring >= 3/5.\n\nStep 7: Verify guardrails: maxDeltaOk, sumWeightsOk, minTradesOk, floorCapOk. If all pass, call solana_strategy_update with incremented version.\n\nStep 8: Run Named Pattern Recognition — search for winning trade clusters, catalog new patterns, evolve existing ones.\n\nStep 9: Evaluate discovery filter performance. Log all results via solana_memory_write with tags: strategy_evolution, vfm_scorecard, pattern_detection, named_pattern.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Do not execute trades. Do not ask questions.",
  enabled: true
  },
+ >>>>>>> main
  {
- id: "source-reputation",
+ id: "alpha-scan",
  schedule: "0 */3 * * *",
  agentId: "main",
  sessionTarget: "isolated",
- delivery: { mode: "none" },
- message: "CRON_JOB: source_reputation_recalc\n\nStep 1: Call solana_alpha_sources to get per-source performance stats (signal count, conversion rate, avg score).\n\nStep 2: Call solana_alpha_history to get recent signal history with scores and source identifiers.\n\nStep 3: Call solana_trades to get recent trade outcomes. Cross-reference each trade back to its originating signal source.\n\nStep 4: For each source, calculate: win rate (trades that hit TP vs SL), average PnL per trade, signal-to-trade conversion rate.\n\nStep 5: Assign tier rankings:\n- TIER-1 (LOCK): Win rate above 60% AND 5+ trades AND positive avg PnL\n- TIER-2 (CONDITIONAL): Win rate 30-60% OR fewer than 5 trades\n- TIER-3 (BLACKLIST): Win rate below 30% with 5+ trades\n\nStep 6: Write scorecard to memory using solana_memory_write with tag 'source_reputation'.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Do not execute trades. Do not ask questions.",
+ delivery: { mode: "announce", channel: "last", bestEffort: true },
+ message: "CRON_JOB: alpha_scan\n\nScan new launches, filter, score, log alpha. Tools: solana_scan_launches filter (vol>30K, mcap>10K, liq>5K) solana_token_snapshot for survivors quality filter (top10 <50%, deployer <3 abandoned, has social) score 0-100 solana_alpha_log for 65+. Summarize results.",
  enabled: true
  },

- // ── Risk & Audit ──────────────────────────────────────────────
+ // ── Portfolio Health (combined dead-money + whale + risk audit) ──
  {
- id: "risk-audit",
- schedule: "0 */2 * * *",
+ id: "portfolio-health",
+ schedule: "0 */4 * * *",
  agentId: "main",
  sessionTarget: "isolated",
- delivery: { mode: "none" },
- message: "CRON_JOB: portfolio_risk_audit\n\nStep 1: Call solana_capital_status to get wallet balance and portfolio value.\n\nStep 2: Call solana_positions to get all open positions with entry prices and sizes.\n\nStep 3: For each open position, call solana_token_snapshot to get current price, 24h volume, and market cap.\n\nStep 4: Run concentration check — flag WARNING if any single position exceeds 30 percent of total portfolio value, CRITICAL if above 50 percent.\n\nStep 5: Run exposure check — flag WARNING if total exposure exceeds 50 percent of wallet balance, CRITICAL if above 75 percent.\n\nStep 6: Run drawdown check — CRITICAL if portfolio drawdown exceeds 25 percent from peak capital.\n\nStep 7: Calculate portfolio heat (sum of all position risk scores). Flag WARNING above 50 percent, CRITICAL above 75 percent.\n\nStep 8: Run liquidity check — WARNING if any position exceeds 2 percent of its pool depth.\n\nStep 9: Check solana_killswitch_status.\n\nStep 10: Write risk report via solana_memory_write with tag 'risk_audit'.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Do not execute trades. Do not ask questions.",
+ delivery: { mode: "announce", channel: "last", bestEffort: true },
+ message: "CRON_JOB: portfolio_health\n\nCombined dead-money + whale + risk audit. solana_capital_status + solana_positions solana_token_snapshot per position dead money exit (loss>40% or 90min+down+low vol) whale flags (>5% supply moves) risk checks (concentration/drawdown/exposure) sell if CRITICAL solana_memory_write tag 'portfolio_health'.",
  enabled: true
  },

- // ── On-Chain Intelligence ─────────────────────────────────────
+ // ── Trust Refresh (combined source + deployer trust) ────────────
  {
- id: "meta-rotation",
- schedule: "30 */3 * * *",
+ id: "trust-refresh",
+ schedule: "0 */8 * * *",
  agentId: "main",
  sessionTarget: "isolated",
  delivery: { mode: "none" },
- message: "CRON_JOB: meta_rotation_analysis\n\nStep 0: Call x_search_tweets with queries: 'solana memecoin', 'pump fun gem', 'sol alpha'. Note which token names and narratives appear most frequently in the last 3 hours. Use this social signal to validate or challenge the on-chain data in the following steps.\n\nStep 1: Call solana_scan_launches to get recent token launches (last 3-6 hours).\n\nStep 2: Categorize each token by narrative cluster: AI/Agents, Animal Memes, Political, Celebrity/IP, DeFi, Gaming, Culture/Humor, Other.\n\nStep 3: For each cluster, aggregate: token count, total volume, average market cap.\n\nStep 4: Call solana_memory_search for 'meta_rotation' to compare with prior scan.\n\nStep 5: Classify each narrative: GAINING, SATURATED, COOLING, DORMANT.\n\nStep 6: Write rotation report via solana_memory_write with tag 'meta_rotation'.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Do not execute trades. Do not ask questions.",
+ message: "CRON_JOB: trust_refresh\n\nCombined source + deployer trust. solana_source_trust_refresh + solana_deployer_trust_refresh solana_alpha_sources + solana_trades for win rates solana_source_trust_get + solana_deployer_trust_get, flag <30 solana_memory_write tag 'trust_refresh'.",
  enabled: true
  },

- // ── Portfolio Maintenance ─────────────────────────────────────
- {
- id: "dead-money-sweep",
- schedule: "0 */2 * * *",
- agentId: "main",
- sessionTarget: "isolated",
- delivery: { mode: "none" },
- message: "CRON_JOB: dead_money_sweep\n\nStep 1: Call solana_positions to get all open LOCAL_MANAGED positions.\n\nStep 2: For each position, call solana_token_snapshot for current price and 24h volume.\n\nStep 3: Apply dead money criteria (ALL four must be true):\n- Loss > 40%\n- Held 90+ min AND still down 5%+\n- 24h volume < $5,000\n- Price flat (±5%) for 4+ hours\n\nStep 4: For flagged positions, execute exit via solana_trade_execute with side 'sell', max slippage 2000bps.\n\nStep 5: Call solana_trade_review for each exit.\n\nStep 6: Write sweep report via solana_memory_write with tag 'dead_money'.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Every TX MUST include solscan link.",
- enabled: true
- },
+ // ── On-Chain Intelligence ───────────────────────────────────────
  {
- id: "subscription-cleanup",
- schedule: "0 * * * *",
+ id: "meta-rotation",
+ schedule: "30 */8 * * *",
  agentId: "main",
  sessionTarget: "isolated",
- delivery: { mode: "none" },
- message: "CRON_JOB: subscription_cleanup\n\nStep 1: Call solana_positions to get all open positions and extract their contract addresses.\n\nStep 2: Call solana_bitquery_subscriptions to list all active Bitquery subscriptions. If this call returns an AUTH_SCOPE_MISSING error, log the error to memory and stop gracefully — do not retry or error out.\n\nStep 3: For each active subscription, check if the associated token CA still has an open position. Build two lists: 'matched' (has position) and 'orphaned' (no position).\n\nStep 4: Unsubscribe orphans via solana_bitquery_unsubscribe.\n\nStep 5: Reopen subscriptions nearing 24h expiry via solana_bitquery_subscription_reopen.\n\nStep 6: Write summary via solana_memory_write with tag 'subscription_cleanup'.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Do not execute trades. Do not ask questions.",
+ delivery: { mode: "announce", channel: "last", bestEffort: true },
+ message: "CRON_JOB: meta_rotation_analysis\n\nx_search_tweets trending topics solana_scan_launches categorize by narrative cluster per-cluster metrics compare vs solana_memory_search tag 'meta_rotation' declare hot/fading clusters solana_memory_write tag 'meta_rotation'.",
  enabled: true
  },

- // ── Reporting ─────────────────────────────────────────────────
+ // ── Strategy & Learning ─────────────────────────────────────────
  {
- id: "daily-report",
- schedule: "0 4 * * *",
+ id: "strategy-evolution",
+ schedule: "0 6 * * *",
  agentId: "main",
  sessionTarget: "isolated",
- delivery: { mode: "none" },
- message: "CRON_JOB: daily_performance_report\n\nStep 1: Call solana_journal_summary for last 24h aggregate stats.\n\nStep 2: Call solana_capital_status for current balance.\n\nStep 3: Call solana_positions for open positions.\n\nStep 4: Call solana_trades for all trades in last 24h.\n\nStep 5: Call solana_strategy_state for current version and weights.\n\nStep 6: Call solana_memory_search for 'source_reputation' and 'daily_report' (yesterday's for comparison).\n\nStep 7: Build comprehensive report: executive summary, trade log, best/worst trades, source performance, capital trajectory, strategy status, open positions.\n\nStep 8: Write report via solana_memory_write with tag 'daily_performance'.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Every trade MUST include solscan TX link.\n- Do not execute trades. Do not ask questions.",
+ delivery: { mode: "announce", channel: "last", bestEffort: true },
+ message: "CRON_JOB: strategy_evolution\n\nDaily strategy review. solana_journal_summary if <10 closed trades since last evolution, log 'insufficient data' and stop. Otherwise: solana_trades to bucket by confidence tier solana_strategy_state for current weights analyze tier performance solana_strategy_update with conservative adjustments (max 10% per weight per cycle) solana_memory_write tag 'strategy_evolution'.",
  enabled: true
  },

- // ── Whale / Smart Money Tracking ──────────────────────────────
+ // ── Portfolio Maintenance ───────────────────────────────────────
  {
- id: "whale-watch",
- schedule: "0 */2 * * *",
+ id: "subscription-cleanup",
+ schedule: "15 */8 * * *",
  agentId: "main",
  sessionTarget: "isolated",
- delivery: { mode: "none" },
- message: "CRON_JOB: whale_activity_scan\n\nStep 1: Call solana_positions to get open position CAs.\n\nStep 2: For each held token, call solana_token_holders for top 10 distribution.\n\nStep 3: Call solana_memory_search for 'whale_activity' baseline.\n\nStep 4: Compare distributions. Flag: NEW_WHALE, WHALE_EXIT, DEPLOYER_MOVE, CONCENTRATION_SPIKE, FRESH_WALLET_SURGE.\n\nStep 5: Assign priority: HIGH for whale exit/deployer move on held positions, MEDIUM for concentration changes, LOW for watchlist-only.\n\nStep 6: Write findings via solana_memory_write with tag 'whale_activity'.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Do not execute trades. Do not ask questions.",
+ delivery: { mode: "announce", channel: "last", bestEffort: true },
+ message: "CRON_JOB: subscription_cleanup\n\nsolana_positions for open CAs solana_bitquery_subscriptions for active subs (if AUTH_SCOPE_MISSING, log and stop) match subs to positions solana_bitquery_unsubscribe orphaned subs solana_memory_write tag 'subscription_cleanup'. Summarize before/after counts.",
  enabled: true
  },

- // ── Alpha Scanning ──────────────────────────────────────────────
+ // ── Reporting ───────────────────────────────────────────────────
  {
- id: "alpha-scan",
- schedule: "0 * * * *",
+ id: "daily-performance-report",
+ schedule: "0 4 * * *",
  agentId: "main",
  sessionTarget: "isolated",
- delivery: { mode: "none" },
- message: "CRON_JOB: alpha_scan\n\nStep 1: Call solana_scan_launches to find new token launches from the last hour. The response includes volume data per token — use THIS volume as the primary filter (not solana_token_snapshot volumeUsd24h, which often returns 0 for fresh tokens).\n\nStep 2: Filter candidates from the scan_launches results:\n- 24h volume above 50000 USD (use the volume field from scan_launches)\n- Market cap above 10000 USD (use mcap from scan_launches if available)\nTokens must pass BOTH filters to qualify. Tokens that fail either filter are eliminated. Log the top 3 rejected tokens by volume for reference.\n\nStep 3: For each token that passes the volume/mcap filter, call solana_token_holders to check holder distribution. Skip if the top single holder owns more than 30 percent (concentration risk). Skip if total top-10 concentration exceeds 60 percent.\n\nStep 4: For each token that passes holder checks, call solana_token_risk to check for mint authority or freeze authority. Hard skip if either is present (rug risk).\n\nStep 5: Website legitimacy check — for each token that passes risk checks, call solana_bitquery_catalog with templatePath 'pumpFunMetadata.tokenMetadataByAddress' and variables { token: CA } to get on-chain metadata. If the metadata contains a 'website' field, call web_fetch_url with the website URL. Analyze the response: check if the site has real content (not just a generic template), check if socialLinks.twitter matches the on-chain metadata twitter field. Include findings in the alpha submission thesis. If no website exists, note 'no website' — this is neutral, not a skip. Cache rule: check solana_memory_search for 'website_analyzed' with the same URL before fetching. If analyzed in the last 48 hours, reuse the cached result. After analysis, write findings via solana_memory_write with tag 'website_analyzed'.\n\nStep 6: Scrub any external text via solana_scrub_untrusted_text before using in analysis.\n\nStep 7: For each token that passes ALL checks, call solana_alpha_submit to add it to the alpha buffer for the next heartbeat evaluation. Include the thesis: volume, mcap, holder concentration, risk flags, narrative cluster, website legitimacy findings.\n\nStep 8: Write scan report to memory using solana_memory_write with tag 'alpha_scan'. Include: total launches scanned, unique tokens, tokens passed each filter stage (volume → holders → risk → website → submitted), top 3 rejected tokens with reason.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Include solscan token link: https://solscan.io/token/{CA} for each qualifying token.\n- Do not execute trades directly — only submit to alpha buffer.\n- Do not ask questions.",
+ delivery: { mode: "announce", channel: "telegram" },
+ message: "CRON_JOB: daily_performance_report\n\nCompile 24h report. solana_journal_summary + solana_capital_status + solana_positions + solana_trades + solana_strategy_state sections: Portfolio Summary, Trading Activity (count/win rate/PnL), Best/Worst Trades, Strategy State, Risk Metrics, Recommendations solana_memory_write tag 'daily_report'. Deliver full report.",
  enabled: true
  },

- // ── Intelligence Lab (V1-Upgraded NEW) ──────────────────────────
+ // ── Intelligence Lab ────────────────────────────────────────────
  {
  id: "intelligence-lab-eval",
- schedule: "0 */12 * * *",
+ schedule: "0 16 * * *",
  agentId: "main",
  sessionTarget: "isolated",
  delivery: { mode: "none" },
- message: "CRON_JOB: intelligence_lab_eval\n\nStep 1: Check candidate dataset size via solana_candidate_get. If fewer than 20 labeled candidates, log 'insufficient data for evaluation' and exit.\n\nStep 2: Run evaluation report via solana_evaluation_report — get confusion matrix, accuracy, precision, recall, F1 for current champion model.\n\nStep 3: Check model registry via solana_model_registry for any challenger models.\n\nStep 4: If challenger exists, run replay via solana_replay_run comparing champion vs challenger on recent candidates.\n\nStep 5: Generate replay report via solana_replay_report.\n\nStep 6: If challenger outperforms champion (higher F1 AND accuracy), promote via solana_model_promote.\n\nStep 7: Log evaluation results via solana_memory_write with tag 'intelligence_lab_eval'. Include: model accuracy, F1, promotion decision, dataset size.\n\nDo not execute trades. Do not ask questions.",
+ message: "CRON_JOB: intelligence_lab_eval\n\nsolana_candidate_get if <20 labeled candidates, log 'insufficient data' and exit. Otherwise: solana_evaluation_report solana_model_registry for challengers solana_replay_eval if challenger exists solana_model_promote if challenger beats champion by >5% F1 solana_memory_write tag 'intelligence_lab'.",
  enabled: true
  },
+
+ // ── Memory Maintenance ──────────────────────────────────────────
  {
- id: "source-trust-refresh",
- schedule: "0 */6 * * *",
+ id: "memory-trim",
+ schedule: "0 3 * * *",
  agentId: "main",
  sessionTarget: "isolated",
  delivery: { mode: "none" },
- message: "CRON_JOB: source_trust_refresh\n\nStep 1: Call solana_source_trust_refresh to recalculate trust scores for all tracked alpha sources based on recent trade outcomes.\n\nStep 2: Call solana_deployer_trust_refresh to recalculate deployer trust scores based on recent token performance.\n\nStep 3: Call solana_source_trust_get to read updated scores. Flag any sources that dropped below 30 trust score.\n\nStep 4: Call solana_deployer_trust_get to read deployer scores. Flag any deployers with 3+ failed tokens.\n\nStep 5: Log trust score updates via solana_memory_write with tag 'trust_refresh'. Include: sources updated, deployers updated, flagged sources/deployers.\n\nDo not execute trades. Do not ask questions.",
+ message: "CRON_JOB: memory_trim\n\nsolana_memory_trim dryRun:true first review solana_memory_trim retentionDays:2 solana_memory_write tag 'memory_trim' with summary.",
  enabled: true
  },

- // ── Memory Maintenance ──────────────────────────────────────────
+ // ── Balance Watchdog ────────────────────────────────────────────
  {
- id: "memory-trim",
- schedule: "0 3 * * *",
+ id: "balance-watchdog",
+ schedule: "0 */2 * * *",
  agentId: "main",
  sessionTarget: "isolated",
- delivery: { mode: "none" },
- message: "CRON_JOB: memory_trim\n\nStep 1: Call solana_memory_trim with dryRun true first to preview what will be trimmed.\n\nStep 2: Review the dry run summary. Verify no critical data is flagged for removal.\n\nStep 3: Call solana_memory_trim with retentionDays 2 to execute the trim.\n\nStep 4: Log the trim summary via solana_memory_write with tag 'memory_trim'. Include: daily logs deleted, state keys pruned, watchlist entries trimmed, decision entries trimmed, bulletin entries trimmed.\n\nDo not execute trades. Do not ask questions.",
+ delivery: { mode: "announce", channel: "telegram" },
+ message: "Balance watchdog. 1) solana_capital_status 2) solana_positions 3) solana_context_snapshot_read 4) Compare real vs believed. If mismatch: solana_context_snapshot_write with corrected state, summarize changes. If match: reply WATCHDOG_OK.",
  enabled: true
  }
  ]
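The rescheduling above staggers the three 8-hourly jobs by minute offset (trust-refresh at minute 0, subscription-cleanup at minute 15, meta-rotation at minute 30), so they never fire in the same minute. As a quick sanity check, here is a minimal sketch assuming plain five-field cron semantics; `dailyFireTimes` is a hypothetical helper, not a package export:

```javascript
// Hypothetical helper (not part of the package): expand the minute and
// hour fields of the two cron forms used in this config ("N" and "*/N")
// into a list of daily fire times, so schedule collisions are easy to spot.
function dailyFireTimes(cron) {
  const [minute, hour] = cron.split(" ");
  const step = hour.startsWith("*/") ? Number(hour.slice(2)) : null;
  const hours = step === null
    ? [Number(hour)]
    : Array.from({ length: 24 }, (_, h) => h).filter((h) => h % step === 0);
  const mm = String(Number(minute)).padStart(2, "0");
  return hours.map((h) => `${String(h).padStart(2, "0")}:${mm}`);
}

console.log(dailyFireTimes("0 */8 * * *"));  // trust-refresh: 00:00, 08:00, 16:00
console.log(dailyFireTimes("15 */8 * * *")); // subscription-cleanup: 00:15, 08:15, 16:15
console.log(dailyFireTimes("30 */8 * * *")); // meta-rotation: 00:30, 08:30, 16:30
```

The same check shows memory-trim (`0 3 * * *`), daily-performance-report (`0 4 * * *`), strategy-evolution (`0 6 * * *`) and intelligence-lab-eval (`0 16 * * *`) each landing on a distinct hour.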
@@ -1,7 +1,7 @@
- // OpenClaw Gateway Configuration — TraderClaw V1-Upgraded (Single Agent)
- // Single "main" agent with 5-minute heartbeat + cron jobs for autonomous operation.
- // V1-Upgraded: All cron jobs use sessionTarget:"isolated" + delivery:{mode:"none"}.
- // New cron jobs: intelligence_lab_eval (12h), source_trust_refresh (6h).
+ // OpenClaw Gateway Configuration — TraderClaw V1 (Single Agent)
+ // Single "main" agent with 5-minute heartbeat + 10 consolidated cron jobs.
+ // Consolidated from 13 → 10: portfolio-health (dead-money+whale+risk-audit),
+ // trust-refresh (source-reputation+deployer-trust). Shortened prompts.
  {
  agents: {
  list: [
@@ -22,118 +22,113 @@
  keepLines: 2000
  },
  jobs: [
- // ── Strategy & Learning ───────────────────────────────────────
+ // ── Alpha Scanning ──────────────────────────────────────────────
  {
- id: "strategy-evolution",
- schedule: "0 */4 * * *",
+ id: "alpha-scan",
+ schedule: "0 */3 * * *",
  agentId: "main",
  sessionTarget: "isolated",
- delivery: { mode: "none" },
- message: "CRON_JOB: strategy_evolution\n\nStep 1: Call solana_journal_summary to get aggregate performance stats (win rate, avg PnL, trade count). If fewer than 20 closed trades since the last strategy evolution, skip weight updates but still run pattern detection.\n\nStep 2: Call solana_memory_search for 'strategy_evolution' — find last 3 evolution cycle results. Call solana_memory_search for 'strategy_drift_warning' — find drift warnings since last evolution. Call solana_memory_search for 'pre_trade_rationale' — recent decision patterns.\n\nStep 3: Run Recurring Pattern Detection — search for learning_entry tags, group by area, check linked chains (3+ = confirmed pattern), investigate drift warnings.\n\nStep 4: Call solana_strategy_state to read current feature weights.\n\nStep 5: Call solana_trades to get recent closed trades. Apply ADL checks (direction consistency, weight velocity, reversion check).\n\nStep 6: Compute proposed weight changes. Score each with VFM (Frequency + Failure Reduction + Self-Cost). Only apply changes scoring >= 3/5.\n\nStep 7: Verify guardrails: maxDeltaOk, sumWeightsOk, minTradesOk, floorCapOk. If all pass, call solana_strategy_update with incremented version.\n\nStep 8: Run Named Pattern Recognition — search for winning trade clusters, catalog new patterns, evolve existing ones.\n\nStep 9: Evaluate discovery filter performance. Log all results via solana_memory_write with tags: strategy_evolution, vfm_scorecard, pattern_detection, named_pattern.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Do not execute trades. Do not ask questions.",
+ delivery: { mode: "announce", channel: "last", bestEffort: true },
+ message: "CRON_JOB: alpha_scan\n\nScan new launches, filter, score, log alpha. Tools: solana_scan_launches filter (vol>30K, mcap>10K, liq>5K) solana_token_snapshot for survivors quality filter (top10 <50%, deployer <3 abandoned, has social) score 0-100 solana_alpha_log for 65+. Summarize results.",
  enabled: true
  },
+
+ // ── Portfolio Health (combined dead-money + whale + risk audit) ──
  {
- id: "source-reputation",
- schedule: "0 */3 * * *",
+ id: "portfolio-health",
+ schedule: "0 */4 * * *",
  agentId: "main",
  sessionTarget: "isolated",
- delivery: { mode: "none" },
- message: "CRON_JOB: source_reputation_recalc\n\nStep 1: Call solana_alpha_sources to get per-source performance stats (signal count, conversion rate, avg score).\n\nStep 2: Call solana_alpha_history to get recent signal history with scores and source identifiers.\n\nStep 3: Call solana_trades to get recent trade outcomes. Cross-reference each trade back to its originating signal source.\n\nStep 4: For each source, calculate: win rate (trades that hit TP vs SL), average PnL per trade, signal-to-trade conversion rate.\n\nStep 5: Assign tier rankings:\n- TIER-1 (LOCK): Win rate above 60% AND 5+ trades AND positive avg PnL\n- TIER-2 (CONDITIONAL): Win rate 30-60% OR fewer than 5 trades\n- TIER-3 (BLACKLIST): Win rate below 30% with 5+ trades\n\nStep 6: Write scorecard to memory using solana_memory_write with tag 'source_reputation'.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Do not execute trades. Do not ask questions.",
+ delivery: { mode: "announce", channel: "last", bestEffort: true },
+ message: "CRON_JOB: portfolio_health\n\nCombined dead-money + whale + risk audit. solana_capital_status + solana_positions solana_token_snapshot per position dead money exit (loss>40% or 90min+down+low vol) whale flags (>5% supply moves) risk checks (concentration/drawdown/exposure) sell if CRITICAL solana_memory_write tag 'portfolio_health'.",
  enabled: true
  },

- // ── Risk & Audit ──────────────────────────────────────────────
+ // ── Trust Refresh (combined source + deployer trust) ────────────
  {
- id: "risk-audit",
- schedule: "0 */2 * * *",
+ id: "trust-refresh",
+ schedule: "0 */8 * * *",
  agentId: "main",
  sessionTarget: "isolated",
  delivery: { mode: "none" },
- message: "CRON_JOB: portfolio_risk_audit\n\nStep 1: Call solana_capital_status to get wallet balance and portfolio value.\n\nStep 2: Call solana_positions to get all open positions with entry prices and sizes.\n\nStep 3: For each open position, call solana_token_snapshot to get current price, 24h volume, and market cap.\n\nStep 4: Run concentration check — flag WARNING if any single position exceeds 30 percent of total portfolio value, CRITICAL if above 50 percent.\n\nStep 5: Run exposure check — flag WARNING if total exposure exceeds 50 percent of wallet balance, CRITICAL if above 75 percent.\n\nStep 6: Run drawdown check — CRITICAL if portfolio drawdown exceeds 25 percent from peak capital.\n\nStep 7: Calculate portfolio heat (sum of all position risk scores). Flag WARNING above 50 percent, CRITICAL above 75 percent.\n\nStep 8: Run liquidity check — WARNING if any position exceeds 2 percent of its pool depth.\n\nStep 9: Check solana_killswitch_status.\n\nStep 10: Write risk report via solana_memory_write with tag 'risk_audit'.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Do not execute trades. Do not ask questions.",
+ message: "CRON_JOB: trust_refresh\n\nCombined source + deployer trust. solana_source_trust_refresh + solana_deployer_trust_refresh solana_alpha_sources + solana_trades for win rates solana_source_trust_get + solana_deployer_trust_get, flag <30 solana_memory_write tag 'trust_refresh'.",
  enabled: true
  },

- // ── On-Chain Intelligence ─────────────────────────────────────
+ // ── On-Chain Intelligence ───────────────────────────────────────
  {
  id: "meta-rotation",
- schedule: "30 */3 * * *",
+ schedule: "30 */8 * * *",
  agentId: "main",
  sessionTarget: "isolated",
- delivery: { mode: "none" },
- message: "CRON_JOB: meta_rotation_analysis\n\nStep 1: Call solana_scan_launches to get recent launches (last 3-6 hours).\n\nStep 2: Categorize each token by narrative cluster: AI/Agents, Animal Memes, Political, Celebrity/IP, DeFi, Gaming, Culture/Humor, Other.\n\nStep 3: For each cluster, aggregate: token count, total volume, average market cap.\n\nStep 4: Call solana_memory_search for 'meta_rotation' to compare with prior scan.\n\nStep 5: Classify each narrative: GAINING, SATURATED, COOLING, DORMANT.\n\nStep 6: Write rotation report via solana_memory_write with tag 'meta_rotation'.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Do not execute trades. Do not ask questions.",
+ delivery: { mode: "announce", channel: "last", bestEffort: true },
+ message: "CRON_JOB: meta_rotation_analysis\n\nx_search_tweets trending topics solana_scan_launches categorize by narrative cluster per-cluster metrics compare vs solana_memory_search tag 'meta_rotation' declare hot/fading clusters solana_memory_write tag 'meta_rotation'.",
  enabled: true
  },

- // ── Portfolio Maintenance ─────────────────────────────────────
+ // ── Strategy & Learning ─────────────────────────────────────────
  {
- id: "dead-money-sweep",
- schedule: "0 */2 * * *",
+ id: "strategy-evolution",
+ schedule: "0 6 * * *",
  agentId: "main",
  sessionTarget: "isolated",
- delivery: { mode: "none" },
- message: "CRON_JOB: dead_money_sweep\n\nStep 1: Call solana_positions to get all open LOCAL_MANAGED positions.\n\nStep 2: For each position, call solana_token_snapshot for current price and 24h volume.\n\nStep 3: Apply dead money criteria (ALL four must be true):\n- Loss > 40%\n- Held 90+ min AND still down 5%+\n- 24h volume < $5,000\n- Price flat (±5%) for 4+ hours\n\nStep 4: For flagged positions, execute exit via solana_trade_execute with side 'sell', max slippage 2000bps.\n\nStep 5: Call solana_trade_review for each exit.\n\nStep 6: Write sweep report via solana_memory_write with tag 'dead_money'.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Every TX MUST include solscan link.",
+ delivery: { mode: "announce", channel: "last", bestEffort: true },
+ message: "CRON_JOB: strategy_evolution\n\nDaily strategy review. solana_journal_summary if <10 closed trades since last evolution, log 'insufficient data' and stop. Otherwise: solana_trades to bucket by confidence tier solana_strategy_state for current weights analyze tier performance solana_strategy_update with conservative adjustments (max 10% per weight per cycle) solana_memory_write tag 'strategy_evolution'.",
  enabled: true
  },
+
+ // ── Portfolio Maintenance ───────────────────────────────────────
  {
  id: "subscription-cleanup",
- schedule: "0 * * * *",
+ schedule: "15 */8 * * *",
  agentId: "main",
  sessionTarget: "isolated",
- delivery: { mode: "none" },
- message: "CRON_JOB: subscription_cleanup\n\nStep 1: Call solana_positions to get all open positions and extract their contract addresses.\n\nStep 2: Call solana_bitquery_subscriptions to list all active Bitquery subscriptions. If this call returns an AUTH_SCOPE_MISSING error, log the error to memory and stop gracefully — do not retry or error out.\n\nStep 3: For each active subscription, check if the associated token CA still has an open position. Build two lists: 'matched' (has position) and 'orphaned' (no position).\n\nStep 4: Unsubscribe orphans via solana_bitquery_unsubscribe.\n\nStep 5: Reopen subscriptions nearing 24h expiry via solana_bitquery_subscription_reopen.\n\nStep 6: Write summary via solana_memory_write with tag 'subscription_cleanup'.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Do not execute trades. Do not ask questions.",
+ delivery: { mode: "announce", channel: "last", bestEffort: true },
+ message: "CRON_JOB: subscription_cleanup\n\nsolana_positions for open CAs solana_bitquery_subscriptions for active subs (if AUTH_SCOPE_MISSING, log and stop) match subs to positions solana_bitquery_unsubscribe orphaned subs solana_memory_write tag 'subscription_cleanup'. Summarize before/after counts.",
  enabled: true
  },

- // ── Reporting ─────────────────────────────────────────────────
+ // ── Reporting ───────────────────────────────────────────────────
  {
- id: "daily-report",
+ id: "daily-performance-report",
  schedule: "0 4 * * *",
  agentId: "main",
  sessionTarget: "isolated",
- delivery: { mode: "none" },
- message: "CRON_JOB: daily_performance_report\n\nStep 1: Call solana_journal_summary for last 24h aggregate stats.\n\nStep 2: Call solana_capital_status for current balance.\n\nStep 3: Call solana_positions for open positions.\n\nStep 4: Call solana_trades for all trades in last 24h.\n\nStep 5: Call solana_strategy_state for current version and weights.\n\nStep 6: Call solana_memory_search for 'source_reputation' and 'daily_report' (yesterday's for comparison).\n\nStep 7: Build comprehensive report: executive summary, trade log, best/worst trades, source performance, capital trajectory, strategy status, open positions.\n\nStep 8: Write report via solana_memory_write with tag 'daily_performance'.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Every trade MUST include solscan TX link.\n- Do not execute trades. Do not ask questions.",
- message: "CRON_JOB: daily_performance_report\n\nStep 1: Call solana_journal_summary for last 24h aggregate stats.\n\nStep 2: Call solana_capital_status for current balance.\n\nStep 3: Call solana_positions for open positions.\n\nStep 4: Call solana_trades for all trades in last 24h.\n\nStep 5: Call solana_strategy_state for current version and weights.\n\nStep 6: Call solana_memory_search for 'source_reputation' and 'daily_report' (yesterday's for comparison).\n\nStep 7: Build comprehensive report: executive summary, trade log, best/worst trades, source performance, capital trajectory, strategy status, open positions.\n\nStep 8: Write report via solana_memory_write with tag 'daily_performance'.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Every trade MUST include solscan TX link.\n- Do not execute trades. Do not ask questions.",
97
+ delivery: { mode: "announce", channel: "telegram" },
98
+ message: "CRON_JOB: daily_performance_report\n\nCompile 24h report. solana_journal_summary + solana_capital_status + solana_positions + solana_trades + solana_strategy_state sections: Portfolio Summary, Trading Activity (count/win rate/PnL), Best/Worst Trades, Strategy State, Risk Metrics, Recommendations solana_memory_write tag 'daily_report'. Deliver full report.",
95
99
  enabled: true
96
100
  },
97
101
 
98
- // ── Whale / Smart Money Tracking ──────────────────────────────
102
+ // ── Intelligence Lab ────────────────────────────────────────────
99
103
  {
100
- id: "whale-watch",
101
- schedule: "0 */2 * * *",
104
+ id: "intelligence-lab-eval",
105
+ schedule: "0 16 * * *",
102
106
  agentId: "main",
103
107
  sessionTarget: "isolated",
104
108
  delivery: { mode: "none" },
105
- message: "CRON_JOB: whale_activity_scan\n\nStep 1: Call solana_positions to get open position CAs.\n\nStep 2: For each held token, call solana_token_holders for top 10 distribution.\n\nStep 3: Call solana_memory_search for 'whale_activity' baseline.\n\nStep 4: Compare distributions. Flag: NEW_WHALE, WHALE_EXIT, DEPLOYER_MOVE, CONCENTRATION_SPIKE, FRESH_WALLET_SURGE.\n\nStep 5: Assign priority: HIGH for whale exit/deployer move on held positions, MEDIUM for concentration changes, LOW for watchlist-only.\n\nStep 6: Write findings via solana_memory_write with tag 'whale_activity'.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Do not execute trades. Do not ask questions.",
109
+ message: "CRON_JOB: intelligence_lab_eval\n\nsolana_candidate_get if <20 labeled candidates, log 'insufficient data' and exit. Otherwise: solana_evaluation_report solana_model_registry for challengers solana_replay_eval if challenger exists solana_model_promote if challenger beats champion by >5% F1 solana_memory_write tag 'intelligence_lab'.",
106
110
  enabled: true
107
111
  },
108
112
 
109
- // ── Alpha Scanning ──────────────────────────────────────────────
113
+ // ── Memory Maintenance ──────────────────────────────────────────
110
114
  {
111
- id: "alpha-scan",
112
- schedule: "0 * * * *",
115
+ id: "memory-trim",
116
+ schedule: "0 3 * * *",
113
117
  agentId: "main",
114
118
  sessionTarget: "isolated",
115
119
  delivery: { mode: "none" },
116
- message: "CRON_JOB: alpha_scan\n\nStep 1: Call solana_scan_launches to find new token launches from the last hour. The response includes volume data per token — use THIS volume as the primary filter (not solana_token_snapshot volumeUsd24h, which often returns 0 for fresh tokens).\n\nStep 2: Filter candidates from the scan_launches results:\n- 24h volume above 50000 USD (use the volume field from scan_launches)\n- Market cap above 10000 USD (use mcap from scan_launches if available)\nTokens must pass BOTH filters to qualify. Tokens that fail either filter are eliminated. Log the top 3 rejected tokens by volume for reference.\n\nStep 3: For each token that passes the volume/mcap filter, call solana_token_holders to check holder distribution. Skip if the top single holder owns more than 30 percent (concentration risk). Skip if total top-10 concentration exceeds 60 percent.\n\nStep 4: For each token that passes holder checks, call solana_token_risk to check for mint authority or freeze authority. Hard skip if either is present (rug risk).\n\nStep 5: Website legitimacy check — for each token that passes risk checks, call solana_bitquery_catalog with templatePath 'pumpFunMetadata.tokenMetadataByAddress' and variables { token: CA } to get on-chain metadata. If the metadata contains a 'website' field, call web_fetch_url with the website URL. Analyze the response: check if the site has real content (not just a generic template), check if socialLinks.twitter matches the on-chain metadata twitter field. Include findings in the alpha submission thesis. If no website exists, note 'no website' — this is neutral, not a skip. Cache rule: check solana_memory_search for 'website_analyzed' with the same URL before fetching. If analyzed in the last 48 hours, reuse the cached result. 
After analysis, write findings via solana_memory_write with tag 'website_analyzed'.\n\nStep 6: Scrub any external text via solana_scrub_untrusted_text before using in analysis.\n\nStep 7: For each token that passes ALL checks, call solana_alpha_submit to add it to the alpha buffer for the next heartbeat evaluation. Include the thesis: volume, mcap, holder concentration, risk flags, narrative cluster, website legitimacy findings.\n\nStep 8: Write scan report to memory using solana_memory_write with tag 'alpha_scan'. Include: total launches scanned, unique tokens, tokens passed each filter stage (volume → holders → risk → website → submitted), top 3 rejected tokens with reason.\n\nFORMATTING RULES:\n- Every token reference MUST use SYMBOL (full_CA) format.\n- Include solscan token link: https://solscan.io/token/{CA} for each qualifying token.\n- Do not execute trades directly — only submit to alpha buffer.\n- Do not ask questions.",
120
+ message: "CRON_JOB: memory_trim\n\nsolana_memory_trim dryRun:true first review solana_memory_trim retentionDays:2 solana_memory_write tag 'memory_trim' with summary.",
117
121
  enabled: true
118
122
  },
119
123
 
120
- // ── Intelligence Lab (V1-Upgraded NEW) ──────────────────────────
124
+ // ── Balance Watchdog ────────────────────────────────────────────
121
125
  {
122
- id: "intelligence-lab-eval",
123
- schedule: "0 */12 * * *",
124
- agentId: "main",
125
- sessionTarget: "isolated",
126
- delivery: { mode: "none" },
127
- message: "CRON_JOB: intelligence_lab_eval\n\nStep 1: Check candidate dataset size via solana_candidate_get. If fewer than 20 labeled candidates, log 'insufficient data for evaluation' and exit.\n\nStep 2: Run evaluation report via solana_evaluation_report — get confusion matrix, accuracy, precision, recall, F1 for current champion model.\n\nStep 3: Check model registry via solana_model_registry for any challenger models.\n\nStep 4: If challenger exists, run replay via solana_replay_run comparing champion vs challenger on recent candidates.\n\nStep 5: Generate replay report via solana_replay_report.\n\nStep 6: If challenger outperforms champion (higher F1 AND accuracy), promote via solana_model_promote.\n\nStep 7: Log evaluation results via solana_memory_write with tag 'intelligence_lab_eval'. Include: model accuracy, F1, promotion decision, dataset size.\n\nDo not execute trades. Do not ask questions.",
128
- enabled: true
129
- },
130
- {
131
- id: "source-trust-refresh",
132
- schedule: "0 */6 * * *",
126
+ id: "balance-watchdog",
127
+ schedule: "0 */2 * * *",
133
128
  agentId: "main",
134
129
  sessionTarget: "isolated",
135
- delivery: { mode: "none" },
136
- message: "CRON_JOB: source_trust_refresh\n\nStep 1: Call solana_source_trust_refresh to recalculate trust scores for all tracked alpha sources based on recent trade outcomes.\n\nStep 2: Call solana_deployer_trust_refresh to recalculate deployer trust scores based on recent token performance.\n\nStep 3: Call solana_source_trust_get to read updated scores. Flag any sources that dropped below 30 trust score.\n\nStep 4: Call solana_deployer_trust_get to read deployer scores. Flag any deployers with 3+ failed tokens.\n\nStep 5: Log trust score updates via solana_memory_write with tag 'trust_refresh'. Include: sources updated, deployers updated, flagged sources/deployers.\n\nDo not execute trades. Do not ask questions.",
130
+ delivery: { mode: "announce", channel: "telegram" },
131
+ message: "Balance watchdog. 1) solana_capital_status 2) solana_positions 3) solana_context_snapshot_read 4) Compare real vs believed. If mismatch: solana_context_snapshot_write with corrected state, summarize changes. If match: reply WATCHDOG_OK.",
137
132
  enabled: true
138
133
  }
139
134
  ]
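The schedules above set the daily session budget (the skill docs cite "~39 sessions/day"). As a rough sanity check, here is a minimal sketch (not part of the package) that counts runs/day for the simple `M H * * *` and `M */N * * *` cron shapes this config actually uses:

```javascript
// Count daily runs for the simple cron shapes used in this config:
// a fixed hour ("0 6 * * *"), an hour list, or a step ("15 */8 * * *").
// Not a general cron parser — just the patterns appearing above.
function runsPerDay(schedule) {
  const hourField = schedule.split(" ")[1];
  if (hourField === "*") return 24;              // every hour
  const step = hourField.match(/^\*\/(\d+)$/);   // "*/N" step syntax
  if (step) return Math.ceil(24 / Number(step[1]));
  return hourField.split(",").length;            // fixed hour(s)
}
```

For example, `"0 */2 * * *"` (balance-watchdog) yields 12 runs/day, while `"0 4 * * *"` (daily report) yields 1.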
@@ -0,0 +1,303 @@
1
+ // src/bitquery-ws.ts
2
+ var RECONNECT_DELAYS = [1e3, 2e3, 4e3, 8e3, 16e3, 3e4];
3
+ var BitqueryStreamManager = class {
4
+ config;
5
+ ws = null;
6
+ authenticated = false;
7
+ connecting = false;
8
+ reconnectAttempt = 0;
9
+ reconnectTimer = null;
10
+ intentionalClose = false;
11
+ currentAccessToken = "";
12
+ // FIFO queue — server doesn't echo a requestId, so we match by arrival order
13
+ pendingSubscribeQueue = [];
14
+ // keyed by subscriptionId
15
+ pendingUnsubscribes = /* @__PURE__ */ new Map();
16
+ // tracks active subscriptions for auto-resubscribe on reconnect
17
+ activeSubscriptions = /* @__PURE__ */ new Map();
18
+ constructor(config) {
19
+ this.config = config;
20
+ }
21
+ async subscribe(params) {
22
+ await this.ensureConnected();
23
+ return new Promise((resolve, reject) => {
24
+ const timeout = setTimeout(() => {
25
+ const idx = this.pendingSubscribeQueue.findIndex((p) => p.resolve === resolve);
26
+ if (idx !== -1) this.pendingSubscribeQueue.splice(idx, 1);
27
+ reject(new Error("bitquery_subscribe timed out after 15 seconds"));
28
+ }, 15e3);
29
+ this.pendingSubscribeQueue.push({
30
+ resolve,
31
+ reject,
32
+ timeout,
33
+ templateKey: params.templateKey,
34
+ variables: params.variables || {},
35
+ agentId: params.agentId,
36
+ subscriberType: params.subscriberType
37
+ });
38
+ const msg = {
39
+ type: "bitquery_subscribe",
40
+ templateKey: params.templateKey,
41
+ variables: params.variables || {},
42
+ walletId: this.config.walletId
43
+ };
44
+ if (params.agentId) {
45
+ msg.agentId = params.agentId;
46
+ msg.subscriberType = params.subscriberType || "agent";
47
+ } else if (params.subscriberType) {
48
+ msg.subscriberType = params.subscriberType;
49
+ }
50
+ try {
51
+ this.ws.send(JSON.stringify(msg));
52
+ } catch (err) {
53
+ clearTimeout(timeout);
54
+ const idx = this.pendingSubscribeQueue.findIndex((p) => p.resolve === resolve);
55
+ if (idx !== -1) this.pendingSubscribeQueue.splice(idx, 1);
56
+ reject(new Error(`Failed to send subscribe: ${err}`));
57
+ }
58
+ });
59
+ }
60
+ async unsubscribe(subscriptionId) {
61
+ this.activeSubscriptions.delete(subscriptionId);
62
+ if (!this.ws || this.ws.readyState !== 1) {
63
+ return { unsubscribed: true };
64
+ }
65
+ return new Promise((resolve) => {
66
+ const timeout = setTimeout(() => {
67
+ this.pendingUnsubscribes.delete(subscriptionId);
68
+ resolve({ unsubscribed: true });
69
+ }, 1e4);
70
+ this.pendingUnsubscribes.set(subscriptionId, { resolve, timeout });
71
+ try {
72
+ this.ws.send(JSON.stringify({ type: "bitquery_unsubscribe", subscriptionId }));
73
+ } catch {
74
+ clearTimeout(timeout);
75
+ this.pendingUnsubscribes.delete(subscriptionId);
76
+ resolve({ unsubscribed: true });
77
+ }
78
+ });
79
+ }
80
+ /** Close the WS if no active subscriptions remain. */
81
+ disconnectIfIdle() {
82
+ if (this.activeSubscriptions.size === 0) {
83
+ this.close();
84
+ }
85
+ }
86
+ close() {
87
+ this.intentionalClose = true;
88
+ if (this.reconnectTimer) {
89
+ clearTimeout(this.reconnectTimer);
90
+ this.reconnectTimer = null;
91
+ }
92
+ if (this.ws) {
93
+ try {
94
+ this.ws.close();
95
+ } catch {
96
+ }
97
+ this.ws = null;
98
+ }
99
+ this.authenticated = false;
100
+ }
101
+ async ensureConnected() {
102
+ if (this.ws && this.ws.readyState === 1 && this.authenticated) return;
103
+ if (this.connecting) {
104
+ await new Promise((resolve, reject) => {
105
+ const timeout = setTimeout(() => reject(new Error("Timed out waiting for connection")), 2e4);
106
+ const check = setInterval(() => {
107
+ if (this.authenticated) {
108
+ clearTimeout(timeout);
109
+ clearInterval(check);
110
+ resolve();
111
+ }
112
+ }, 100);
113
+ });
114
+ return;
115
+ }
116
+ this.intentionalClose = false;
117
+ this.connecting = true;
118
+ try {
119
+ await this.connect();
120
+ await new Promise((resolve, reject) => {
121
+ const timeout = setTimeout(() => reject(new Error("Authentication timed out")), 15e3);
122
+ const check = setInterval(() => {
123
+ if (this.authenticated) {
124
+ clearTimeout(timeout);
125
+ clearInterval(check);
126
+ resolve();
127
+ }
128
+ }, 100);
129
+ });
130
+ } finally {
131
+ this.connecting = false;
132
+ }
133
+ }
134
+ async connect() {
135
+ const WebSocket = (await import("ws")).default;
136
+ this.currentAccessToken = await this.config.getAccessToken();
137
+ const url = `${this.config.wsUrl}?accessToken=${encodeURIComponent(this.currentAccessToken)}`;
138
+ this.authenticated = false;
139
+ this.log("info", `Connecting to ${this.config.wsUrl}`);
140
+ return new Promise((resolve, reject) => {
141
+ let ws;
142
+ try {
143
+ ws = new WebSocket(url);
144
+ this.ws = ws;
145
+ } catch (err) {
146
+ reject(err);
147
+ return;
148
+ }
149
+ const connectTimeout = setTimeout(() => {
150
+ if (ws.readyState !== 1) {
151
+ ws.close();
152
+ reject(new Error("WS connection timed out"));
153
+ }
154
+ }, 1e4);
155
+ ws.on("open", () => {
156
+ clearTimeout(connectTimeout);
157
+ this.reconnectAttempt = 0;
158
+ this.log("info", "Connected");
159
+ resolve();
160
+ });
161
+ ws.on("message", (data) => {
162
+ try {
163
+ const msg = JSON.parse(data.toString());
164
+ this.handleMessage(msg);
165
+ } catch {
166
+ this.log("warn", "Failed to parse message");
167
+ }
168
+ });
169
+ ws.on("close", () => {
170
+ clearTimeout(connectTimeout);
171
+ this.authenticated = false;
172
+ this.log("info", "WS closed");
173
+ this.drainPendingOnClose();
174
+ if (!this.intentionalClose && this.activeSubscriptions.size > 0) {
175
+ this.scheduleReconnect();
176
+ }
177
+ });
178
+ ws.on("error", (err) => {
179
+ clearTimeout(connectTimeout);
180
+ this.log("error", `WS error: ${err.message}`);
181
+ if (ws.readyState !== 1) {
182
+ reject(err);
183
+ }
184
+ });
185
+ });
186
+ }
187
+ handleMessage(msg) {
188
+ switch (msg.type) {
189
+ case "connected":
190
+ this.log("info", "Handshake received, authenticating...");
191
+ if (this.ws && this.ws.readyState === 1) {
192
+ this.ws.send(JSON.stringify({ type: "auth", accessToken: this.currentAccessToken }));
193
+ }
194
+ break;
195
+ case "authenticated":
196
+ this.authenticated = true;
197
+ this.log("info", "Authenticated");
198
+ void this.resubscribeAll();
199
+ break;
200
+ case "bitquery_subscribed": {
201
+ const subscriptionId = msg.subscriptionId;
202
+ const streamKey = msg.streamKey;
203
+ const pending = this.pendingSubscribeQueue.shift();
204
+ if (pending) {
205
+ clearTimeout(pending.timeout);
206
+ this.activeSubscriptions.set(subscriptionId, {
207
+ subscriptionId,
208
+ templateKey: pending.templateKey,
209
+ variables: pending.variables,
210
+ agentId: pending.agentId,
211
+ subscriberType: pending.subscriberType
212
+ });
213
+ pending.resolve({ subscriptionId, streamKey });
214
+ }
215
+ break;
216
+ }
217
+ case "bitquery_unsubscribed": {
218
+ const subscriptionId = msg.subscriptionId;
219
+ this.activeSubscriptions.delete(subscriptionId);
220
+ const pending = this.pendingUnsubscribes.get(subscriptionId);
221
+ if (pending) {
222
+ clearTimeout(pending.timeout);
223
+ this.pendingUnsubscribes.delete(subscriptionId);
224
+ pending.resolve({ unsubscribed: true });
225
+ }
226
+ this.disconnectIfIdle();
227
+ break;
228
+ }
229
+ case "error": {
230
+ const code = msg.code;
231
+ this.log("error", `${code}: ${msg.message || ""}`);
232
+ if ([
233
+ "WS_SUBSCRIBE_VALIDATION_ERROR",
234
+ "BITQUERY_SUBSCRIPTION_TEMPLATE_NOT_FOUND",
235
+ "WS_SUBSCRIPTION_LIMIT_REACHED",
236
+ "WS_BRIDGE_UNAVAILABLE"
237
+ ].includes(code)) {
238
+ const pending = this.pendingSubscribeQueue.shift();
239
+ if (pending) {
240
+ clearTimeout(pending.timeout);
241
+ pending.reject(new Error(`${code}: ${msg.message || ""}`));
242
+ }
243
+ }
244
+ if (["WS_AUTH_REQUIRED", "WS_AUTH_INVALID", "ACCESS_TOKEN_EXPIRED"].includes(code)) {
245
+ this.close();
246
+ }
247
+ break;
248
+ }
249
+ }
250
+ }
251
+ drainPendingOnClose() {
252
+ for (const pending of this.pendingSubscribeQueue) {
253
+ clearTimeout(pending.timeout);
254
+ pending.reject(new Error("WebSocket closed before subscription was confirmed"));
255
+ }
256
+ this.pendingSubscribeQueue = [];
257
+ for (const [, pending] of this.pendingUnsubscribes) {
258
+ clearTimeout(pending.timeout);
259
+ pending.resolve({ unsubscribed: true });
260
+ }
261
+ this.pendingUnsubscribes.clear();
262
+ }
263
+ async resubscribeAll() {
264
+ if (this.activeSubscriptions.size === 0) return;
265
+ const subs = [...this.activeSubscriptions.values()];
266
+ this.activeSubscriptions.clear();
267
+ this.log("info", `Re-subscribing ${subs.length} subscription(s) after reconnect`);
268
+ for (const sub of subs) {
269
+ try {
270
+ const result = await this.subscribe({
271
+ templateKey: sub.templateKey,
272
+ variables: sub.variables,
273
+ agentId: sub.agentId,
274
+ subscriberType: sub.subscriberType
275
+ });
276
+ this.log("info", `Re-subscribed ${sub.templateKey} \u2192 new id: ${result.subscriptionId}`);
277
+ } catch (err) {
278
+ this.log("error", `Re-subscribe failed for ${sub.templateKey}: ${err}`);
279
+ }
280
+ }
281
+ }
282
+ scheduleReconnect() {
283
+ if (this.intentionalClose) return;
284
+ const delay = RECONNECT_DELAYS[Math.min(this.reconnectAttempt, RECONNECT_DELAYS.length - 1)];
285
+ this.reconnectAttempt++;
286
+ this.log("info", `Reconnecting in ${delay}ms (attempt ${this.reconnectAttempt})`);
287
+ this.reconnectTimer = setTimeout(async () => {
288
+ try {
289
+ await this.connect();
290
+ } catch (err) {
291
+ this.log("error", `Reconnect failed: ${err instanceof Error ? err.message : String(err)}`);
292
+ this.scheduleReconnect();
293
+ }
294
+ }, delay);
295
+ }
296
+ log(level, msg) {
297
+ this.config.logger?.[level](`[bitquery-ws] ${msg}`);
298
+ }
299
+ };
300
+
301
+ export {
302
+ BitqueryStreamManager
303
+ };
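The reconnect policy in `scheduleReconnect` above reduces to a small pure function: index into the delay table by attempt number, and reuse the final 30s entry once attempts run past the end. Isolated for clarity:

```javascript
// Capped exponential backoff as used by BitqueryStreamManager:
// 1s, 2s, 4s, 8s, 16s, then a flat 30s for all later attempts.
const RECONNECT_DELAYS = [1000, 2000, 4000, 8000, 16000, 30000];

const delayFor = (attempt) =>
  RECONNECT_DELAYS[Math.min(attempt, RECONNECT_DELAYS.length - 1)];
```

Clamping the index (rather than growing the delay unboundedly) keeps worst-case reconnect latency predictable while still backing off quickly on transient failures.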
package/dist/index.js CHANGED
@@ -16,18 +16,15 @@ import {
16
16
  import {
17
17
  AlphaStreamManager
18
18
  } from "./chunk-3YPZOXWE.js";
19
+ import {
20
+ BitqueryStreamManager
21
+ } from "./chunk-VR5WP5S4.js";
19
22
  import {
20
23
  orchestratorRequest
21
24
  } from "./chunk-NDPVVAV7.js";
22
25
  import {
23
26
  IntelligenceLab
24
27
  } from "./chunk-FBS5FGW2.js";
25
- import {
26
- scrubUntrustedText
27
- } from "./chunk-AI6MTHUN.js";
28
- import {
29
- readRecoverySecretFromDisk
30
- } from "./chunk-SBYHSJLU.js";
31
28
  import {
32
29
  generateBulletinDigest,
33
30
  generateDecisionDigest,
@@ -35,6 +32,12 @@ import {
35
32
  resolveMemoryDir,
36
33
  resolveWorkspaceRoot
37
34
  } from "./chunk-JO3BXAUQ.js";
35
+ import {
36
+ scrubUntrustedText
37
+ } from "./chunk-AI6MTHUN.js";
38
+ import {
39
+ readRecoverySecretFromDisk
40
+ } from "./chunk-SBYHSJLU.js";
38
41
 
39
42
  // index.ts
40
43
  import { Type } from "@sinclair/typebox";
@@ -1085,7 +1088,7 @@ var solanaTraderPlugin = {
1085
1088
  const logsDir = path.join(dataDir, "logs");
1086
1089
  const sharedLogsDir = path.join(logsDir, "shared");
1087
1090
  const memoryDir = resolveMemoryDir(workspaceRoot);
1088
- const memoryMdPath = path.join(workspaceRoot, "STATE.md");
1091
+ const memoryMdPath = path.join(workspaceRoot, "MEMORY.md");
1089
1092
  const intelligenceLab = new IntelligenceLab(workspaceRoot);
1090
1093
  const ensureDir = (dirPath) => {
1091
1094
  if (!fs.existsSync(dirPath)) fs.mkdirSync(dirPath, { recursive: true });
@@ -2205,6 +2208,16 @@ ${notes}
2205
2208
  })
2206
2209
  )
2207
2210
  });
2211
+ const bitqueryStreamManager = new BitqueryStreamManager({
2212
+ wsUrl: orchestratorUrl.replace(/^http/, "ws").replace(/\/$/, "") + "/ws",
2213
+ walletId,
2214
+ getAccessToken: () => sessionManager.getAccessToken(),
2215
+ logger: {
2216
+ info: (msg) => api.logger.info(`[solana-trader] ${msg}`),
2217
+ warn: (msg) => api.logger.warn(`[solana-trader] ${msg}`),
2218
+ error: (msg) => api.logger.error(`[solana-trader] ${msg}`)
2219
+ }
2220
+ });
2208
2221
  api.registerTool({
2209
2222
  name: "solana_bitquery_subscribe",
2210
2223
  description: "Subscribe to a managed real-time Bitquery data stream. The orchestrator manages the WebSocket connection and broadcasts events. Available templates: realtimeTokenPricesSolana, ohlc1s, dexPoolLiquidityChanges, pumpFunTokenCreation, pumpFunTrades, pumpSwapTrades, raydiumNewPools. Returns a subscriptionId for tracking. Pass agentId to enable event-to-agent forwarding \u2014 orchestrator delivers each event to your Gateway via /v1/responses in addition to normal WS delivery. Subscriptions expire after 24h and emit subscription_expiring/subscription_expired events. See websocket-streaming.md in the solana-trader skill for the full message contract and usage patterns.",
@@ -2215,18 +2228,13 @@ ${notes}
2215
2228
  subscriberType: Type.Optional(Type.Union([Type.Literal("agent"), Type.Literal("client")], { description: "Subscriber type. Inferred as 'agent' when agentId is present. Defaults to 'client'." }))
2216
2229
  }),
2217
2230
  execute: wrapExecute("solana_bitquery_subscribe", async (_id, params) => {
2218
- const body = {
2219
- templateKey: params.templateKey,
2220
- variables: params.variables || {}
2221
- };
2222
2231
  const effectiveAgentId = params.agentId || config.agentId;
2223
- if (effectiveAgentId) {
2224
- body.agentId = effectiveAgentId;
2225
- body.subscriberType = params.subscriberType || "agent";
2226
- } else if (params.subscriberType) {
2227
- body.subscriberType = params.subscriberType;
2228
- }
2229
- return post("/api/bitquery/subscribe", body);
2232
+ return bitqueryStreamManager.subscribe({
2233
+ templateKey: params.templateKey,
2234
+ variables: params.variables || {},
2235
+ agentId: effectiveAgentId,
2236
+ subscriberType: params.subscriberType || (effectiveAgentId ? "agent" : void 0)
2237
+ });
2230
2238
  })
2231
2239
  });
2232
2240
  api.registerTool({
@@ -2237,9 +2245,7 @@ ${notes}
2237
2245
  }),
2238
2246
  execute: wrapExecute(
2239
2247
  "solana_bitquery_unsubscribe",
2240
- async (_id, params) => post("/api/bitquery/unsubscribe", {
2241
- subscriptionId: params.subscriptionId
2242
- })
2248
+ async (_id, params) => bitqueryStreamManager.unsubscribe(params.subscriptionId)
2243
2249
  )
2244
2250
  });
2245
2251
  api.registerTool({
@@ -0,0 +1,6 @@
1
+ import {
2
+ BitqueryStreamManager
3
+ } from "../chunk-VR5WP5S4.js";
4
+ export {
5
+ BitqueryStreamManager
6
+ };
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "solana-traderclaw",
3
- "version": "1.0.90",
3
+ "version": "1.0.92",
4
4
  "description": "TraderClaw V1-Upgraded — Solana trading for OpenClaw with intelligence lab, tool envelopes, prompt scrubbing, read-only X social intel, and split skill docs",
5
5
  "type": "module",
6
6
  "main": "./dist/index.js",
@@ -10,7 +10,9 @@ Read MEMORY.md (auto-loaded). If empty or missing wallet/tier/strategy → run M
10
10
 
11
11
  1. **MEMORY.md** (already in context): tier, wallet, mode, strategy version, watchlist, regime canary
12
12
  2. **Daily log** (`memory/YYYY-MM-DD.md`, auto-loaded): what already happened today — don't repeat work
13
- 3. **Server-side memory** call `solana_memory_search` for: `"source_reputation"`, `"strategy_drift_warning"`, `"pre_trade_rationale"`, `"meta_rotation"`
13
+ 3. **Context engine** (automatic): `[TraderClaw Trading Context]` is injected into your system prompt at session start with current state, last 3 decisions, and entitlement limits. You do not need to call anything — just read it when present.
14
+ 4. **Server-side memory** — call `solana_memory_search` for: `"source_reputation"`, `"strategy_drift_warning"`, `"pre_trade_rationale"`, `"meta_rotation"`
15
+ 5. **QMD recall** — before analyzing any candidate, call `memory_search` with the token symbol or contract address. If you've seen this token before, use prior analysis to: skip repeat work, apply re-entry penalties, catch repeat rug patterns, and reference prior confidence scores.
14
16
 
15
17
  ## User Preferences Override (apply before any other step)
16
18
 
@@ -430,5 +432,5 @@ NEXT CYCLE: [1 sentence — what you're watching for]
430
432
  | API endpoint reference | refs/api-reference.md |
431
433
  | Wallet proof vs signup | SKILL.md § Wallet proof vs signup |
432
434
  | Strategy evolution details | refs/strategy-evolution.md |
433
- | Cron job definitions | refs/cron-jobs.md |
435
+ | Cron job definitions | refs/cron-jobs.md (10 consolidated jobs, ~39 sessions/day) |
434
436
  | Position management details | refs/position-management.md |
@@ -19,131 +19,157 @@ At start of every cron job, check whether sufficient new data exists since last
19
19
 
20
20
  ---
21
21
 
22
- ## Job: `strategy_evolution`
22
+ ## Job: `alpha_scan`
23
23
 
24
- **Schedule:** Every 4 hours (`0 */4 * * *`)
24
+ **Schedule:** Every 3 hours (`0 */3 * * *`) — 8 runs/day
25
25
 
26
- **Purpose:** Full self-improvement cycle — recurring pattern detection, drift investigation, ADL/VFM-validated weight adjustments, named pattern recognition, discovery filter evolution.
26
+ **Purpose:** Scan new token launches, filter candidates, score quality, log alpha signals.
27
27
 
28
- **Full details:** refs/strategy-evolution.md
28
+ **Tools:** `solana_scan_launches`, `solana_token_snapshot`, `solana_token_holders`, `solana_token_risk`, `solana_alpha_log`, `solana_memory_write`
29
29
 
30
- **Tools:** `solana_journal_summary`, `solana_strategy_state`, `solana_memory_search`, `solana_trades`, `solana_strategy_update`, `solana_memory_write`
30
+ **Workflow:** Scan launches → filter (vol>30K, mcap>10K, liq>5K) → snapshot survivors → quality filter (top10 <50%, deployer <3 abandoned, has social) → score 0-100 → log 65+ via alpha_log.
31
31
 
32
- **Report to user:** What weights changed (if any), key patterns detected, strategy trending direction.
32
+ **Configuration:**
33
+ - Model: Sonnet (judgment — scoring candidates, filtering quality signals)
34
+ - Thinking: off
35
+ - lightContext: on
36
+ - Delivery: announce/last/bestEffort
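The hard gate in the workflow above can be sketched as a pure predicate. The field names below are illustrative placeholders, not the actual `solana_scan_launches` response schema:

```javascript
// alpha_scan hard filter: 24h volume > $30K, mcap > $10K, liquidity > $5K.
// Field names are assumptions for illustration only.
const passesAlphaFilter = (t) =>
  t.volume24hUsd > 30_000 &&
  t.mcapUsd > 10_000 &&
  t.liquidityUsd > 5_000;
```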
33
37
 
34
38
  ---
35
39
 
36
- ## Job: `daily_performance_report`
40
+ ## Job: `portfolio_health`
37
41
 
38
- **Schedule:** Daily at 04:00 UTC (`0 4 * * *`)
42
+ **Schedule:** Every 4 hours (`0 */4 * * *`) — 6 runs/day
39
43
 
40
- **Purpose:** Comprehensive daily performance summary.
44
+ **Purpose:** Combined dead-money sweep + whale activity scan + portfolio risk audit. Replaces the old separate `dead_money_sweep`, `whale_watch`, and `risk_audit` jobs.
41
45
 
42
- **Gating:** Only if trading activity in past 24 hours. Check via `solana_journal_summary`.
46
+ **Tools:** `solana_capital_status`, `solana_positions`, `solana_token_snapshot`, `solana_token_holders`, `solana_trade_execute` (defensive exits), `solana_trade_review`, `solana_memory_write`, `solana_killswitch_status`
43
47
 
44
- **Context retrieval:**
45
- - `solana_memory_search` with `"daily_report"` — yesterday's report for comparison
46
- - `solana_memory_search` with `"strategy_evolution"` — most recent evolution cycle
48
+ **Workflow:** Capital + positions → per-position snapshot → dead money exit (loss>40% or 90min+down+low vol) → whale flags (>5% supply moves) → risk checks (concentration/drawdown/exposure) → sell if CRITICAL → write tag 'portfolio_health'.
47
49
 
48
- **Tools:** `solana_journal_summary`, `solana_positions`, `solana_capital_status`, `solana_trades`, `solana_memory_search`, `solana_memory_write`
50
+ **Configuration:**
51
+ - Model: Sonnet (judgment — dead money exits, whale flags, risk assessment)
52
+ - Thinking: off
53
+ - lightContext: on
54
+ - Delivery: announce/last/bestEffort
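The dead-money branch of the workflow above ("loss>40% or 90min+down+low vol") can be expressed as a small predicate. This is a sketch under assumed field names, not the actual position schema:

```javascript
// portfolio_health dead-money check: exit when loss exceeds 40%, OR the
// position is 90+ minutes old, still down 5%+, and 24h volume < $5K.
// Field names (pnlPct, heldMinutes, volume24hUsd) are illustrative.
function isDeadMoney(p) {
  if (p.pnlPct < -40) return true;
  return p.heldMinutes >= 90 && p.pnlPct < -5 && p.volume24hUsd < 5_000;
}
```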
49
55
 
50
- **Outputs:** Memory entry with: daily PnL, win/loss count, win rate, best/worst trades, avg hold time, capital utilization, regime summary, lessons. Tag: `daily_report`.
56
+ ---
57
+
58
+ ## Job: `trust_refresh`
59
+
60
+ **Schedule:** Every 8 hours (`0 */8 * * *`) — 3 runs/day
61
+
62
+ **Purpose:** Combined source reputation recalculation + deployer trust refresh. Replaces the old separate `source_reputation_recalc` and `source_trust_refresh` jobs.
63
+
64
+ **Tools:** `solana_source_trust_refresh`, `solana_deployer_trust_refresh`, `solana_alpha_sources`, `solana_trades`, `solana_source_trust_get`, `solana_deployer_trust_get`, `solana_memory_write`
65
+
66
+ **Workflow:** Run both refresh functions → read source/deployer scores → flag any below 30 → write tag 'trust_refresh'.
67
+
68
+ **Configuration:**
69
+ - Model: Haiku (mechanical — run refresh functions, read/flag scores)
70
+ - Thinking: off
71
+ - lightContext: on
72
+ - Delivery: none
51
73
 
52
74
  ---
53
75
 
54
- ## Job: `source_reputation_recalc`
76
+ ## Job: `meta_rotation_analysis`
55
77
 
56
- **Schedule:** Every 3 hours (`0 */3 * * *`)
78
+ **Schedule:** Every 8 hours, offset by 30 min (`30 */8 * * *`) — 3 runs/day
57
79
 
58
- **Purpose:** Analyze which alpha sources led to wins vs losses. Maintain per-source reputation scores.
80
+ **Purpose:** Analyze which narrative metas are hot, cooling, or dead.
59
81
 
60
- **Gating:** New trade outcomes on alpha-sourced positions since last recalc.
82
+ **Tools:** `x_search_tweets`, `solana_scan_launches`, `solana_memory_search`, `solana_memory_write`
61
83
 
62
- **Workflow:**
63
- 1. Retrieve last recalc state from memory
64
- 2. Query recent alpha-sourced trade outcomes
65
- 3. Calculate per-source metrics (win rate, avg PnL, conversion rate)
66
- 4. Compute reputation score (0-100)
67
- 5. Historical analysis via `solana_alpha_history`
68
- 6. Store updated scores with tag `source_reputation`
84
+ **Workflow:** Search X/Twitter trending → scan launches → categorize by narrative cluster → per-cluster metrics → compare vs prior rotation → declare hot/fading → write tag 'meta_rotation'.
69
85
 
70
- **Tools:** `solana_memory_search`, `solana_trades`, `solana_alpha_history`, `solana_alpha_sources`, `solana_memory_write`
86
+ **Configuration:**
87
+ - Model: Sonnet (judgment — categorize narratives, detect rotation)
88
+ - Thinking: off
89
+ - lightContext: on
90
+ - Delivery: announce/last/bestEffort
71
91
 
72
92
  ---
73
93
 
74
- ## Job: `dead_money_sweep`
94
+ ## Job: `strategy_evolution`
75
95
 
76
- **Schedule:** Every 2 hours (`0 */2 * * *`)
96
+ **Schedule:** Daily at 06:00 UTC (`0 6 * * *`) — 1 run/day
77
97
 
78
- **Purpose:** Find and exit positions that are dead money.
98
+ **Purpose:** Full self-improvement cycle: recurring pattern detection, drift investigation, ADL/VFM-validated weight adjustments, named pattern recognition, and discovery filter evolution.
79
99
 
80
- **Criteria (ALL must be true):**
81
- 1. Loss > 40%
82
- 2. Held 90+ minutes AND still down 5%+
83
- 3. 24h volume < $5,000
84
- 4. Price flat (±5%) for 4+ hours
100
+ **Full details:** refs/strategy-evolution.md
85
101
 
86
- **Tools:** `solana_positions`, `solana_token_snapshot`, `solana_trade_execute` (for exits), `solana_trade_review`, `solana_memory_write`, `solana_sweep_dead_tokens` (sell losing positions below threshold — use `dryRun: true` first to preview, then without to execute)
102
+ **Tools:** `solana_journal_summary`, `solana_strategy_state`, `solana_memory_search`, `solana_trades`, `solana_strategy_update`, `solana_memory_write`
87
103
 
88
- **PnL check:** For Solana positions, use `unrealizedPnl` / `realizedPnl` from `solana_positions`. Those fields are SOL-native on that endpoint.
104
+ **Workflow:** Journal summary → gate on 10+ closed trades → bucket by confidence tier → read current weights → analyze tier performance → conservative adjustments (max 10% per weight per cycle) → write tag 'strategy_evolution'.
89
105
 
90
- **Report to user:** List positions exited as dead money with hold duration and loss amount. Include any rent SOL recovered from sweep.
106
+ **Configuration:**
107
+ - Model: Sonnet (deep reasoning — weight adjustments, pattern detection)
108
+ - Thinking: **on** (multi-step reasoning chain benefits from extended thinking)
109
+ - lightContext: **off** (needs full strategy state, historical patterns, workspace context)
110
+ - Delivery: announce/last/bestEffort
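The "max 10% per weight per cycle" guardrail can be sketched as a simple clamp. A hedged illustration, not the package's actual `solana_strategy_update` logic; whether the cap is relative to the current weight (assumed here) or absolute is not specified in this doc.

```python
def clamp_weight_update(current: float, proposed: float, max_delta_pct: float = 0.10) -> float:
    """Limit a single weight change to max_delta_pct of its current value per cycle."""
    max_delta = abs(current) * max_delta_pct
    delta = proposed - current
    if delta > max_delta:
        return current + max_delta   # cap upward move
    if delta < -max_delta:
        return current - max_delta   # cap downward move
    return proposed                  # within bounds, apply as-is

# A proposed jump from 0.5 to 0.8 is clamped to 0.55 in one cycle
clamped = clamp_weight_update(0.5, 0.8)
```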
91
111
 
92
112
  ---
93
113
 
94
114
  ## Job: `subscription_cleanup`
95
115
 
96
- **Schedule:** Every hour (`0 * * * *`)
116
+ **Schedule:** Every 8 hours, offset by 15 min (`15 */8 * * *`) — 3 runs/day
97
117
 
98
- **Purpose:** Manage Bitquery subscription lifecycle.
118
+ **Purpose:** Manage Bitquery subscription lifecycle — remove orphaned subscriptions, reopen expiring ones.
99
119
 
100
- **Workflow:**
101
- 1. List active subscriptions via `solana_bitquery_subscriptions`
102
- 2. Reopen subscriptions nearing 24h expiry via `solana_bitquery_subscription_reopen`
103
- 3. Unsubscribe from tokens no longer held or monitored
104
- 4. Verify critical subscriptions (discovery streams) are healthy
120
+ **Tools:** `solana_positions`, `solana_bitquery_subscriptions`, `solana_bitquery_unsubscribe`, `solana_bitquery_subscription_reopen`, `solana_memory_write`
121
+
122
+ **Workflow:** List open position CAs → list active subs (if AUTH_SCOPE_MISSING, log and stop) → match subs to positions → unsubscribe orphans → write tag 'subscription_cleanup'.
105
123
 
106
- **Tools:** `solana_bitquery_subscriptions`, `solana_bitquery_subscription_reopen`, `solana_bitquery_unsubscribe`, `solana_positions`
124
+ **Configuration:**
125
+ - Model: Haiku (mechanical — match subs to positions, unsubscribe orphans)
126
+ - Thinking: off
127
+ - lightContext: on
128
+ - Delivery: announce/last/bestEffort
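The orphan-matching step is a set difference between subscribed CAs and held-position CAs. A minimal sketch under assumed data shapes (plain sets of contract address strings), not the actual tool response format:

```python
def find_orphan_subscriptions(position_cas: set[str], subscribed_cas: set[str]) -> set[str]:
    """Subscriptions whose token is no longer held are orphans to unsubscribe."""
    return subscribed_cas - position_cas

# Hypothetical state: one held position, two active subscriptions
orphans = find_orphan_subscriptions({"CA1"}, {"CA1", "CA2"})
# orphans == {"CA2"}
```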
107
129
 
108
130
  ---
109
131
 
110
- ## Job: `meta_rotation_analysis`
132
+ ## Job: `daily_performance_report`
111
133
 
112
- **Schedule:** Every 3 hours, offset by 30 min (`30 */3 * * *`)
134
+ **Schedule:** Daily at 04:00 UTC (`0 4 * * *`) — 1 run/day
113
135
 
114
- **Purpose:** Analyze which narrative metas are hot, cooling, or dead.
136
+ **Purpose:** Comprehensive daily performance summary.
115
137
 
116
- **Workflow:**
117
- 1. Review recent scan results and alpha signals for narrative patterns
118
- 2. Group tokens by narrative cluster (AI, animals, political, culture, etc.)
119
- 3. Compare volume/momentum trends across clusters
120
- 4. Identify hot metas (rising volume) and cooling metas (declining volume)
121
- 5. Log observations with tag `meta_rotation`
138
+ **Gating:** Only if trading activity in past 24 hours. Check via `solana_journal_summary`.
122
139
 
123
- **Tools:** `solana_memory_search`, `solana_alpha_history`, `solana_memory_write`
140
+ **Tools:** `solana_journal_summary`, `solana_positions`, `solana_capital_status`, `solana_trades`, `solana_strategy_state`, `solana_memory_search`, `solana_memory_write`
141
+
142
+ **Outputs:** Memory entry with: daily PnL, win/loss count, win rate, best/worst trades, avg hold time, capital utilization, regime summary, lessons. Tag: `daily_report`.
143
+
144
+ **Configuration:**
145
+ - Model: Sonnet (judgment — compile narrative report with recommendations)
146
+ - Thinking: off
147
+ - lightContext: **off** (needs complete workspace context for comprehensive report)
148
+ - Delivery: announce/**telegram**
124
149
 
125
150
  ---
126
151
 
127
152
  ## Job: `intelligence_lab_eval`
128
153
 
129
- **Schedule:** Every 12 hours (`0 */12 * * *`)
154
+ **Schedule:** Daily at 16:00 UTC (`0 16 * * *`) — 1 run/day
130
155
 
131
156
  **Purpose:** Run intelligence lab evaluation — compute model accuracy, compare champion vs challenger, generate replay reports.
132
157
 
133
- **Workflow:**
134
- 1. Check candidate dataset size via `solana_candidate_get`
135
- 2. Run evaluation report: `solana_evaluation_report`
136
- 3. If challenger model exists, run replay: `solana_replay_run` + `solana_replay_report`
137
- 4. If challenger outperforms champion, promote: `solana_model_promote`
138
- 5. Refresh source/deployer trust scores: `solana_source_trust_refresh`, `solana_deployer_trust_refresh`
158
+ **Tools:** `solana_candidate_get`, `solana_evaluation_report`, `solana_model_registry`, `solana_replay_run`, `solana_replay_report`, `solana_model_promote`, `solana_memory_write`
159
+
160
+ **Workflow:** Check candidate count (gate on 20+) → evaluation report → check for challengers → replay eval if challenger exists → promote if >5% F1 improvement → write tag 'intelligence_lab'.
139
161
 
140
- **Tools:** `solana_candidate_get`, `solana_evaluation_report`, `solana_replay_run`, `solana_replay_report`, `solana_model_promote`, `solana_source_trust_refresh`, `solana_deployer_trust_refresh`, `solana_memory_write`
162
+ **Configuration:**
163
+ - Model: Sonnet (deep reasoning — model comparison, promotion decisions)
164
+ - Thinking: **on** (requires careful reasoning about statistical significance)
165
+ - lightContext: **off** (needs full model registry context and evaluation history)
166
+ - Delivery: none
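The ">5% F1 improvement" promotion gate can be sketched as below. This is an assumption-laden illustration, not the package's `solana_model_promote` criteria; in particular, whether the 5% is relative to the champion's F1 (assumed here) or an absolute difference is not stated in this doc.

```python
def should_promote(champion_f1: float, challenger_f1: float,
                   min_improvement: float = 0.05) -> bool:
    """Promote the challenger only if it beats the champion by >5% relative F1."""
    if champion_f1 <= 0:
        return challenger_f1 > 0     # degenerate champion: any signal wins
    return (challenger_f1 - champion_f1) / champion_f1 > min_improvement

# 0.60 -> 0.64 is a ~6.7% relative gain: promote
promote = should_promote(0.60, 0.64)
```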
141
167
 
142
168
  ---
143
169
 
144
170
  ## Job: `memory_trim`
145
171
 
146
- **Schedule:** Daily at 03:00 UTC (`0 3 * * *`)
172
+ **Schedule:** Daily at 03:00 UTC (`0 3 * * *`) — 1 run/day
147
173
 
148
174
  **Purpose:** Smart memory compaction — trims local memory footprint to the last 2 days while preserving all critical data (positions, rules, identity, strategy weights, permanent learnings).
149
175
 
@@ -166,3 +192,45 @@ At start of every cron job, check whether sufficient new data exists since last
166
192
  4. Log results via `solana_memory_write` with tag `memory_trim`
167
193
 
168
194
  **Tools:** `solana_memory_trim`, `solana_memory_write`
195
+
196
+ **Configuration:**
197
+ - Model: Haiku (mechanical — prune old entries, simple retention logic)
198
+ - Thinking: off
199
+ - lightContext: on
200
+ - Delivery: none
201
+
202
+ ---
203
+
204
+ ## Job: `balance_watchdog`
205
+
206
+ **Schedule:** Every 2 hours (`0 */2 * * *`) — 12 runs/day
207
+
208
+ **Purpose:** Context snapshot drift correction — compare real wallet/position state against believed state, correct mismatches.
209
+
210
+ **Tools:** `solana_capital_status`, `solana_positions`, `solana_context_snapshot_read`, `solana_context_snapshot_write`
211
+
212
+ **Workflow:** Read real state (capital + positions) → read believed state (context snapshot) → compare → if mismatch: write corrected snapshot, summarize changes → if match: reply WATCHDOG_OK.
213
+
214
+ **Configuration:**
215
+ - Model: Haiku (mechanical — compare real vs believed state, correct drift)
216
+ - Thinking: off
217
+ - lightContext: on
218
+ - Delivery: announce/**telegram**
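The compare-and-correct core of this job reduces to a state diff. A minimal sketch assuming both states are plain dicts (the real `solana_context_snapshot_read` payload shape is not specified here):

```python
def check_drift(real: dict, believed: dict) -> tuple[bool, dict]:
    """Compare real wallet/position state against the believed snapshot.

    Returns (drift_detected, snapshot_to_keep). On a match the job would
    reply WATCHDOG_OK; on a mismatch it would write the corrected snapshot.
    """
    if real == believed:
        return False, believed       # states agree: WATCHDOG_OK, no write
    return True, dict(real)          # drift: correct snapshot from real state

drift, snapshot = check_drift(real={"sol": 2.0}, believed={"sol": 1.0})
# drift is True; snapshot is the corrected {"sol": 2.0}
```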
219
+
220
+ ---
221
+
222
+ ## Schedule Summary
223
+
224
+ | # | Job ID | Cron Expression | Runs/Day | Model | Thinking | lightContext | Delivery |
225
+ |---|--------|----------------|----------|-------|----------|-------------|----------|
226
+ | 1 | `alpha-scan` | `0 */3 * * *` | 8 | Sonnet | off | on | announce/last |
227
+ | 2 | `portfolio-health` | `0 */4 * * *` | 6 | Sonnet | off | on | announce/last |
228
+ | 3 | `trust-refresh` | `0 */8 * * *` | 3 | Haiku | off | on | none |
229
+ | 4 | `meta-rotation` | `30 */8 * * *` | 3 | Sonnet | off | on | announce/last |
230
+ | 5 | `strategy-evolution` | `0 6 * * *` | 1 | Sonnet | **on** | **off** | announce/last |
231
+ | 6 | `subscription-cleanup` | `15 */8 * * *` | 3 | Haiku | off | on | announce/last |
232
+ | 7 | `daily-performance-report` | `0 4 * * *` | 1 | Sonnet | off | **off** | announce/telegram |
233
+ | 8 | `intelligence-lab-eval` | `0 16 * * *` | 1 | Sonnet | **on** | **off** | none |
234
+ | 9 | `memory-trim` | `0 3 * * *` | 1 | Haiku | off | on | none |
235
+ | 10 | `balance-watchdog` | `0 */2 * * *` | 12 | Haiku | off | on | announce/telegram |
236
+ | | **Total** | | **39** | | | | |
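The Runs/Day column can be cross-checked mechanically. A small Python sketch (not part of the package) that evaluates the simple cron forms used above — minute/hour fields of shape `N` or `*/N` with `*` day fields:

```python
def runs_per_day(cron: str) -> int:
    """Count daily fire times for crons of the form 'M H * * *' with */N steps."""
    minute, hour, *_ = cron.split()

    def expand(field: str, limit: int) -> list[int]:
        if field == "*":
            return list(range(limit))
        if field.startswith("*/"):           # step value, e.g. */8 -> 0,8,16
            return list(range(0, limit, int(field[2:])))
        return [int(field)]                  # fixed value, e.g. 6

    return len(expand(minute, 60)) * len(expand(hour, 24))

jobs = ["0 */3 * * *", "0 */4 * * *", "0 */8 * * *", "30 */8 * * *",
        "0 6 * * *", "15 */8 * * *", "0 4 * * *", "0 16 * * *",
        "0 3 * * *", "0 */2 * * *"]
total = sum(runs_per_day(c) for c in jobs)
# total == 39, matching the table's Total row
```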