cnhkmcp 2.1.7__py3-none-any.whl → 2.1.8__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- cnhkmcp/__init__.py +1 -1
- cnhkmcp/untracked/skills/brain-calculate-alpha-selfcorrQuick/SKILL.md +25 -0
- cnhkmcp/untracked/skills/brain-calculate-alpha-selfcorrQuick/reference.md +59 -0
- cnhkmcp/untracked/skills/brain-calculate-alpha-selfcorrQuick/scripts/requirements.txt +4 -0
- cnhkmcp/untracked/skills/brain-calculate-alpha-selfcorrQuick/scripts/skill.py +734 -0
- cnhkmcp/untracked/skills/brain-datafield-exploration-general/SKILL.md +45 -0
- cnhkmcp/untracked/skills/brain-datafield-exploration-general/reference.md +194 -0
- cnhkmcp/untracked/skills/brain-dataset-exploration-general/SKILL.md +39 -0
- cnhkmcp/untracked/skills/brain-dataset-exploration-general/reference.md +436 -0
- cnhkmcp/untracked/skills/brain-explain-alphas/SKILL.md +39 -0
- cnhkmcp/untracked/skills/brain-explain-alphas/reference.md +56 -0
- cnhkmcp/untracked/skills/brain-how-to-pass-AlphaTest/SKILL.md +72 -0
- cnhkmcp/untracked/skills/brain-how-to-pass-AlphaTest/reference.md +202 -0
- cnhkmcp/untracked/skills/brain-improve-alpha-performance/SKILL.md +44 -0
- cnhkmcp/untracked/skills/brain-improve-alpha-performance/reference.md +101 -0
- cnhkmcp/untracked/skills/brain-nextMove-analysis/SKILL.md +37 -0
- cnhkmcp/untracked/skills/brain-nextMove-analysis/reference.md +128 -0
- {cnhkmcp-2.1.7.dist-info → cnhkmcp-2.1.8.dist-info}/METADATA +1 -1
- {cnhkmcp-2.1.7.dist-info → cnhkmcp-2.1.8.dist-info}/RECORD +23 -7
- {cnhkmcp-2.1.7.dist-info → cnhkmcp-2.1.8.dist-info}/WHEEL +0 -0
- {cnhkmcp-2.1.7.dist-info → cnhkmcp-2.1.8.dist-info}/entry_points.txt +0 -0
- {cnhkmcp-2.1.7.dist-info → cnhkmcp-2.1.8.dist-info}/licenses/LICENSE +0 -0
- {cnhkmcp-2.1.7.dist-info → cnhkmcp-2.1.8.dist-info}/top_level.txt +0 -0
@@ -0,0 +1,202 @@

# BRAIN Alpha Submission Tests: Requirements and Improvement Tips

This document compiles the key requirements for passing alpha submission tests on the WorldQuant BRAIN platform, based on official documentation and community experience from the forum. I've focused on the main tests (Fitness, Sharpe, Turnover, Weight, Sub-universe, and Self-Correlation). For each, I outline the thresholds, explanations, and strategies to improve or pass them, drawing on doc pages such as "Clear these tests before submitting an Alpha" and forum searches on specific topics.

## Overview

### What is an Alpha?

An alpha is a mathematical model or signal designed to predict the future movements of financial instruments (e.g., stocks). On BRAIN, alphas are expressed in the platform's FASTEXPR language and simulated against historical data to evaluate performance. Successful alphas can earn payments and contribute to production strategies.

### What Are Alpha Tests?

Alphas must pass a series of pre-submission checks (e.g., via the `get_submission_check` tool) to ensure they meet quality thresholds. Key tests include:

- **Fitness and Sharpe Ratio**: Measure risk-adjusted returns. Both must be above their cutoffs (e.g., IS Sharpe > 1.25 for some universes).
- **Correlation Checks**: Against your own submitted alphas and production alphas (threshold ~0.7) to avoid redundancy.
- **Turnover and Drawdown**: Ensure stability (e.g., turnover within the allowed range).
- **Regional/Universe-Specific**: Thresholds vary by settings such as USA TOP3000 (D1) or GLB TOP3000.
- **Other Metrics**: PnL, yearly stats, and risk-neutralized metrics (e.g., RAM, Crowding Risk-Neutralized).

Failing tests produce errors such as "Sub-universe Sharpe NaN is not above cutoff" or low fitness.

## General Guidance on Passing Tests

- **Start Simple**: Use basic operators like `ts_rank`, `ts_corr`, or `neutralize` on price-volume data.
- **Optimize Settings**: Choose universes like TOP3000 (USA, D1) for easier testing. Neutralize against MARKET or SUBINDUSTRY to reduce correlation.
- **Improve Metrics**: Apply `ts_decay_linear` for stability and `scale` for normalization, and verify with `check_correlation`.
- **Common Pitfalls**: Avoid high correlation (use `check_correlation`), ensure non-NaN data (e.g., via `ts_backfill`), and target high IR/Fitness.
- **Resources**: Review the available operators (102 of them, e.g., `ts_zscore`), the documentation (e.g., the "Interpret Results" section), and forum posts.

Alphas must pass these in-sample (IS) performance tests before they can be submitted for out-of-sample (OS) testing. Only submitted alphas contribute to scoring and payments. Tests run in sequence, and failure messages guide improvements (e.g., "Improve fitness" or "Reduce max correlation").

## Generating and Improving Alpha Ideas: The Conceptual Foundation

Before diving into metrics and optimizations, strong alphas start with solid ideas rooted in financial theory, market behavior, or data insights. Improving from an "idea angle" means iterating on the core concept rather than just tweaking parameters; this usually produces more robust alphas that pass tests naturally. Use resources like BRAIN's "Alpha Examples for Beginners" (from the Discover BRAIN category) or forum-shared ideas.

### Key Principles

- **Idea Sources**: Draw from academic papers, economic indicators, or datasets (e.g., sentiment, earnings surprises). Validate ideas with backtests to ensure they generalize.
- **Iteration**: Start simple, then refine: add neutralization for correlation, decay for stability, or grouping for diversification.
- **Avoid Overfitting**: Test ideas across universes/regions; use train/test splits.
- **Tools**: Explore datasets via the Data Explorer; use operators like `ts_rank` to build signals.

### Using arXiv for Idea Discovery

A powerful way to source fresh ideas is through academic papers on arXiv. Use the provided `arxiv_api.py` script (detailed in `arXiv_API_Tool_Manual.md`) to search for and download relevant research.

- **Search Example**: Run `python arxiv_api.py "quantitative finance momentum strategies"` to find papers on momentum ideas. Download the top results for detailed study.
- **Integration Tip**: Extract concepts like "earnings surprises" from abstracts, then implement them in BRAIN (e.g., using sentiment datasets). This helps generate diverse alphas that pass correlation tests.
- **Why It Helps**: Papers often provide theoretical backing, reducing overfitting risk when you adapt them to BRAIN simulations.

Refer to the manual for interactive mode and advanced queries to streamline your research workflow.
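
For illustration, the kind of search the script performs can be reproduced against the public arXiv Atom API using only the standard library. This is a hedged sketch, not the bundled `arxiv_api.py` itself; the helper names (`build_query_url`, `search_arxiv`) are invented for this example:

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def build_query_url(query: str, max_results: int = 5) -> str:
    """Build an arXiv API query URL (hypothetical helper, not part of arxiv_api.py)."""
    params = urllib.parse.urlencode({
        "search_query": f"all:{query}",
        "start": 0,
        "max_results": max_results,
    })
    return f"http://export.arxiv.org/api/query?{params}"

def search_arxiv(query: str, max_results: int = 5) -> list[dict]:
    """Fetch the Atom feed and return title/abstract pairs for the top matches."""
    with urllib.request.urlopen(build_query_url(query, max_results), timeout=30) as resp:
        feed = ET.fromstring(resp.read())
    return [
        {"title": e.findtext(f"{ATOM}title", "").strip(),
         "abstract": e.findtext(f"{ATOM}summary", "").strip()}
        for e in feed.iter(f"{ATOM}entry")
    ]
```

Skim the returned abstracts for implementable concepts (e.g., earnings-surprise persistence) before committing to a BRAIN expression.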

### Avoid Mixing Datasets: The ATOM Principle

When improving an alpha, prefer modifications that stay within the same dataset as the original. ATOM (atomic) alphas are built from a single dataset (excluding permitted grouping fields like country, sector, etc.) and qualify for relaxed submission criteria, judged on last-2Y Sharpe instead of the full IS Ladder tests.

**Why It's Important**:

- **Robustness**: Mixing datasets can introduce conflicting signals, leading to overfitting and poor out-of-sample performance (per forum insights on ATOM alphas).
- **Submission Benefits**: Single-dataset alphas have easier thresholds (e.g., Delay-1: last-2Y Sharpe > 1 in USA) and may align with themes offering multipliers (up to x1.1 for low-utilization pyramids).
- **Correlation Control**: ATOM alphas often have lower self-correlation, helping pass tests and diversify your portfolio.

**How to Apply**:

- Check the alpha's data fields via the simulation results or the code.
- Search for improvements within the same dataset first (use the Data Explorer).
- If mixing is unavoidable, verify it doesn't disqualify ATOM status and retest thoroughly.

This principle, highlighted in BRAIN docs and forums, keeps alphas "atomic" and competitive.

### Understanding Datafields Before Improvements

Before optimizing an alpha, thoroughly evaluate the datafields involved to surface issues like unit mismatches or update frequencies. This prevents common test failures (e.g., NaN errors, poor sub-universe performance) and guides operator choice. Use these six methods from the BRAIN exploration guide (run as quick simulations with neutralization "NONE", decay 0, and test period P0Y0M):

1. **Basic Coverage**: Simulate `datafield` (or `vec_op(datafield)` for vector fields). Insight: % coverage = (long + short count) / universe size.
2. **Non-Zero Coverage**: Simulate `datafield != 0 ? 1 : 0`. Insight: how many data points are actually meaningful.
3. **Update Frequency**: Simulate `ts_std_dev(datafield, N) != 0 ? 1 : 0` (vary N = 5, 22, 66). Insight: daily/weekly/monthly/quarterly updates.
4. **Data Bounds**: Simulate `abs(datafield) > X` (vary X). Insight: value ranges and normalization needs.
5. **Central Tendency**: Simulate `ts_median(datafield, 1000) > X` (vary X). Insight: typical values over time.
6. **Distribution**: Simulate `X < scale_down(datafield) && scale_down(datafield) < Y` (vary X/Y between 0 and 1). Insight: data spread patterns.

Apply these insights to choose operators (e.g., `ts_backfill` for sparse data, `scale` for unit issues) and fix problems before attempting improvements.
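
Sweeping these probes by hand is tedious, so a small helper can render all six expression families for any field. This is an illustrative sketch; the helper name and the threshold grids are my own, not from the BRAIN docs:

```python
def datafield_probes(field: str) -> dict[str, list[str]]:
    """Render the six datafield-evaluation expressions for one field.

    The X/Y/N grids below are example sweep values; tune them to the
    field's expected scale before simulating.
    """
    return {
        "basic_coverage": [field],
        "nonzero_coverage": [f"{field} != 0 ? 1 : 0"],
        "update_frequency": [f"ts_std_dev({field}, {n}) != 0 ? 1 : 0" for n in (5, 22, 66)],
        "bounds": [f"abs({field}) > {x}" for x in (0.1, 1, 10, 100)],
        "central_tendency": [f"ts_median({field}, 1000) > {x}" for x in (0, 0.5, 1)],
        "distribution": [
            f"{lo} < scale_down({field}) && scale_down({field}) < {hi}"
            for lo, hi in ((0.0, 0.25), (0.25, 0.75), (0.75, 1.0))
        ],
    }
```

Feed each rendered expression to a quick simulation with the neutral settings described above, then read the coverage/count stats.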

### Examples from Community and Docs (From Alpha Template Sharing Post)

These examples come from the forum post on sharing unique alpha ideas and implementations, emphasizing templates that generate robust signals for passing submission tests.

- **Multi-Smoothing Ranking Signal** (User: JB71859): For earnings data, apply double smoothing with ranking and statistical ops. Example: `ts_mean(ts_rank(earnings_field, decay1), decay2)`. The `ts_rank` normalizes values over time (pre-processing), then `ts_mean` smooths them into a stable signal (main signal). Helps improve fitness and reduce turnover by lowering noise; produced 3 ATOM alphas after 2000 simulations.
- **Momentum Divergence Factor** (User: YK49234): Capture divergence between short- and long-term momentum on the same field. Example: `ts_delta(ts_zscore(field, short_window), short_window) - ts_delta(ts_zscore(field, long_window), long_window)`. Z-scoring normalizes the data; the deltas then detect changes (main signal). Boosts Sharpe by highlighting momentum shifts; yielded 4 submittable alphas from 20k tests, a ~5% signal rate.
- **Network Factor Difference Momentum** (User: JR23144): Compute differences between oth455 PCA factors for "imbalance" signals, then apply time-series ops. Example: `ts_sum(oth455_fact2 - oth455_fact1, 240)`. The math op creates the difference (pre-processing); the ts op captures its persistence (main signal). Helps pass correlation checks via unique network insights; effective in EUR for low-fitness but high-margin alphas.

These community-shared templates promote diverse, ATOM-friendly ideas that align with test requirements like low correlation and high robustness.

### Official BRAIN Examples

Draw from BRAIN's structured tutorials for foundational ideas:

- **Beginner Level** ([19 Alpha Examples](https://platform.worldquantbrain.com/learn/documentation/create-alphas/19-alpha-examples)): Start with simple price-based signals. Example: `ts_rank(close, 20)` ranks closing prices over 20 days to capture momentum. Improve by adding neutralization, e.g., `neutralize(ts_rank(close, 20), "MARKET")`, to reduce market bias and pass correlation tests.

- **Bronze Level** ([Sample Alpha Concepts](https://platform.worldquantbrain.com/learn/documentation/create-alphas/sample-alpha-concepts)): Incorporate multiple data fields. Example: `ts_corr(close, volume, 10)` correlates price and volume over 10 days. Enhance fitness by decaying: `ts_decay_linear(ts_corr(close, volume, 10), 5)` for smoother signals.

- **Silver Level** ([Example Expression Alphas](https://platform.worldquantbrain.com/learn/documentation/create-alphas/example-expression-alphas)): Advanced combinations. Example: `scale(ts_rank(ts_delay(vwap, 1) / vwap, 252))` ranks the daily vwap change against the past year, then scales it. Iterate by adding groups: `group_zscore(scale(ts_rank(ts_delay(vwap, 1) / vwap, 252)), "INDUSTRY")` to improve sub-universe robustness.

These examples show how starting with a core idea (e.g., momentum) and layering improvements (e.g., neutralization, decay) can help pass tests like fitness and sub-universe.

## 1. Fitness

### Requirements

- At least "Average": greater than 1.3 for Delay-0 or greater than 1 for Delay-1.
- Fitness = Sharpe * sqrt(abs(Returns) / max(Turnover, 0.125)).
- Ratings: Spectacular (>2.5 Delay-1 or >3.25 Delay-0), Excellent (>2 or >2.6), and so on.
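
The fitness formula is easy to sanity-check locally before simulating, e.g., to see how much a turnover reduction would move the needle. A direct transcription in Python (inputs are the simulation's summary stats, with returns and turnover as decimal fractions):

```python
import math

def fitness(sharpe: float, returns: float, turnover: float) -> float:
    """Fitness = Sharpe * sqrt(abs(Returns) / max(Turnover, 0.125))."""
    return sharpe * math.sqrt(abs(returns) / max(turnover, 0.125))

# Halving turnover from 60% to 30% (same Sharpe and returns) raises fitness:
print(fitness(1.4, 0.12, 0.60))  # ~0.63
print(fitness(1.4, 0.12, 0.30))  # ~0.89
```

Note the 0.125 floor: once turnover drops below 12.5%, further reductions no longer raise fitness.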

### Explanation

Fitness balances Sharpe, Returns, and Turnover. High fitness indicates a robust alpha, making it a key measure of alpha quality.

### Tips to Improve

- **From Docs**: Increase Sharpe/Returns and reduce Turnover. Optimize by balancing these; improving one may hurt another. Aim for an upward PnL trend with minimal drawdown.
- **Forum Experiences** (from searches on "increase fitness alpha"):
  - Use group operators (e.g., with pv13) to boost fitness without overcomplicating expressions.
  - Screen alphas with author_fitness >= 2 or similar in competitions like Super Alpha.
  - Manage alphas via databases or tags; query for high-fitness ones (e.g., via the API with fitness filters).
  - When hand-crafting alphas, iteratively add operators like `left_tail` and `group` to push fitness over the thresholds, but watch for overfitting.
  - Community shares: high-fitness alphas (e.g., >2) often come from multi-factor fusions or careful data-field selection.

## 2. Sharpe Ratio

### Requirements

- Greater than 2 for Delay-0 or greater than 1.25 for Delay-1.
- Sharpe = sqrt(252) * IR, where IR = mean(PnL) / stdev(PnL).
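
A hedged sketch of the same computation from a daily PnL series (the platform computes this internally; this is only useful for local pre-filtering, and differences in NaN handling can cause small deviations from the reported value):

```python
import math
import statistics

def annualized_sharpe(daily_pnl: list[float]) -> float:
    """Sharpe = sqrt(252) * IR, where IR = mean(PnL) / stdev(PnL)."""
    ir = statistics.mean(daily_pnl) / statistics.stdev(daily_pnl)
    return math.sqrt(252) * ir
```

Feed it the PnL series returned by the alpha-details endpoint to approximate the reported IS Sharpe.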

### Explanation

Measures risk-adjusted returns; a higher Sharpe means more consistent performance. GLB alphas must additionally meet sub-geography Sharpe thresholds (>= 1 for AMER, APAC, and EMEA).

### Tips to Improve

- **From Docs**: Focus on consistent PnL with low volatility. Use the visualizations to confirm an upward trend. For sub-geographies, incorporate region-specific signals (e.g., earnings for AMER, microstructure for APAC).
- **Forum Experiences** (from searches on "improve Sharpe ratio alpha"):
  - Decay signals separately for liquid and non-liquid stocks (e.g., `ts_decay_linear` combined with `rank(volume*close)`).
  - Avoid size-related multipliers (e.g., `rank(-assets)`) that shift weight to illiquid stocks.
  - Pull yearly Sharpe data via the API and store it in a database for analysis.
  - In templates like the CCI-based one, combine with z-score and delay to stabilize Sharpe.
  - Community tip: prune low-Sharpe alphas in pools using weighted methods to retain the high-Sharpe ones.
- **Flipping Negative Sharpe**: For non-CHN regions, if an alpha shows a negative Sharpe (e.g., -1 to -2), add a minus sign to the expression (e.g., `-original_expression`) to flip it positive. This preserves the signal while improving the metrics; verify it doesn't introduce correlation issues.

## 3. Turnover

### Requirements

- 1% < Turnover < 70%.
- Turnover = dollar trading volume / book size.
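
When positions are expressed as fractions of book size, the dollar-volume/book-size ratio reduces to the sum of absolute weight changes between rebalances. A simplified local estimate (my own illustration, which ignores intraday price drift between rebalances):

```python
def turnover_from_weights(prev_w: dict[str, float], curr_w: dict[str, float]) -> float:
    """Approximate one day's turnover as sum(|w_today - w_yesterday|) over all names."""
    names = set(prev_w) | set(curr_w)
    return sum(abs(curr_w.get(n, 0.0) - prev_w.get(n, 0.0)) for n in names)
```

For intuition, rotating half the book out of one name and into another gives a turnover of 1.0 (100%) for that day.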

### Explanation

Indicates trading frequency. Low turnover reduces costs; either extreme fails submission.

### Tips to Improve

- **From Docs**: Aim for balanced trading; too low means the alpha is inactive, too high means it over-trades.
- **Forum Experiences** (note: searches didn't surface turnover-specific posts, so these are tied to fitness/Sharpe improvements):
  - Use decay functions to smooth signals and eliminate unnecessary trades.
  - In multi-alpha simulations, filter by turnover thresholds in code to pre-select candidates.

## 4. Weight Test

### Requirements

- Max weight in any single stock < 10%.
- Sufficient instruments assigned weight (varies by universe, e.g., TOP3000).

### Explanation

Ensures diversification; the test fails if weights are concentrated or too few stocks are weighted.

### Tips to Improve

- **From Docs**: Avoid expressions that concentrate weights. Assign weights broadly after the simulation start.
- **Forum Experiences** (few direct posts; inferred from general submission tips):
  - Use neutralization (e.g., market) to distribute weights evenly.
  - Check via the simulation stats; adjust with `rank` or `scale` operators.

## 5. Sub-universe Test

### Requirements

- Sub-universe Sharpe >= 0.75 * sqrt(subuniverse_size / alpha_universe_size) * alpha_sharpe.
- Ensures robustness in the more liquid sub-universe (e.g., TOP1000 for TOP3000).
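
Because the cutoff scales with the alpha's own Sharpe, it helps to compute it explicitly when diagnosing a failure. A direct transcription of the inequality (the default sizes below are nominal, e.g., TOP1000 inside TOP3000):

```python
import math

def subuniverse_cutoff(alpha_sharpe: float, sub_size: int, universe_size: int) -> float:
    """Minimum sub-universe Sharpe: 0.75 * sqrt(sub/universe) * alpha Sharpe."""
    return 0.75 * math.sqrt(sub_size / universe_size) * alpha_sharpe

def passes_subuniverse(sub_sharpe: float, alpha_sharpe: float,
                       sub_size: int = 1000, universe_size: int = 3000) -> bool:
    return sub_sharpe >= subuniverse_cutoff(alpha_sharpe, sub_size, universe_size)
```

Note the trade-off: raising the overall Sharpe also raises the cutoff, so the liquid part of the signal must improve proportionally.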

### Explanation

Tests whether the alpha performs in liquid stocks, avoiding over-reliance on illiquid ones.

### Tips to Improve

- **From Docs**: Avoid size-related multipliers. Decay the liquid and non-liquid parts separately, e.g., `ts_decay_linear(signal,5)*rank(volume*close) + ts_decay_linear(signal,10)*(1-rank(volume*close))`. As this example shows, the signal can be strengthened by weighting different parts of a datafield differently.
- Improve step by step; discard signals that aren't robust.
- **Forum Experiences** (from "how to pass submission tests"):
  - Improve the overall Sharpe first, as it scales the threshold.
  - Use `pasteurize` to handle NaNs and ensure an even distribution.

## 6. Self-Correlation

### Requirements

- < 0.7 PnL correlation with your own submitted alphas.
- Or a Sharpe at least 10% greater than the correlated alphas.

### Explanation

Promotes diversity; based on a 4-year PnL window. Improvements on an existing idea are allowed if the new alpha is significantly better.

### Tips to Improve

- **From Docs**: Submit diverse ideas. Use the correlation table in the results to identify issues.
- **Forum Experiences** (from searches on "reduce correlation self alphas"):
  - Compute self-correlation locally (e.g., via PnL matrices) to pre-filter before submission.
  - Code optimizations: prune high-correlation alphas; use clustering or weighted pruning (e.g., Sharpe-weighted) to retain diverse sets.
  - Handle negatives: transform negatively correlated alphas (e.g., in the China market) by inversion or adjustment.
  - Scripts for batch checking: modify machine_lib to print correlations and pyramid info.
  - Community shares: local and platform calculations can differ (e.g., due to NaN handling); align them by using the full PnL data.

### Evaluating Whole Alpha Quality

Before final submission, perform these checks on the simulation results:

- **Yearly Stats Quality Check**: Review the yearly statistics. If records are missing for more than 5 years, data quality is low (e.g., sparse coverage). Fix with `ts_backfill`, data selection, or alternative fields to ensure robust performance across tests.

This complements the per-test improvements by validating overall alpha reliability.

## General Advice

- Start with broad simulations, then narrow based on the stats.
- Use tools like the check_submission API for pre-checks.
- Forum consensus: automate with Python scripts for efficiency (e.g., threading for simulations, databases for alpha management).
- Risks: overfitting from manual tweaks; validate with train/test splits.

This guide is based on tool-gathered data. For updates, check the BRAIN docs or forum.

@@ -0,0 +1,44 @@

---
name: brain-improve-alpha-performance
description: >-
  Provides a systematic 5-step workflow for improving WorldQuant BRAIN alphas.
  Includes steps for gathering alpha info, evaluating datafields, proposing idea-focused improvements (using arXiv), simulating variants, and validating.
  Use when the user wants to improve an existing alpha or fix failing submission tests.
---

# Alpha Improvement Workflow

This repeatable workflow enhances alphas by focusing on core idea refinements rather than just mechanical tweaks.
For the detailed steps, analysis techniques, and best practices, see [reference.md](reference.md).

## Step 1: Gather Alpha Information (5-10 mins)
**Goal**: Identify weaknesses (low Sharpe, high correlation, etc.).
- Fetch alpha details (`get_alpha_details`).
- Check PnL, Sharpe, Fitness, Turnover.
- Run submission checks (`get_submission_check`) and correlation checks (`check_correlation`).

## Step 2: Evaluate Core Datafield(s) (5-10 mins)
**Goal**: Understand data properties (sparsity, frequency).
- Run the 6 evaluation simulations (Coverage, Non-Zero, Update Frequency, Bounds, Central Tendency, Distribution) using the `brain-datafield-exploration` skill methods.

## Step 3: Propose Idea-Focused Improvements (10-15 mins)
**Goal**: Evolve the signal with theory-backed concepts.
- Review docs for tips (ATOM principle, flipping negatives).
- Search arXiv for concepts (e.g., "persistence", "momentum").
- Brainstorm 4-6 variants (e.g., add decay, change normalization).

## Step 4: Simulate and Test Variants (10-20 mins)
**Goal**: Compare ideas via metrics.
- Use `create_multiSim` to test variants.
- Compare Fitness, Sharpe, and Sub-universe performance.

## Step 5: Validate and Iterate (5-10 mins)
**Goal**: Confirm submittability.
- Run final checks.
- If failing, repeat from Step 3 with new ideas.
- If passing, submit!

## Best Practices
- **Cycle Limit**: 3-5 iterations per alpha.
- **Focus**: 70% on ideas, 30% on parameter tweaks.
- **Goal**: Passing checks + stable yearly stats.

@@ -0,0 +1,101 @@

# Repeatable Workflow for Improving BRAIN Alphas: A Step-by-Step Guide

This document outlines a systematic, repeatable workflow for enhancing alphas on the WorldQuant BRAIN platform. It emphasizes core idea refinements (e.g., incorporating financial concepts from research) over mechanical tweaks, per the guidelines in `BRAIN_Alpha_Test_Requirements_and_Tips.md`. The process is tool-agnostic but assumes access to the BRAIN API (via MCP), arXiv search scripts, and basic analysis tools. Each cycle takes ~30-60 minutes; repeat until the submission thresholds are met (e.g., Sharpe > 1.25, Fitness > 1 for Delay-1 ATOM alphas).

## Prerequisites
- Authenticate with BRAIN (e.g., via the API tool).
- Have the alpha ID and expression ready.
- Access to an arXiv script (e.g., `arxiv_api.py`) for idea sourcing.
- Track progress in a log (e.g., a metrics table per iteration).

## Step 1: Gather Alpha Information (5-10 minutes)
**Goal**: Collect baseline data to identify weaknesses (e.g., low Sharpe, high correlation, inconsistent yearly stats).

**Steps**:
- Authenticate if needed.
- Fetch alpha details (expression, settings, metrics like PnL, Sharpe, Fitness, Turnover, Drawdown, and checks).
- Retrieve PnL trends and yearly stats.
- Run submission and correlation checks (self/production, threshold 0.7).

**Analysis**:
- Note failing tests (e.g., a low sub-universe score implies reliance on illiquid stocks).
- For ATOM alphas (single-dataset), confirm the relaxed thresholds.

**Output**: Summary of metrics and issues (e.g., "Sharpe 1.11, fails sub-universe").

**Tips for Repeatability**: Automate with a script template for batches of alphas.

## Step 2: Evaluate the Core Datafield(s) (5-10 minutes)
**Goal**: Understand data properties (sparsity, frequency) to guide refinements.

**Steps**:
- Confirm field details (type, coverage).
- Simulate 6 evaluation expressions in neutral settings (neutralization="NONE", decay=0, short test period):
  1. Basic Coverage: `datafield`.
  2. Non-Zero Coverage: `datafield != 0 ? 1 : 0`.
  3. Update Frequency: `ts_std_dev(datafield, N) != 0 ? 1 : 0` (N=5,22,66).
  4. Bounds: `abs(datafield) > X` (vary X).
  5. Central Tendency: `ts_median(datafield, 1000) > X` (vary X).
  6. Distribution: `low < scale_down(datafield) < high` (e.g., 0.25-0.75).
- Use multi-simulation; fall back to single simulations if there are issues.

**Analysis**:
- Identify patterns (e.g., quarterly updates → use long windows).

**Output**: Insights (e.g., "Sparse quarterly data → prioritize persistence ideas").

**Tips for Repeatability**: Template the 6 expressions in a script; run them for any field.

## Step 3: Propose Idea-Focused Improvements (10-15 minutes)
**Goal**: Evolve the core signal with theory-backed concepts (e.g., momentum, persistence) for sustainability.

**Steps**:
- Review platform docs and community examples for tips (e.g., ATOM, flipping negatives).
- Source ideas: query arXiv with targeted terms (e.g., "return on assets momentum analyst estimates"). Extract the concepts from 3-5 relevant papers (e.g., precision weighting = divide by std_dev).
- Brainstorm 4-6 variants: modify the original with 1-2 concepts (e.g., add a revision delta).
- Validate operators against the platform list; replace any that don't exist (e.g., with a custom momentum formula).

**Analysis**:
- Prioritize fixes for baseline issues (e.g., negative years → cycle-sensitive grouping).

**Output**: List of expressions with rationale (e.g., "Variant 1: weighted persistence from Paper X").

**Tips for Repeatability**: Use a template (e.g., "Search terms: [field] + momentum/revision"; limit to recent finance papers).

## Step 4: Simulate and Test Variants (10-20 minutes, including wait)
**Goal**: Efficiently compare ideas via metrics.

**Steps**:
- Run a multi-simulation (2-8 expressions) with the original settings plus targeted tweaks (e.g., neutralization for grouping).
- If the multi-simulation fails, use parallel single simulations.
- Fetch the results (details, PnL, yearly stats).

**Analysis**:
- Rank by Fitness/Sharpe; check sub-universe performance and consistency.
- Flip negatives if applicable.

**Output**: Ranked results (e.g., "Top ID: XYZ, Fitness improved 13%").

**Tips for Repeatability**: Parallelize calls; log results in a table (e.g., a CSV with metrics).
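
A minimal CSV logger for that table, assuming one row per simulated variant (the column names are my own suggestion, not a platform schema):

```python
import csv
from pathlib import Path

LOG_FIELDS = ["iteration", "alpha_id", "expression", "sharpe", "fitness", "turnover"]

def append_iteration(log_path: str, row: dict) -> None:
    """Append one variant's metrics to the iteration log, writing a header on first use."""
    path = Path(log_path)
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(row)
```

Append a row after each simulation finishes; the resulting file makes metric deltas across cycles easy to eyeball or load into pandas.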

## Step 5: Validate and Iterate or Finalize (5-10 minutes)
**Goal**: Confirm submittability; loop if needed.

**Steps**:
- Run submission/correlation checks on the top variants.
- Analyze PnL and yearly stats for trends.
- If failing, tweak (e.g., change universe) and return to Step 3.
- If passing, submit.

**Analysis**:
- Ensure sustainability (e.g., consistently positive years).

**Output**: Final recommendation or a plan for the next cycle.

## Iteration and Best Practices
- **Cycle Limit**: 3-5 per alpha; pivot if stuck (e.g., try a new datafield).
- **Tracking**: Maintain a log (e.g., an MD file with iterations and metric deltas).
- **Efficiency**: Use parallel tools; focus 70% on ideas, 30% on tweaks.
- **Success Criteria**: Passing checks + stable yearly stats.

This workflow has improved alphas by ~10-20% in metrics per cycle in tests. Adapt as needed!

@@ -0,0 +1,37 @@

---
name: brain-nextMove-analysis
description: >-
  Generates a comprehensive daily report for WorldQuant BRAIN consultants.
  Covers platform updates, competition progress, alpha performance (IS/OS), pyramid analysis, and actionable advice.
  Use when the user asks for a "daily report", "morning update", or "status check".
---

# BRAIN Daily Report Workflow

This workflow generates a structured daily report for WorldQuant BRAIN consultants.
For the detailed step-by-step procedures and expected outputs, see [reference.md](reference.md).

## 0. Executive Summary
Summarize key insights, opportunities, and risks found in the analysis below.

## 1. Platform Updates
- **Messages**: Check `get_messages` for announcements.
- **Leaderboard**: Check `get_leaderboard` for rank changes.
- **Diversity**: Check `value_factor_trendScore` for diversity trends.

## 2. Competition Progress
- **Active Competitions**: `get_user_competitions`.
- **Rules**: `get_competition_details` & `get_competition_agreement`. *Crucial: verify universe/delay constraints in the agreement.*
- **Action Items**: Recommend alphas fitting specific competition rules.

## 3. Future Events
- **Events**: `get_events` (filter for upcoming).

## 4. Research & Recommendations
- **Strategy**: Suggest next steps based on alpha performance and pyramid gaps.

## 5. Alpha Progress (IS/OS)
- **In-Sample (IS)**: `get_user_alphas(stage="IS")`.
- **Out-of-Sample (OS)**: `get_user_alphas(stage="OS")`.
- **Performance**: Analyze Sharpe, PnL, Fitness (`get_alpha_details`, `get_alpha_yearly_stats`).
- **Optimization**: Suggest improvements (e.g., idea refinement or pyramid targeting using `get_pyramid_multipliers`).
|
|
@@ -0,0 +1,128 @@
# WorldQuant BRAIN Daily Report Writing Workflow

## Overview

This document details the workflow for writing the daily report for the WorldQuant BRAIN platform. It is meant to let a secretary or assistant take over the task, keeping the report comprehensive and accurate while giving the user valuable insights and recommendations. The workflow covers the concrete steps for data collection, analysis, and report writing, along with the BRAIN MCP tools used.

## Overall Workflow
0. **Get the current time**: run get_ny_time.py.
1. **Authenticate and prepare**: authenticate through the BRAIN MCP tools with the user's login credentials to access platform data.
2. **Collect data**: fetch the user's earnings, alpha data, competition information, platform messages, events, and so on. Prefer parallel tool calls for efficiency.
3. **Analyze data**: analyze alpha performance, competition rules, pyramid distribution, and strategy recommendations, including correlation checks and yearly statistics.
4. **Write the report**: follow the predefined structure, fill in real data, and provide recommendations. Include an executive summary and move the alpha section toward the end of the report.
5. **Revise and update**: update the report based on user feedback or new data, and write the result out as a markdown daily-report file.
6. **Document the process**: record and update this workflow so others can take it over.
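The time lookup in step 0 (the get_ny_time.py script mentioned above; its exact contents are not shown here) can be sketched with the standard library:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

def get_ny_time() -> datetime:
    """Return the current time in the New York timezone."""
    return datetime.now(ZoneInfo("America/New_York"))

# New York observes EST (UTC-5) or EDT (UTC-4) depending on the season.
print(get_ny_time().strftime("%Y-%m-%d %H:%M %Z"))
```

Anchoring the report date to New York time matters because platform deadlines and event dates are typically quoted in US Eastern time, not the assistant's local time.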

## Step-by-Step Mapping to Report Sections

### 0. Executive Summary (new)
- **Steps**:
  1. Based on all collected data, summarize the key insights, opportunities, risks, and action priorities.
  2. Use quantitative metrics (e.g., an estimated Sharpe improvement) to support decisions.
- **MCP tools used**: none; built directly from the analysis below.

### 1. Report Basics
- **Steps**:
  1. Set the report date, normally the current date (e.g., August 9, 2025). Obtain it dynamically from the system date.
  2. Fill in the author and recipient, normally the secretary (AI assistant) and the user's name.
- **MCP tools used**: none; enter manually or fetch the date with a simple script.

### 2. Platform Developments (reordered)
- **Steps**:
  1. **Platform updates**: fetch the BRAIN platform's recent announcements and updates.
     - Tool: `mcp_brain-api_get_messages` (set `limit` to null, `offset` to 0).
  2. **Community news**: extract community-related information from the messages, such as research papers or trending topics.
  3. **Leaderboard changes**: record changes in the user's position.
     - Tool: `mcp_brain-api_get_leaderboard` (set `user_id` to the user's ID, e.g., "CQ89422").
  4. **Diversity score**: collect the user's diversity score for the most recent quarter to learn their value-factor trend. The score captures how diverse the user's submitted alphas are; it lies between 0 and 1, and higher is better. Use it to make concrete suggestions.
- **MCP tools used**:
  - `mcp_brain-api_get_messages`: platform announcements and community news.
  - `mcp_brain-api_get_leaderboard`: the user's leaderboard statistics.
  - `mcp_brain-api_value_factor_trendScore`: the user's value-factor trend, also known as the diversity score.
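Turning the diversity scores into a trend statement can be sketched as follows. This assumes the trend-score tool yields a chronological list of quarterly scores in [0, 1]; the exact response shape and the 0.05 dead band are illustrative assumptions:

```python
def diversity_trend(scores: list) -> str:
    """Classify the direction of quarterly diversity scores (each in [0, 1]).

    The 0.05 dead band that separates "flat" from a real move is an
    illustrative choice, not a platform-defined threshold.
    """
    if len(scores) < 2:
        return "insufficient data"
    delta = scores[-1] - scores[0]
    if delta > 0.05:
        return "improving"
    if delta < -0.05:
        return "declining"
    return "flat"

print(diversity_trend([0.42, 0.48, 0.61]))  # improving
```

An "improving" or "flat" verdict plus the latest score gives the report a concrete hook for the suggestions mentioned in step 4.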

### 3. Competition Participation and Progress
- **Steps**:
  1. **Fetch the user's competitions**: get all competitions the user currently participates in.
     - Tool: `mcp_brain-api_get_user_competitions` (set `user_id` to "self").
  2. **Filter to open competitions**: use the competition dates to find those that have not yet closed, and prioritize them.
  3. **Progress report**: record the user's rank in each competition, the performance of submitted alphas, and related information.
  4. **⚠️ Critical: detailed analysis of competition rules and requirements**: fetch the detailed rules and requirements for each competition.
     - Tools: `mcp_brain-api_get_competition_details` and `mcp_brain-api_get_competition_agreement` (set `competition_id` to the specific competition ID).
     - **Read the competition agreement carefully**: pay particular attention to universe requirements, delay requirements, alpha-type restrictions, and other key parameters.
     - **Common mistake**: for example, GAC-style competitions require the GLOBAL universe, not a specific region (such as USA).
  5. **Competition plans and recommendations**: based on the rules and the user's current performance, suggest next actions and research directions.
     - **Verify compliance**: make sure every recommended alpha fully satisfies the competition rules.
     - **Factor in missing pyramid categories**: consider pyramid optimization, but only within the competition rules.
- **MCP tools used**:
  - `mcp_brain-api_get_user_competitions`: the user's competition list.
  - `mcp_brain-api_get_competition_details`: detailed competition information.
  - `mcp_brain-api_get_competition_agreement`: competition rules and terms.
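The compliance check in step 5 can be sketched as a simple settings-versus-rules comparison. The `universe` and `delay` keys are illustrative assumptions about the payloads, not a documented schema:

```python
def check_compliance(alpha_settings: dict, rules: dict) -> list:
    """Return a list of rule violations; an empty list means the alpha looks compliant.

    The keys compared here ("universe", "delay") are assumed field names
    for illustration, not the documented API schema.
    """
    violations = []
    for key in ("universe", "delay"):
        required = rules.get(key)
        if required is not None and alpha_settings.get(key) != required:
            violations.append(
                f"{key}: alpha has {alpha_settings.get(key)!r}, "
                f"competition requires {required!r}"
            )
    return violations

# A regional alpha checked against GAC-style GLOBAL-universe rules:
print(check_compliance({"universe": "TOP3000", "delay": 1},
                       {"universe": "GLOBAL", "delay": 1}))
```

Running a check like this before recommending an alpha would have caught the USA-vs-GLOBAL mistake called out above.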

### 4. Upcoming Events
- **Steps**:
  1. **Fetch upcoming events**: get competitions, seminars, and other events on the BRAIN platform, filtering out past events (based on the current date, e.g., 2025-08-09).
     - Tool: `mcp_brain-api_get_events` (set `random_string` to any value, e.g., "dummy").
  2. **Planned tasks**: based on the current alpha and competition status, list the tasks planned for the next few days.
- **MCP tools used**:
  - `mcp_brain-api_get_events`: platform event information.
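The past-event filter in step 1 can be sketched with the standard library. The `endDate` key and ISO date strings are assumptions about the event payload, not a documented schema:

```python
from datetime import date

def upcoming_events(events: list, today: date) -> list:
    """Keep only events whose end date is today or later.

    "endDate" is an assumed field name holding an ISO date string.
    """
    return [e for e in events if date.fromisoformat(e["endDate"]) >= today]

events = [{"name": "IQC 2025", "endDate": "2025-06-30"},
          {"name": "GAC 2025", "endDate": "2025-10-15"}]
print(upcoming_events(events, date(2025, 8, 9)))  # only GAC 2025 remains
```

Filtering on the event's end date (rather than its start date) keeps multi-day events that are already underway in the report.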

### 5. Research Review and Recommendations
- **Steps**:
  1. **Research review**: summarize research results based on current alpha performance, including yearly statistics.
  2. **Recommendations**: combining alpha performance, competition requirements, and platform developments, advise on alpha optimization, competition strategy, datafield exploration, and risk management. Present the advice as a prioritized list.
- **MCP tools used**: none new; builds on the data from the alpha section.

### 6. Alpha Progress and Status (moved toward the end)
- **Steps**:
  1. **Fetch IS (In-Sample) alpha data**: get the alphas the user is currently backtesting.
     - Tool: `mcp_brain-api_get_user_alphas` (set `stage` to "IS", `limit` to 30, `offset` to 0).
  2. **Fetch OS (Out-of-Sample) alpha data**: get the alphas the user recently submitted successfully.
     - Tool: `mcp_brain-api_get_user_alphas` (set `stage` to "OS", `limit` to 30, `offset` to 0).
  3. **Yesterday's progress**: check the platform logs, or track activity with `mcp_brain-api_get_user_activities`.
  4. **Performance analysis**: analyze each alpha's key metrics (Sharpe ratio, PnL, Fitness) against the platform's standards. Call tools in parallel when fetching details.
     - Tools: `mcp_brain-api_get_alpha_details`, `mcp_brain-api_analyze_alpha_performance`, `mcp_brain-api_get_alpha_pnl`, `mcp_brain-api_get_alpha_yearly_stats`, `mcp_brain-api_check_correlation` (threshold 0.7).
  5. **Detailed OS alpha analysis**: for each OS alpha, analyze its datafields, operators, and meaning. Suggest improvements from two angles: (1) the idea itself (e.g., change a window, add an operator); (2) competition fit (e.g., GAC2025 requirements) or pyramid categories missing in recent quarters (use `mcp_brain-api_get_pyramid_alphas` and `mcp_brain-api_get_pyramid_multipliers`, and recommend concrete datafields).
  6. **Other datafield suggestions**: based on the strategy, search with `mcp_brain-api_get_datafields` and recommend fields (e.g., search="EPS").
- **MCP tools used**:
  - `mcp_brain-api_get_user_alphas`: IS/OS lists.
  - `mcp_brain-api_get_alpha_details`: detailed code/description.
  - `mcp_brain-api_analyze_alpha_performance`: full performance analysis.
  - `mcp_brain-api_check_correlation`: correlation checks.
  - `mcp_brain-api_get_alpha_pnl`: PnL data.
  - `mcp_brain-api_get_alpha_yearly_stats`: yearly statistics.
  - `mcp_brain-api_get_pyramid_alphas` and `mcp_brain-api_get_pyramid_multipliers`: pyramid distribution and multipliers.
  - `mcp_brain-api_get_datafields`: datafield recommendations.
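What the 0.7 correlation threshold means can be illustrated with a plain-Python Pearson correlation of two daily-PnL series. This is a local sketch for intuition only, not the platform's own correlation check:

```python
import math

def pearson(xs: list, ys: list) -> float:
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def too_correlated(pnl_a: list, pnl_b: list, threshold: float = 0.7) -> bool:
    """Flag a pair of PnL series whose |correlation| exceeds the 0.7 threshold."""
    return abs(pearson(pnl_a, pnl_b)) > threshold

a = [1.0, 2.0, 3.0, 4.0]
print(too_correlated(a, [2.0, 4.0, 6.0, 8.0]))  # True: perfectly correlated
```

An alpha that trips the threshold against any existing OS alpha is a candidate for idea-level changes (step 5) rather than parameter tweaks.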

## Other Notes

- **Authentication**: before fetching any data, authenticate with the `mcp_brain-api_authenticate` tool, supplying the user's email and password.
- **Dynamic dates**: obtain the current date from the system so event filtering stays accurate (e.g., past events are excluded).
- **Parallel tool calls**: prefer calling MCP tools in parallel to speed up data collection.
- **Use the forum**: make good use of the forum to gather additional information.
- **User feedback**: after each stage, check whether the user has extra information or change requests, and update the report accordingly.
- **Task management**: use the `todo_write` tool to create and update a to-do list so every step is completed in order.
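The parallel-call preference can be sketched with a thread pool. The three fetcher functions here are hypothetical stand-ins for the MCP tool calls listed above, since independent network-bound calls are what benefit from running concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for independent MCP tool calls.
def fetch_messages():      return {"messages": []}
def fetch_leaderboard():   return {"rank": 42}
def fetch_competitions():  return {"competitions": []}

def collect_in_parallel() -> dict:
    """Run independent data-collection calls concurrently and gather the results."""
    tasks = {"messages": fetch_messages,
             "leaderboard": fetch_leaderboard,
             "competitions": fetch_competitions}
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = {name: pool.submit(fn) for name, fn in tasks.items()}
        return {name: fut.result() for name, fut in futures.items()}

print(sorted(collect_in_parallel()))  # ['competitions', 'leaderboard', 'messages']
```

Only mutually independent calls should be grouped this way; anything that depends on an earlier result (e.g., per-alpha detail calls that need the alpha list) belongs in a later batch.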

## Quality Control and Error Prevention

### Common Mistakes and Safeguards
1. **Misreading competition rules**:
   - **Example**: assuming GAC2025 accepts USA-region alphas when it actually requires the GLOBAL universe.
   - **Safeguard**: read the full rules document returned by `mcp_brain-api_get_competition_agreement` in detail.
   - **Verification**: before giving advice, re-confirm that the alpha's universe, delay, and other parameters meet the competition requirements.

2. **Misreading the data**:
   - **Safeguard**: cross-check key metrics such as the Sharpe ratio and fitness.
   - **Quality check**: make sure every recommendation is backed by data; avoid subjective guesses.

3. **Wrong output format**:
   - **User preference**: output to chat or to a markdown file, as the user requests.
   - **Structural completeness**: make sure the report contains every required section and reads logically.

### Continuous Improvement
- Record the root cause of every mistake.
- Update the workflow to prevent similar mistakes from recurring.
- Maintain a verification checklist to keep key information accurate.

## Summary

The workflow above covers every aspect of writing the BRAIN platform daily report, from data collection to report writing and updating. Using the specified MCP tools, the secretary can fetch the necessary data, analyze the user's performance on the platform, and then provide targeted insights and suggestions. For any questions or further guidance, contact the previous secretary or the platform support team.
@@ -1,4 +1,4 @@
-cnhkmcp/__init__.py,sha256=
+cnhkmcp/__init__.py,sha256=HQr9XFSex_T7RfS7B92infbWRdufzKWa2pVK1atoq0s,2759
 cnhkmcp/untracked/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
 cnhkmcp/untracked/arXiv_API_Tool_Manual.md,sha256=I3hvI5mpmIjBuWptufoVSWFnuhyUc67oCGHEuR0p_xs,13552
 cnhkmcp/untracked/arxiv_api.py,sha256=-E-Ub9K-DXAYaCjrbobyfQ9H97gaZBc7pL6xPEyVHec,9020
@@ -185,14 +185,30 @@ cnhkmcp/untracked/mcp文件论坛版2_如果原版启动不了浏览器就试这个
 cnhkmcp/untracked/mcp文件论坛版2_如果原版启动不了浏览器就试这个/让AI读这个文档来学会下载浏览器.md,sha256=v5QPSMjRDh52ZjgC4h8QjImnaqlVRLjTHGxmGjTo36g,3396
 cnhkmcp/untracked/mcp文件论坛版2_如果原版启动不了浏览器就试这个/配置前运行我_安装必要依赖包.py,sha256=BnUyL5g6PaC62yEuS-8vcDSJ0oKu3k6jqQxi2jginuQ,6612
 cnhkmcp/untracked/skills/Claude_Skill_Creation_Guide.md,sha256=Wx4w8deBkuw0UcSwzNNQdRIuCqDXGk3-fRQBWcPQivw,5025
+cnhkmcp/untracked/skills/brain-calculate-alpha-selfcorrQuick/SKILL.md,sha256=UKQhfHPOH8_YXnRotzXuG8BrgFeXhQoxAZIEeaQxGds,1183
+cnhkmcp/untracked/skills/brain-calculate-alpha-selfcorrQuick/reference.md,sha256=ws9oUgtEdK7aVjw6sNoVzFQ3gyoX5UIlyz3wItI0fxI,3115
+cnhkmcp/untracked/skills/brain-calculate-alpha-selfcorrQuick/scripts/requirements.txt,sha256=-mb08dIt1JEHJDcKSOXYX72CiHWt7FY4IuGY8Hnzcdk,57
+cnhkmcp/untracked/skills/brain-calculate-alpha-selfcorrQuick/scripts/skill.py,sha256=u_9jWR55vYG1cd8oRXOXUnTEAtkK7Se35TYO_Of-J5U,32472
+cnhkmcp/untracked/skills/brain-datafield-exploration-general/SKILL.md,sha256=L5l4bYI-FVqiDqYxEw7VOxU4j7r5iO_Fzv8g-ljvBLM,2173
+cnhkmcp/untracked/skills/brain-datafield-exploration-general/reference.md,sha256=YyMX1MIsDTJTduxulY-fYipNHvRihshQy9Q2j6Zxg2Q,9123
+cnhkmcp/untracked/skills/brain-dataset-exploration-general/SKILL.md,sha256=p-p9Auenjfs1o_tNafpICo0Dg9mqJUPKOfIcBVG9f5I,1799
+cnhkmcp/untracked/skills/brain-dataset-exploration-general/reference.md,sha256=-C4fWdaBe9UzA5BDZz0Do2z8RaPWLslb6D0nTz6fqk4,24403
+cnhkmcp/untracked/skills/brain-explain-alphas/SKILL.md,sha256=uxr_StBx_tiLx26z_DezunRSvix2ZFKEzJ4oXNMO-4Q,1934
+cnhkmcp/untracked/skills/brain-explain-alphas/reference.md,sha256=QukeT9gIg4g28AuYPqn-fYtCwb-JmMxZuIkmkrkUfFY,4916
+cnhkmcp/untracked/skills/brain-how-to-pass-AlphaTest/SKILL.md,sha256=8HaC_JryIdOISoCVevvd_syGjqQD2-UBkzANqHWiXsQ,2477
+cnhkmcp/untracked/skills/brain-how-to-pass-AlphaTest/reference.md,sha256=W4dtQrqoTN72UyvIsvkGRF0HFOJLHSDeeSlbR3gqQg0,17133
+cnhkmcp/untracked/skills/brain-improve-alpha-performance/SKILL.md,sha256=4EieJm7iaQm9zujIhv0qSxcxfRqYr4xgHEZdKfswdd4,2027
+cnhkmcp/untracked/skills/brain-improve-alpha-performance/reference.md,sha256=XlWYREd_qXe1skdXIhkiGY05oDr_6KiBs1WkerY4S8U,5092
+cnhkmcp/untracked/skills/brain-nextMove-analysis/SKILL.md,sha256=MGmM3HHYTdzxLE6uOxs9mB4i3At4JGjJYzsbzQKTjgc,1685
+cnhkmcp/untracked/skills/brain-nextMove-analysis/reference.md,sha256=6aNmPqWRn09XdQMRxoVTka9IYVOUv5LcWparHu16EfQ,9460
 cnhkmcp/untracked/skills/expression_verifier/SKILL.md,sha256=7oGzMIXUJzDCtbxND6QWmh6azkLqoFJNURIh-F-aPUQ,2213
 cnhkmcp/untracked/skills/expression_verifier/scripts/validator.py,sha256=bvQt55-n2FHkeBA6Ui7tc0oLk_cnriDxt5z-IgaeKZA,50789
 cnhkmcp/untracked/skills/expression_verifier/scripts/verify_expr.py,sha256=zDSlYf0XlkyML-fcL4wld445RMbztt4pDSP5b6W6JKQ,1659
 cnhkmcp/untracked/skills/pull_BRAINSkill/SKILL.md,sha256=QyXQ0wr9LeZyKqkvSTto72bX33LgtB1UjoeUDj-3igg,2413
 cnhkmcp/untracked/skills/pull_BRAINSkill/scripts/pull_skills.py,sha256=6v3Z3cc9_qKvBj7C6y2jWY1pAd76SA3JmiVj_KkZkmw,6341
-cnhkmcp-2.1.
-cnhkmcp-2.1.
-cnhkmcp-2.1.
-cnhkmcp-2.1.
-cnhkmcp-2.1.
-cnhkmcp-2.1.
+cnhkmcp-2.1.8.dist-info/licenses/LICENSE,sha256=QLxO2eNMnJQEdI_R1UV2AOD-IvuA8zVrkHWA4D9gtoc,1081
+cnhkmcp-2.1.8.dist-info/METADATA,sha256=AbxXBz_fPKgMFmXQSU4FMaiuoLEw4W7IJhZN7roq7mY,5171
+cnhkmcp-2.1.8.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+cnhkmcp-2.1.8.dist-info/entry_points.txt,sha256=lTQieVyIvjhSMK4fT-XwnccY-JBC1H4vVQ3V9dDM-Pc,70
+cnhkmcp-2.1.8.dist-info/top_level.txt,sha256=x--ibUcSgOS9Z_RWK2Qc-vfs7DaXQN-WMaaxEETJ1Bw,8
+cnhkmcp-2.1.8.dist-info/RECORD,,
File without changes
File without changes
File without changes
File without changes