polymarket-intel-mcp 1.0.0__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- polymarket_intel_mcp-1.0.0/.gitignore +13 -0
- polymarket_intel_mcp-1.0.0/DEPLOYMENT.md +269 -0
- polymarket_intel_mcp-1.0.0/Dockerfile +39 -0
- polymarket_intel_mcp-1.0.0/PKG-INFO +307 -0
- polymarket_intel_mcp-1.0.0/README.md +272 -0
- polymarket_intel_mcp-1.0.0/api/__init__.py +0 -0
- polymarket_intel_mcp-1.0.0/api/auth.py +181 -0
- polymarket_intel_mcp-1.0.0/api/cache.py +49 -0
- polymarket_intel_mcp-1.0.0/api/main.py +557 -0
- polymarket_intel_mcp-1.0.0/core/__init__.py +20 -0
- polymarket_intel_mcp-1.0.0/core/client.py +168 -0
- polymarket_intel_mcp-1.0.0/core/models.py +121 -0
- polymarket_intel_mcp-1.0.0/core/scorer.py +279 -0
- polymarket_intel_mcp-1.0.0/core/signals.py +425 -0
- polymarket_intel_mcp-1.0.0/db/__init__.py +54 -0
- polymarket_intel_mcp-1.0.0/db/converters.py +88 -0
- polymarket_intel_mcp-1.0.0/db/records.py +69 -0
- polymarket_intel_mcp-1.0.0/db/repository.py +173 -0
- polymarket_intel_mcp-1.0.0/db/schema.sql +112 -0
- polymarket_intel_mcp-1.0.0/db/supabase_repo.py +294 -0
- polymarket_intel_mcp-1.0.0/glama.json +11 -0
- polymarket_intel_mcp-1.0.0/mcp_server/claude_desktop_config.example.json +10 -0
- polymarket_intel_mcp-1.0.0/mcp_server/server.py +200 -0
- polymarket_intel_mcp-1.0.0/pyproject.toml +74 -0
- polymarket_intel_mcp-1.0.0/requirements-dev.txt +4 -0
- polymarket_intel_mcp-1.0.0/requirements.txt +9 -0
- polymarket_intel_mcp-1.0.0/scripts/analyze_wallet.py +116 -0
- polymarket_intel_mcp-1.0.0/scripts/snapshot_job.py +199 -0
- polymarket_intel_mcp-1.0.0/server.json +52 -0
- polymarket_intel_mcp-1.0.0/smithery.yaml +48 -0
- polymarket_intel_mcp-1.0.0/tests/__init__.py +0 -0
- polymarket_intel_mcp-1.0.0/tests/fixtures.py +209 -0
- polymarket_intel_mcp-1.0.0/tests/test_auth.py +125 -0
- polymarket_intel_mcp-1.0.0/tests/test_endpoint_split.py +196 -0
- polymarket_intel_mcp-1.0.0/tests/test_history_endpoints.py +186 -0
- polymarket_intel_mcp-1.0.0/tests/test_repository.py +225 -0
- polymarket_intel_mcp-1.0.0/tests/test_scorer.py +63 -0
- polymarket_intel_mcp-1.0.0/tests/test_signals.py +126 -0
- polymarket_intel_mcp-1.0.0/tests/test_snapshot_job.py +151 -0
- polymarket_intel_mcp-1.0.0/zzz/mnt/user-data/outputs/polymarket-intel/api/__init__.py +0 -0
- polymarket_intel_mcp-1.0.0/zzz/mnt/user-data/outputs/polymarket-intel/core/__init__.py +20 -0
- polymarket_intel_mcp-1.0.0/zzz/mnt/user-data/outputs/polymarket-intel/db/__init__.py +59 -0

@@ -0,0 +1,269 @@
# Deployment Runbook

Step-by-step from zero to listed on three MCP marketplaces. Total time: **~2 hours of active work** (most of it waiting for things to propagate).

The path:

```
GitHub repo
├── Railway deploy (REST API + MCP HTTP transport)
├── Supabase setup (persistence)
├── Daily snapshot cron (the moat)
├── PyPI publish (lets users install the MCP server locally)
├── Official MCP Registry (server.json → registry.modelcontextprotocol.io)
├── Smithery (smithery.yaml → smithery.ai)
└── Glama, mcp.so, awesome-mcp (auto-index from the GitHub repo)
```

---

## Phase 1: Hosting (30 min)

### 1.1 Push to GitHub

```bash
gh repo create polymarket-intel --public --source=. --push
```

The repo must be public for Glama / awesome-mcp / the official registry to index it.

### 1.2 Set up Supabase

1. Sign up at supabase.com and create a project (the free tier is enough to start).
2. **SQL Editor → New query → paste `db/schema.sql` → Run**.
3. Project Settings → API → copy:
   - `URL` → `SUPABASE_URL`
   - `service_role` key → `SUPABASE_KEY` (use the service-role key here; the snapshot job needs write access)

### 1.3 Deploy to Railway

1. Sign up at railway.com, "Deploy from GitHub repo" → pick polymarket-intel.
2. Railway auto-detects the `Dockerfile`. If it picks Nixpacks instead, set the builder to Dockerfile in the service settings.
3. **Variables tab** — add:
   - `SUPABASE_URL` = (from 1.2)
   - `SUPABASE_KEY` = (service-role key from 1.2)
   - `API_KEYS` = (leave empty for now; add keys later when you start charging)
4. **Networking tab** → "Generate domain" → you get something like `polymarket-intel-production.up.railway.app`.
5. Wait ~2 min for the first deploy. Visit the domain — you should see the JSON health page.

### 1.4 Verify the snapshot job works

Run it once manually before scheduling it. From your laptop:

```bash
SUPABASE_URL=... SUPABASE_KEY=... python scripts/snapshot_job.py --top 10
```

It should print one row per wallet and end with `wallets_scored: 10`. Confirm in the Supabase dashboard → Table editor → `wallet_scores` has 10 rows.

### 1.5 Schedule the snapshot job

Two options:

**Option A — Railway cron** (simplest):
1. New service in the same project → Empty service → Source = same GitHub repo, same Dockerfile.
2. Settings → Custom Start Command: `python scripts/snapshot_job.py --top 50`
3. Settings → Cron Schedule: `0 8 * * *` (daily at 8am UTC)
4. Same env vars as the API service.

**Option B — GitHub Actions** (free, uses GitHub's runners):

Create `.github/workflows/snapshot.yml`:
```yaml
name: Daily snapshot
on:
  schedule: [{ cron: "0 8 * * *" }]
  workflow_dispatch:
jobs:
  snapshot:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: "3.12" }
      - run: pip install -r requirements.txt
      - run: python scripts/snapshot_job.py --top 50
        env:
          SUPABASE_URL: ${{ secrets.SUPABASE_URL }}
          SUPABASE_KEY: ${{ secrets.SUPABASE_KEY }}
```

Add the two secrets in repo settings → Secrets and variables → Actions.

---

## Phase 2: Publish the local MCP package (45 min)

Even though the API is hosted, many users will want to run the MCP server locally over stdio (Claude Desktop, Cursor). For that, the package needs to be on PyPI so they can `pip install` it.

### 2.1 Create a `pyproject.toml`

(Already created in this repo; if you want to publish under a different name, edit it there.)

### 2.2 Build and publish

```bash
pip install build twine
python -m build      # creates dist/polymarket_intel_mcp-1.0.0-*.whl
twine upload dist/*  # prompts for PyPI credentials
```

You'll need a PyPI account first; get an API token at pypi.org/manage/account/.

### 2.3 Add the `mcp-name` marker

The official registry verifies that your PyPI package matches your `server.json` namespace. Add this to the README that ships in the published package:

```html
<!-- mcp-name: io.github.YOUR_USERNAME/polymarket-intel -->
```

Because it's an HTML comment, it stays invisible in the rendered README.

---

## Phase 3: Listing — official MCP Registry (15 min)

The canonical source of truth. Listing here gets you indexed by Claude Desktop, Cursor, and most other MCP clients.

### 3.1 Edit `server.json`

Replace `YOUR_GITHUB_USERNAME` everywhere. Update `remotes[0].url` to your Railway domain. Bump `version` if you've shipped changes.

### 3.2 Install the publisher CLI

```bash
# macOS
brew install mcp-publisher

# or download from
# https://github.com/modelcontextprotocol/registry/releases
```

### 3.3 Authenticate + publish

```bash
mcp-publisher login github
# → opens a browser, GitHub OAuth flow
mcp-publisher publish --dry-run  # validate first
mcp-publisher publish
```

If successful, `Server io.github.YOUR_USERNAME/polymarket-intel version 1.0.0` is live.

Verify:
```bash
curl "https://registry.modelcontextprotocol.io/v0.1/servers?search=polymarket-intel"
```

---

## Phase 4: Listing — Smithery (10 min)

### 4.1 Push smithery.yaml to the GitHub root

(Already in the repo.)

### 4.2 Submit

Two paths:

**Web dashboard**: smithery.ai/dashboard → "New Server" → connect GitHub → pick the repo → Smithery reads smithery.yaml automatically.

**CLI**:
```bash
npm install -g @smithery/cli
smithery mcp publish https://your-railway-domain/mcp -n YOUR_USERNAME/polymarket-intel
```

Smithery will run a quality probe — a few automated MCP calls to verify the server responds. Make sure the API is up and the `/mcp` endpoint responds before publishing.

---

## Phase 5: Auto-indexed marketplaces (5 min)

These crawl GitHub. You don't have to submit; they find you. But submitting accelerates discovery.

### 5.1 Glama

`glama.json` is already in the repo. Glama auto-indexes new MCP servers nightly. If you want to claim and manage your listing manually, sign up at glama.ai.

### 5.2 mcp.so

Visit mcp.so/submit and paste your repo URL. Manual review; usually live within a day.

### 5.3 awesome-mcp-servers

Open a PR on https://github.com/punkpeye/awesome-mcp-servers adding an entry under the right category (Finance / Data Analysis). Format:

```
- [polymarket-intel](https://github.com/YOUR_USERNAME/polymarket-intel) - Classify Polymarket wallets as human or bot, score their trading edge, and stream their current open positions.
```

### 5.4 PulseMCP

Visit pulsemcp.com/submit; same flow as mcp.so.

---

## Phase 6: Verify discoverability (10 min)

Open Claude Desktop and edit `~/Library/Application Support/Claude/claude_desktop_config.json` to add the *remote* server:

```json
{
  "mcpServers": {
    "polymarket-intel": {
      "url": "https://your-railway-domain/mcp"
    }
  }
}
```

Restart Claude Desktop. Ask Claude: *"Score the Polymarket wallet 0xf1528f12e645462c344799b62b1b421a6a4c64aa"*. Claude should pick `score_polymarket_wallet` from the registered tools and call it.

If that works, the loop is closed end-to-end. Your service is discoverable, callable, persisting, and growing the dataset daily.

---

## Operational notes

### Adding paid customers later

When you're ready to charge:

1. Create a key-generation script (five lines: `f"pmi_{secrets.token_urlsafe(24)}"`).
2. Add the key to Railway's `API_KEYS` env var: `API_KEYS=pmi_abc...:starter,pmi_xyz...:pro`.
3. Send the customer their key out-of-band.
4. They include `X-API-Key: pmi_abc...` in requests; the higher rate limit applies automatically.
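
The five-line generator from step 1 can be sketched like this — the `pmi_` prefix and tier suffix follow the `API_KEYS` format shown in step 2; the function name is illustrative:

```python
# Hypothetical key-generation helper matching the API_KEYS=key:tier format.
import secrets

def make_key(tier: str = "starter") -> str:
    key = f"pmi_{secrets.token_urlsafe(24)}"
    print(f"Add to API_KEYS: {key}:{tier}")
    return key
```

Run it once per customer and append the printed `key:tier` pair to the Railway env var.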

For self-service signup (Stripe checkout → key emailed), add a `/auth/keys` endpoint backed by an `api_keys` table in Supabase. Keep the env-var path for grandfathered keys.

### Monitoring

- Railway has a logs tab — check it daily for the first week.
- The `/snapshots/latest` endpoint is your "is the cron working" canary.
- Supabase's Database → Reports tab tracks query count and storage growth.

### When to alarm

Realistic v1 alarms:
- `/snapshots/latest` returns `null` or a snapshot more than 26 hours old → the cron is broken
- 5xx rate > 1% in Railway logs → upstream Polymarket issue or your code
- Supabase storage > 80% of the free tier (8 GB) → time to partition `open_position_snapshots` by month

---

## Distribution playbook (after technical launch)

The artifact-shipping above is necessary but not sufficient. Real traction comes from:

1. **A README that scores high in answer engines.** When Claude / GPT / Cursor searches for "Polymarket trader analysis", does our README rank? Use clear structured prose, not marketing fluff: concrete claims, copy-pasteable code, screenshots of agent responses calling the tool.

2. **Founders who ship copy-trading bots.** They'll be early users. DM them when they post their bots — "your strategy needs a wallet quality filter; here's mine."

3. **A tweet thread per scored wallet.** "We scored the top 50 Polymarket wallets. 23 are bots. Here are the 5 humans actually beating the market." This kind of content drives both API traffic and inbound interest.

4. **Being the cited source for Polymarket research.** When someone writes a Substack post or a paper about prediction-market behavior, we want to be the data layer they cite.

The build is done. The directory listings get you on the shelf. The above is what gets you off the shelf.

@@ -0,0 +1,39 @@
FROM python:3.12-slim

# Python runtime hygiene
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=1

WORKDIR /app

# Install system deps — tini for proper signal handling in containers.
# (The slim image is fine for now; add build deps only if lxml/numpy
# wheels ever need them.)
RUN apt-get update && apt-get install -y --no-install-recommends \
    tini \
    && rm -rf /var/lib/apt/lists/*

# Install Python deps first for layer caching
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy app code
COPY core ./core
COPY api ./api
COPY db ./db
COPY mcp_server ./mcp_server
COPY scripts ./scripts

# Default port for the API; Railway/Render override $PORT
ENV PORT=8000
EXPOSE 8000

# Tini handles signals so SIGTERM is forwarded to uvicorn cleanly
ENTRYPOINT ["/usr/bin/tini", "--"]

# Default to running the FastAPI service. Override CMD to run the
# snapshot job or the MCP server in a different container/cron.
# Shell form so the platform-provided $PORT is actually honoured.
CMD uvicorn api.main:app --host 0.0.0.0 --port ${PORT:-8000}
@@ -0,0 +1,307 @@
Metadata-Version: 2.4
Name: polymarket-intel-mcp
Version: 1.0.0
Summary: Classify Polymarket wallets as human or bot, score trading edge, and read open positions. MCP server for AI agents.
Project-URL: Homepage, https://github.com/aemery13/polymarket-intel
Project-URL: Repository, https://github.com/aemery13/polymarket-intel
Project-URL: Documentation, https://github.com/aemery13/polymarket-intel#readme
Project-URL: Issues, https://github.com/aemery13/polymarket-intel/issues
Author: Polymarket Intel
License: MIT
Keywords: bot-detection,copy-trading,mcp,model-context-protocol,polymarket,prediction-markets,trading,wallet-intelligence
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries
Requires-Python: >=3.11
Requires-Dist: mcp[cli]>=1.0.0
Requires-Dist: numpy>=1.24.0
Requires-Dist: pandas>=2.0.0
Requires-Dist: pydantic>=2.5.0
Requires-Dist: requests>=2.31.0
Provides-Extra: api
Requires-Dist: fastapi>=0.110.0; extra == 'api'
Requires-Dist: python-dotenv>=1.0.0; extra == 'api'
Requires-Dist: supabase>=2.0.0; extra == 'api'
Requires-Dist: uvicorn[standard]>=0.27.0; extra == 'api'
Provides-Extra: dev
Requires-Dist: httpx>=0.27.0; extra == 'dev'
Requires-Dist: pytest>=7.4.0; extra == 'dev'
Requires-Dist: tabulate>=0.9.0; extra == 'dev'
Description-Content-Type: text/markdown

# Polymarket Wallet Intelligence

<!-- mcp-name: io.github.aemery13/polymarket-intel -->

**An MCP server and REST API that classifies Polymarket wallets as human or bot, scores their trading edge from 0–10, and streams their current open positions.** Built for AI agents on copy-trading and signal-following stacks.

```bash
# Use it from any MCP client (Claude Desktop, Cursor, etc.)
pip install polymarket-intel-mcp
polymarket-intel-mcp

# Or call the hosted REST API directly
curl https://polymarket-intel-production.up.railway.app/wallet/0xf1528f12e645462c344799b62b1b421a6a4c64aa
```

## What it answers

- **"Is this trader a human or a bot?"** — `score_polymarket_wallet(wallet_address)` → returns `classification ∈ {human, bot, insufficient_data}` plus a confidence score and reason codes.
- **"Do they actually have an edge?"** — `edge_score` from 0–10, gated on net realised PnL so distributed-but-losing wallets don't get false positives.
- **"What are they betting on right now?"** — `get_open_positions(wallet_address)` returns live positions sorted by size, refreshed every 30s.
- **"How has their edge changed over time?"** — `/wallet/{address}/history` returns the score time series from the daily snapshots.

## Why this exists

The Polymarket leaderboard is misleading. It includes unrealised PnL marked to the current price, so the names at the top are dominated by bots running structural arb, plus a few wallets sitting on huge open positions that may never resolve in their favour. Agents that copy-trade naively from the leaderboard get burned.

This service runs every leaderboard wallet through behavioural fingerprinting (focus ratio, holding period, timing regularity, category concentration) plus PnL reconstruction from raw activity, and only surfaces traders that look like genuine humans with a real edge.

**The dataset grows more valuable over time** — every day the snapshot job runs, historical signals accumulate. Wallets that have been consistently above edge 7 for 90 days are a stronger signal than any single point-in-time score.

## Distributed as both a REST API and an MCP server

| Surface    | Use case                            | Setup |
|------------|-------------------------------------|-------|
| MCP server | Agent that needs tool-style access  | `pip install polymarket-intel-mcp` |
| REST API   | Custom HTTP integration, dashboards | `curl https://polymarket-intel-production.up.railway.app/...` |
| Hosted MCP | Agent on any MCP-compatible client  | Add `https://polymarket-intel-production.up.railway.app/mcp` to client config |

## Architecture

```
┌──────────────────────────────────────────────┐
│ core/                                        │
│   client.py  — Polymarket data API client    │
│   signals.py — pure signal calculators       │
│   scorer.py  — classifier + edge score       │
│   models.py  — Pydantic response schemas     │
├──────────────────────────────────────────────┤
│ db/                                          │
│   schema.sql       — Postgres tables+indexes │
│   repository.py    — Repository protocol +   │
│                      InMemoryRepository      │
│   supabase_repo.py — Supabase impl           │
│   converters.py    — ScoreResult ↔ records   │
├──────────────────────────────────────────────┤
│ api/main.py — FastAPI HTTP server            │
│ mcp_server/ — MCP server (stdio)             │
│ scripts/                                     │
│   analyze_wallet.py — CLI                    │
│   snapshot_job.py   — daily cron entry       │
│ tests/                                       │
└──────────────────────────────────────────────┘
```

Core has no idea persistence exists. The API and snapshot job depend on the `Repository` protocol — Supabase in production, in-memory in tests and when env vars are unset. This is what lets the suite run without a database, and what lets you swap Supabase for Neon, RDS, or anything else later by adding one file.

## Quickstart

```bash
git clone <repo> && cd polymarket-intel
python -m venv .venv && source .venv/bin/activate
pip install -r requirements-dev.txt
pytest  # 19 tests, all green
```

### CLI

```bash
python scripts/analyze_wallet.py phonesculptor
python scripts/analyze_wallet.py 0xf1528f12e645462c344799b62b1b421a6a4c64aa --json
```

### REST API

```bash
uvicorn api.main:app --reload --port 8000
open http://localhost:8000/docs
```

The API is split into a **slow tier** (cached aggressively, cheap, ideal for one-off discovery) and a **fast tier** (short cache, ideal for live copy-trading agents). The split exists because the underlying data has different freshness needs — a wallet's classification doesn't change minute to minute, but its open positions do.

| Tier | Method | Path                                  | TTL | Notes |
|------|--------|---------------------------------------|-----|-------|
| slow | GET    | `/wallet/{address}`                   | 1h  | Score blob — classification, edge_score, signals. No positions. Persisted to history (debounced). |
| fast | GET    | `/wallet/{address}/positions`         | 30s | Open positions only. No DB write per call. |
| —    | GET    | `/wallet/{address}/history`           | DB  | Score time series |
| —    | GET    | `/wallet/{address}/positions/history` | DB  | Position changes over time |
| —    | GET    | `/wallet/by-username/{username}`      | 1h  | Convenience lookup |
| —    | GET    | `/leaderboard?limit=50`               | 30m | Raw Polymarket top traders |
| —    | GET    | `/leaderboard/verified?min_edge=5`    | 1h  | Filtered to scored humans |
| —    | GET    | `/leaderboard/historical?date=…`      | DB  | Leaderboard at any past date |
| —    | GET    | `/snapshots/latest`                   | DB  | When did the cron last run? |

**Why 30s on positions and not faster?** Polygon block time is ~2s and Polymarket's activity index lags a few seconds. Polling below 10s gets you no fresher data, just rate-limit errors. 30s is the sweet spot for cost, freshness, and upstream-friendliness.

**Why debounced DB writes?** A trading agent may hit `/wallet/{address}` thousands of times an hour. Writing a row per call would bloat history with near-duplicate snapshots. The score endpoint persists at most once per wallet per hour. The daily snapshot job guarantees coverage of the top 50 regardless of API traffic.

### MCP server (Claude Desktop, Cursor, Continue)

```bash
python mcp_server/server.py
```

Then drop `mcp_server/claude_desktop_config.example.json` into your Claude Desktop config and edit the absolute path.

The server exposes four tools:

- `score_polymarket_wallet(wallet_address)` — full score
- `score_polymarket_user(username)` — lookup by display name
- `get_polymarket_leaderboard(limit)` — raw leaderboard
- `get_open_positions(wallet_address)` — fast snapshot of live bets

## Scoring methodology

### Bot triggers (any one fires → bot)

| Signal           | Threshold     | Source |
|------------------|---------------|--------|
| Focus ratio      | > 12          | Hubble Research, validated empirically |
| Median hold time | < 60s         | HFT / MEV pattern |
| Timing CV        | < 0.3 (n≥100) | Scheduled trading |

Soft signals stack on top: crypto-market-maker pattern, > 200 trades/day, etc.
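
The hard triggers in the table above reduce to a short predicate — a sketch only, with illustrative names; `core/scorer.py` is the source of truth:

```python
# Hedged sketch of the hard bot triggers (any one fires → bot).
def is_bot(focus_ratio: float, median_hold_s: float,
           timing_cv: float, n_trades: int) -> bool:
    if focus_ratio > 12:                     # structural-arb level focus
        return True
    if median_hold_s < 60:                   # HFT / MEV pattern
        return True
    if timing_cv < 0.3 and n_trades >= 100:  # scheduled trading
        return True
    return False
```

Note the sample-size guard on the timing trigger: a regular schedule only counts as bot evidence once there are enough trades to make the regularity meaningful.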

### Edge score (0–10) for humans

```
Hard gate: net realised PnL ≤ 0 → capped at 2.0
Hard gate: < 10 winning markets → capped at 3.0

35% PnL magnitude (log-scaled)
25% win rate (capped at 70%)
15% PnL distribution (penalises top-1 concentration)
15% sample size (winning markets, capped at 50)
10% win/loss ratio (capped at 3x)
```

Net PnL is the hard gate so that wallets like neutralwave23 — many distributed tiny wins masking $375k of losses — are correctly flagged as poor.
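
The weighting scheme above can be sketched as follows. The per-component scalings (log base, normalisers, the top-1-share penalty) are assumptions for illustration — only the weights and gates come from the spec above, and `core/scorer.py` is authoritative:

```python
# Hedged sketch of the 0-10 edge score: weights and gates from the
# spec above; component scalings are illustrative.
import math

def edge_score(net_pnl: float, win_rate: float, top1_share: float,
               winning_markets: int, win_loss_ratio: float) -> float:
    pnl_c  = min(math.log10(max(net_pnl, 0.0) + 1) / 6, 1.0)  # log-scaled magnitude
    wr_c   = min(win_rate, 0.70) / 0.70                       # win rate, capped at 70%
    dist_c = 1.0 - min(top1_share, 1.0)                       # penalise top-1 concentration
    size_c = min(winning_markets, 50) / 50                    # sample size, capped at 50
    wl_c   = min(win_loss_ratio, 3.0) / 3.0                   # win/loss ratio, capped at 3x
    score = 10 * (0.35 * pnl_c + 0.25 * wr_c + 0.15 * dist_c
                  + 0.15 * size_c + 0.10 * wl_c)
    if net_pnl <= 0:
        score = min(score, 2.0)   # hard gate: losing wallets
    if winning_markets < 10:
        score = min(score, 3.0)   # hard gate: too few winning markets
    return score
```

The gates are applied as caps, so a gated wallet can still rank below the cap but never above it.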

### PnL reconstruction

```
money_in  = sum(BUY usdcSize per conditionId)
money_out = sum(SELL usdcSize) + sum(REDEEM usdcSize)
pnl = money_out - money_in

status:
  REDEEM exists → won
  SELL exists, no REDEEM → exited
  no SELL, no REDEEM, last trade > 7 days old → lost
  no SELL, no REDEEM, last trade within 7 days → open
```

Why activity rather than the positions endpoint: positions vanish from the API after redeem, so any naive analysis using `/positions` undercounts wins. Always reconstruct from `/activity?type=TRADE` + `/activity?type=REDEEM` (separate calls — comma-joined types return 400).
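
The pseudocode above translates to roughly this — field names (`conditionId`, `usdcSize`, `type`, `side`) follow the pseudocode and timestamps are assumed to be `datetime` objects; the live activity payload may differ:

```python
# Hedged sketch of per-market PnL reconstruction from merged
# TRADE + REDEEM activity for one wallet.
from collections import defaultdict
from datetime import datetime, timedelta, timezone

def reconstruct(activity: list[dict], now: datetime) -> dict[str, dict]:
    markets: dict[str, dict] = defaultdict(lambda: {
        "in": 0.0, "out": 0.0, "redeemed": False, "sold": False, "last": None})
    for a in activity:
        m = markets[a["conditionId"]]
        ts = a["timestamp"]
        m["last"] = ts if m["last"] is None else max(m["last"], ts)
        if a["type"] == "REDEEM":
            m["out"] += a["usdcSize"]; m["redeemed"] = True
        elif a["side"] == "BUY":
            m["in"] += a["usdcSize"]
        else:  # SELL
            m["out"] += a["usdcSize"]; m["sold"] = True
    result = {}
    for cid, m in markets.items():
        if m["redeemed"]:
            status = "won"
        elif m["sold"]:
            status = "exited"
        elif now - m["last"] > timedelta(days=7):
            status = "lost"   # no exit, stale → assumed resolved against
        else:
            status = "open"
        result[cid] = {"pnl": m["out"] - m["in"], "status": status}
    return result
```

The 7-day staleness heuristic is what lets a market with no SELL and no REDEEM eventually count as a loss rather than sitting "open" forever.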

## Persistence (Supabase)

The historical dataset is the moat. Every day the snapshot job pulls the leaderboard, scores the top N wallets, and persists three things: the score itself (`wallet_scores`), the wallet's open positions at that moment (`open_position_snapshots`), and the leaderboard as it stood (`leaderboard_snapshots`). After 90 days you can answer questions no one else can: "who has been consistently above edge 7 for the last quarter?", "which wallets just entered the top 50?", "show me everyone who held YES on this market three days before resolution."

### Setup

```bash
# 1. Create a Supabase project, get the URL and service_role key
cp .env.example .env  # fill in SUPABASE_URL and SUPABASE_KEY

# 2. Apply the schema (Supabase dashboard → SQL editor → paste db/schema.sql → run)
# Or via psql:
# psql "$DATABASE_URL" -f db/schema.sql

# 3. Run the snapshot job once to verify it writes:
python scripts/snapshot_job.py --top 10
```

If `SUPABASE_URL` and `SUPABASE_KEY` are unset, both the API and the snapshot job fall back to an in-memory repository — the suite still passes and the API still serves live scoring, but the history endpoints will be empty until you wire up Supabase.

### Daily snapshot job

Schedule `python scripts/snapshot_job.py --top 50` daily (Railway cron, GitHub Actions, or Supabase pg_cron triggering an edge function — your call). The job is safe to re-run: running it twice simply creates two snapshots, which is fine — history queries pick the closest one.

```bash
python scripts/snapshot_job.py --top 50           # production
python scripts/snapshot_job.py --top 5 --dry-run  # local testing, no writes
```

Each run records an audit row in `snapshot_runs` with start/finish times, wallets scored, and error count.

### Schema overview

| Table                     | Purpose |
|---------------------------|---------|
| `wallets`                 | One row per wallet ever seen |
| `wallet_scores`           | Append-only score time series |
| `open_position_snapshots` | What each wallet held at each tick |
| `leaderboard_snapshots`   | Full leaderboard, preserved daily |
| `snapshot_runs`           | Audit trail for the cron job |

Two views (`latest_wallet_scores`, `latest_leaderboard`) make the common "what's current" queries cheap.

### Repository pattern

`db/repository.py` defines a `Repository` protocol. Two implementations:

- `InMemoryRepository` — thread-safe, lossy across restarts. Used in tests and as the dev-mode fallback.
- `SupabaseRepository` — production. Wraps the supabase-py client.

The API and snapshot job depend only on the protocol. To swap Supabase for Neon or self-hosted Postgres, write one new class implementing the same six method signatures.

## Pricing dimensions

The endpoint split was designed so each tier maps cleanly to a billing model. Suggested ranges:

| Tier           | Endpoints                             | Suggested price       | Why |
|----------------|---------------------------------------|-----------------------|-----|
| Discovery      | `/wallet/{address}`, `/leaderboard/*` | $0.001–$0.01 / call   | Slow cache, mostly DB reads |
| Monitoring     | `/wallet/{address}/positions`         | $0.01–$0.05 / call    | Fresh data, hits Polymarket each time |
| Streaming (v2) | SSE feed of position changes          | $20–$100 / month flat | Continuous fetch on our side |
| History        | `/wallet/{address}/history` etc.      | $0.005 / call         | Pure DB read, value grows over time |

The streaming endpoint is the one serious copy-trading bots will actually pay for, but it requires a continuous-fetch worker on our side — we're leaving it for v2, once we have signal that the per-call business works.

## Deploy

### Railway

Push the repo and point Railway at it. `railway.toml` handles the rest.

### Render / Heroku-style

A `Procfile` is in place.

### Caching

`api/cache.py` is a thread-safe in-memory TTL cache with the same interface as a Redis client. For multi-worker production, swap the singleton for `redis.Redis()` in one file. TTLs:

- wallet score: 1h
- open positions only: 5m
- leaderboard: 30m
- verified leaderboard: 1h

## Testing

```bash
pytest -v
```

Synthetic fixtures in `tests/fixtures.py` mimic the three real wallet patterns from the research phase (phonesculptor the MLB human, gabigol the HFT bot, neutralwave23 the tilt loser) plus a low-data newbie. Tests run against fixtures only — no live API calls — so the suite is deterministic and CI-safe.

## Distribution roadmap

1. **Now** — REST API on Railway, Supabase for daily snapshot persistence
2. **Next** — Publish to MCP Hub, Replit Agent Market, awesome-mcp-servers
3. **Later** — x402 micropayments per call (USDC), historical query endpoints (the moat: every day we run, the dataset grows)

## Verified wallet examples

These are the personas the test fixtures target. Live numbers will differ as activity changes:

| Wallet          | Score | Notes |
|-----------------|-------|-------|
| `phonesculptor` | ~9/10 | MLB-focused human, distributed wins, real edge |
| `gabigol`       | bot   | Crypto 5-min Up/Down arb (edge largely dead post-Feb 2026) |
| `neutralwave23` | ~1/10 | Distributed tiny wins masking a large net loss |