atris 3.1.0 → 3.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (54)
  1. package/GETTING_STARTED.md +65 -131
  2. package/README.md +29 -4
  3. package/atris/GETTING_STARTED.md +65 -131
  4. package/atris/PERSONA.md +5 -1
  5. package/atris/atris.md +122 -153
  6. package/atris/skills/aeo/SKILL.md +117 -0
  7. package/atris/skills/atris/SKILL.md +49 -25
  8. package/atris/skills/create-member/SKILL.md +29 -9
  9. package/atris/skills/endgame/SKILL.md +9 -0
  10. package/atris/skills/improve/SKILL.md +2 -2
  11. package/atris/skills/research-search/SKILL.md +167 -0
  12. package/atris/skills/research-search/arxiv_search.py +157 -0
  13. package/atris/skills/research-search/program.md +48 -0
  14. package/atris/skills/research-search/results.tsv +6 -0
  15. package/atris/skills/research-search/scholar_search.py +154 -0
  16. package/atris/skills/tidy/SKILL.md +36 -21
  17. package/atris/team/_template/MEMBER.md +2 -0
  18. package/atris/team/validator/MEMBER.md +35 -1
  19. package/atris.md +118 -178
  20. package/bin/atris.js +37 -6
  21. package/cli/__pycache__/atris_code.cpython-314.pyc +0 -0
  22. package/cli/__pycache__/runtime_guard.cpython-312.pyc +0 -0
  23. package/cli/__pycache__/runtime_guard.cpython-314.pyc +0 -0
  24. package/cli/atris_code.py +889 -0
  25. package/cli/runtime_guard.py +693 -0
  26. package/commands/align.js +15 -0
  27. package/commands/app.js +316 -0
  28. package/commands/autopilot.js +948 -42
  29. package/commands/business.js +691 -11
  30. package/commands/computer.js +1979 -43
  31. package/commands/context-sync.js +5 -0
  32. package/commands/experiments.js +1 -1
  33. package/commands/lifecycle.js +12 -0
  34. package/commands/plugin.js +24 -0
  35. package/commands/pull.js +40 -1
  36. package/commands/push.js +44 -0
  37. package/commands/release.js +183 -0
  38. package/commands/research.js +52 -0
  39. package/commands/serve.js +1 -0
  40. package/commands/sync.js +372 -87
  41. package/commands/verify.js +53 -4
  42. package/commands/wiki.js +71 -26
  43. package/lib/file-ops.js +13 -1
  44. package/lib/journal.js +23 -0
  45. package/lib/reward-config.js +24 -0
  46. package/lib/scorecard.js +58 -6
  47. package/lib/sync-telemetry.js +59 -0
  48. package/lib/todo.js +6 -0
  49. package/lib/wiki.js +235 -60
  50. package/package.json +4 -2
  51. package/utils/api.js +19 -0
  52. package/utils/auth.js +25 -1
  53. package/utils/config.js +24 -0
  54. package/utils/update-check.js +16 -0
@@ -75,6 +75,15 @@ After running the three moves, write the result to `atris/TODO.md`:
 
  The tag must be exactly `[endgame]` (parser only matches `\w+`, no colons or hyphens). The slug lives in the section header.
 
+ 3. **Always append an RSI audit as the final task:**
+
+ ```markdown
+ - **TN:** RSI audit: read this endgame's halts, verify failures, and lessons. If the loop itself broke during this endgame (parser, reward, scorecard, verify wiring), fix it. If nothing broke, no-op. [endgame]
+ **Verify:** npm test
+ ```
+
+ This is non-negotiable. Every endgame ends by pointing the loop inward. The loop improves what it ships (RL) AND improves itself (RSI). Same chain, last task, always.
+
  3. **Each task must include a `Verify:` line** with a deterministic check:
  - **Test command:** `npm test` or `npm run test:feature`
  - **Grep pattern:** `grep -q "pattern" file.js`
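The tag rule in the hunk above (parser only matches `\w+`, no colons or hyphens) can be sketched as a minimal matcher. This regex and helper are illustrative, not the package's actual parser:

```python
from __future__ import annotations

import re

# Matches a trailing bracket tag like "[endgame]" at the end of a task line.
# Because the group is \w+ only, "[endgame]" matches while "[end-game]" and
# "[endgame:v2]" do not, consistent with the rule described above.
TAG_RE = re.compile(r"\[(\w+)\]\s*$")


def extract_tag(task_line: str) -> str | None:
    """Return the bracket tag at the end of a task line, or None."""
    m = TAG_RE.search(task_line)
    return m.group(1) if m else None
```

A tag written as `[end-game]` or `[endgame:v2]` silently fails to match, which is why the exact spelling matters.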
@@ -23,7 +23,7 @@ This is the product. The thing the user pays for. One call, one verifiable resul
  → POST /api/improve { workspace: ".", mode: "full" }
  → backend picks a task, plans, builds, reviews, verifies
  → returns { task, reward, files_changed, verify_pass, summary }
- → CLI writes scorecard to atris/scorecards.md
+ → CLI writes scorecard to .atris/presidio/scorecards.md
  → CLI reports result to user
  ```
 
@@ -45,7 +45,7 @@ The inference is Claude Code (or whatever model the backend uses). The environme
  5. On success:
  - Show what shipped (task name, files changed, verify result)
  - Show the reward score
- - Write scorecard to `atris/scorecards.md`
+ - Write scorecard to `.atris/presidio/scorecards.md`
  - Append tick to today's journal
  6. On failure:
  - Show the error
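The success path writes a scorecard entry from the `/api/improve` response. A hypothetical sketch, assuming only the documented response fields (`task`, `reward`, `files_changed`, `verify_pass`, `summary`); the entry layout here is invented, not the CLI's actual format:

```python
def scorecard_line(result: dict) -> str:
    """Render one /api/improve result as a single scorecard bullet (illustrative layout)."""
    status = "PASS" if result["verify_pass"] else "FAIL"
    files = ", ".join(result["files_changed"])
    return (f"- **{result['task']}** | reward {result['reward']:.2f} "
            f"| {status} | {files} | {result['summary']}")
```

Whatever the real format is, the point of the flow above stands: the CLI persists one line per tick to `.atris/presidio/scorecards.md` so the loop has a durable record.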
@@ -0,0 +1,167 @@
+ ---
+ name: research-search
+ description: "Fast research sweep — arxiv, semantic scholar, github, web. Finds papers, scores relevance, extracts actionable insights, stores to wiki. Triggers on: research search, find papers, latest research, arxiv, what's new in, sweep papers, research sweep."
+ version: 1.0.0
+ tags:
+ - research
+ - arxiv
+ - papers
+ - knowledge
+ - ingestion
+ ---
+
+ # /research — Fast Research Sweep
+
+ Find the latest research on a topic, score it for relevance, extract what you can BUILD with it, store the best finds.
+
+ ## Usage
+
+ ```
+ /research <topic>                      # Sweep a topic, show top results
+ /research <topic> --ingest             # Sweep + store best finds to wiki
+ /research <topic> --deep <arxiv-url>   # Deep-read a specific paper
+ /research --sweep                      # Run all topics from program.md
+ /research --trending                   # What's hot this week in your areas
+ ```
+
+ ## On invoke
+
+ ### Step 0: Load the research program
+
+ Read `atris/skills/research/program.md` for:
+ - Active research topics (what to search for)
+ - Scoring criteria (what makes a paper relevant)
+ - Date window (default: last 6 months)
+ - Prior results from `atris/skills/research/results.tsv`
+
+ ### Step 1: Multi-source search
+
+ For the given topic, search ALL of these sources in parallel (use Agent tool for parallelism):
+
+ **Source A — arxiv API**
+ Run via Bash:
+ ```bash
+ python3 atris/skills/research/arxiv_search.py "<topic>" --after 2025-10-01 --limit 20
+ ```
+ Returns JSON array of papers with title, authors, abstract, date, url, categories.
+
+ **Source B — Semantic Scholar API**
+ Run via Bash:
+ ```bash
+ python3 atris/skills/research/scholar_search.py "<topic>" --after 2025-10-01 --limit 20
+ ```
+ Returns JSON array with title, authors, abstract, date, url, citation count, venue.
+
+ **Source C — Web search**
+ Use WebSearch tool: `"<topic>" site:arxiv.org OR site:github.com 2025..2026`
+
+ **Source D — GitHub**
+ Use WebSearch tool: `"<topic>" site:github.com stars:>100 pushed:>2025-10-01`
+
+ ### Step 2: Deduplicate and rank
+
+ Merge results from all sources. Deduplicate by title similarity.
+
+ For each paper, score 1-10 on:
+ - **Relevance**: Does this directly apply to our research program?
+ - **Recency**: Published in the target date window?
+ - **Actionability**: Can we BUILD something with this? Not just theory?
+ - **Novelty**: Is this a new technique, or incremental on known work?
+
+ Compute total = (relevance * 3 + actionability * 3 + recency * 2 + novelty * 2) / 10
+
+ ### Step 3: Present results
+
+ Show a ranked table:
+
+ ```
+ # Research Sweep: <topic>
+ ## Date: YYYY-MM-DD | Sources: arxiv, scholar, web, github | Papers found: N
+
+ | # | Score | Title | Date | Key Insight | Source |
+ |---|-------|-------|------|-------------|--------|
+ | 1 | 9.2 | ... | ... | ... | arxiv |
+ | 2 | 8.5 | ... | ... | ... | scholar|
+ ```
+
+ For the top 5, show:
+ - **One-line insight**: What's the actionable takeaway
+ - **Applies to**: Which of our projects/experiments this helps
+ - **Build it**: What we'd actually implement
+
+ ### Step 4: Deep read (optional, on request or --ingest)
+
+ For papers the user selects (or top 3 if --ingest):
+
+ 1. Use WebFetch to read the full arxiv abstract page
+ 2. If PDF: note the URL for manual reading, extract what you can from abstract + related work
+ 3. Extract:
+ - Core technique (one paragraph)
+ - Key results (numbers, benchmarks)
+ - How to implement at inference time (if applicable)
+ - Dependencies (what you need: fine-tuning? API access? special hardware?)
+ - Limitations the authors acknowledge
+
+ ### Step 5: Store (if --ingest)
+
+ Write each top paper to `atris/wiki/research/<slug>.md`:
+
+ ```markdown
+ ---
+ title: <paper title>
+ source: <arxiv/scholar/github url>
+ date: <publication date>
+ relevance_score: <1-10>
+ last_compiled: <today>
+ tags: [<topic tags>]
+ ---
+
+ # <Paper Title>
+
+ **Authors:** ...
+ **Published:** ...
+ **URL:** ...
+
+ ## Core Technique
+ <one paragraph>
+
+ ## Key Results
+ <bullet points with numbers>
+
+ ## How to Use (Inference-Time)
+ <practical implementation notes>
+
+ ## Applies To
+ <which of our projects benefit>
+
+ ## Limitations
+ <what the authors say doesn't work>
+ ```
+
+ Update `atris/wiki/index.md` with the new pages.
+
+ ### Step 6: Log
+
+ Append to `atris/skills/research/results.tsv`:
+ ```
+ timestamp topic papers_found top_score top_paper source_breakdown
+ ```
+
+ Over time, this log shows which topics are producing the best finds and which sources are most useful.
+
+ ## RL Integration
+
+ The research program evolves:
+ 1. After each sweep, note which papers scored highest and from which source
+ 2. If a paper leads to a successful implementation (tracked via /storysim or /autoresearch), boost that topic's weight
+ 3. If a sweep produces nothing actionable, refine the search queries
+ 4. The program.md file is the "policy" — update it as you learn what works
+
+ ## Rules
+
+ - Date filter is HARD. Do not include papers outside the configured window.
+ - Actionability > novelty. A mediocre paper you can build with beats a brilliant paper you can't.
+ - No summaries without sources. Every claim needs a URL.
+ - Prefer papers with code (GitHub links, "code available at...").
+ - Don't deep-read everything. Score first, read the top 3-5.
+ - If a paper requires fine-tuning and the user only has API access, flag it clearly.
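The Step 2 formula in the skill above can be sketched directly. The weights come from the skill text (and are restated in program.md); the 1-10 range check is an added assumption:

```python
def paper_score(relevance: int, actionability: int, recency: int, novelty: int) -> float:
    """total = (relevance*3 + actionability*3 + recency*2 + novelty*2) / 10"""
    for v in (relevance, actionability, recency, novelty):
        if not 1 <= v <= 10:
            raise ValueError("each criterion is scored 1-10")
    return (relevance * 3 + actionability * 3 + recency * 2 + novelty * 2) / 10
```

With all criteria at 10 the maximum is 10.0, so the 9.x scores in the example table sit near the top of the scale.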
@@ -0,0 +1,157 @@
+ #!/usr/bin/env python3
+ """
+ arxiv API search — returns structured JSON for papers matching a query.
+
+ Uses the arxiv Atom API (no key required, free, no rate limit for reasonable use).
+
+ Usage:
+     python3 arxiv_search.py "RL creative writing" --after 2025-10-01 --limit 20
+     python3 arxiv_search.py "multi-agent debate" --categories cs.AI cs.CL --limit 10
+ """
+
+ from __future__ import annotations
+
+ import argparse
+ import json
+ import sys
+ import urllib.parse
+ import urllib.request
+ import xml.etree.ElementTree as ET
+ from datetime import datetime
+
+
+ ARXIV_API = "http://export.arxiv.org/api/query"
+ ATOM_NS = "{http://www.w3.org/2005/Atom}"
+ ARXIV_NS = "{http://arxiv.org/schemas/atom}"
+
+
+ def search_arxiv(
+     query: str,
+     after: str | None = None,
+     categories: list[str] | None = None,
+     limit: int = 20,
+ ) -> list[dict]:
+     """Search arxiv API and return structured results."""
+
+     # Build search query — use AND between words for broader matching
+     # Quoting the whole phrase is too strict; split into AND-ed terms
+     terms = query.strip().split()
+     if len(terms) <= 3:
+         term_query = " AND ".join(f"all:{t}" for t in terms)
+     else:
+         # For longer queries, group into bigrams + individual key terms
+         term_query = " AND ".join(f"all:{t}" for t in terms)
+
+     search_parts = [term_query]
+     if categories:
+         cat_query = " OR ".join(f"cat:{c}" for c in categories)
+         search_parts.append(f"({cat_query})")
+
+     search_query = " AND ".join(search_parts)
+
+     params = {
+         "search_query": search_query,
+         "start": 0,
+         "max_results": min(limit, 50),  # arxiv caps at 50 per request
+         "sortBy": "submittedDate",
+         "sortOrder": "descending",
+     }
+
+     url = f"{ARXIV_API}?{urllib.parse.urlencode(params)}"
+
+     try:
+         req = urllib.request.Request(url, headers={"User-Agent": "AtrisResearch/1.0"})
+         with urllib.request.urlopen(req, timeout=30) as resp:
+             xml_data = resp.read().decode("utf-8")
+     except Exception as e:
+         print(json.dumps({"error": str(e), "papers": []}))
+         sys.exit(1)
+
+     # Parse Atom XML
+     root = ET.fromstring(xml_data)
+     entries = root.findall(f"{ATOM_NS}entry")
+
+     papers = []
+     for entry in entries:
+         # Extract fields
+         title = entry.findtext(f"{ATOM_NS}title", "").strip().replace("\n", " ")
+         abstract = entry.findtext(f"{ATOM_NS}summary", "").strip().replace("\n", " ")
+         published = entry.findtext(f"{ATOM_NS}published", "")
+         updated = entry.findtext(f"{ATOM_NS}updated", "")
+
+         # Authors
+         authors = []
+         for author in entry.findall(f"{ATOM_NS}author"):
+             name = author.findtext(f"{ATOM_NS}name", "")
+             if name:
+                 authors.append(name)
+
+         # Links
+         arxiv_url = ""
+         pdf_url = ""
+         for link in entry.findall(f"{ATOM_NS}link"):
+             href = link.get("href", "")
+             link_type = link.get("type", "")
+             link_title = link.get("title", "")
+             if link_title == "pdf":
+                 pdf_url = href
+             elif link_type == "text/html" or (not arxiv_url and "abs" in href):
+                 arxiv_url = href
+
+         if not arxiv_url:
+             id_elem = entry.findtext(f"{ATOM_NS}id", "")
+             arxiv_url = id_elem
+
+         # Categories
+         cats = []
+         for cat in entry.findall(f"{ARXIV_NS}primary_category"):
+             term = cat.get("term", "")
+             if term:
+                 cats.append(term)
+         for cat in entry.findall(f"{ATOM_NS}category"):
+             term = cat.get("term", "")
+             if term and term not in cats:
+                 cats.append(term)
+
+         # Parse date
+         pub_date = published[:10] if published else ""
+
+         # Date filter
+         if after and pub_date < after:
+             continue
+
+         papers.append({
+             "title": title,
+             "authors": authors[:5],  # Cap at 5 authors
+             "abstract": abstract[:500],  # Cap abstract length
+             "date": pub_date,
+             "url": arxiv_url,
+             "pdf": pdf_url,
+             "categories": cats[:5],
+             "source": "arxiv",
+         })
+
+     return papers
+
+
+ def main() -> int:
+     parser = argparse.ArgumentParser(description="Search arxiv for papers")
+     parser.add_argument("query", help="Search query")
+     parser.add_argument("--after", help="Only papers after this date (YYYY-MM-DD)")
+     parser.add_argument("--categories", nargs="*", help="arxiv categories (e.g. cs.AI cs.CL)")
+     parser.add_argument("--limit", type=int, default=20, help="Max results")
+     args = parser.parse_args()
+
+     papers = search_arxiv(
+         query=args.query,
+         after=args.after,
+         categories=args.categories,
+         limit=args.limit,
+     )
+
+     print(json.dumps({"papers": papers, "count": len(papers), "query": args.query}))
+     return 0
+
+
+ if __name__ == "__main__":
+     raise SystemExit(main())
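SKILL.md's Step 2 says to deduplicate the merged results "by title similarity" without naming a method. The stdlib `difflib.SequenceMatcher` is one way to do it; the 0.9 threshold and first-wins policy below are assumptions, not the skill's specification:

```python
from difflib import SequenceMatcher

def dedup_by_title(papers: list[dict], threshold: float = 0.9) -> list[dict]:
    """Keep the first paper seen for each cluster of near-identical titles."""
    kept: list[dict] = []
    for p in papers:
        t = p["title"].lower()
        # Drop p if its title is near-identical to any already-kept title.
        if not any(SequenceMatcher(None, t, k["title"].lower()).ratio() >= threshold
                   for k in kept):
            kept.append(p)
    return kept
```

This is O(n²) in the number of papers, which is fine at the ~20-per-source limits the scripts use.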
@@ -0,0 +1,48 @@
+ # Research Program
+
+ > Customize this file for your project. Add your topics, adjust the date window, define what "actionable" means for you.
+
+ ## Date Window
+ **After:** 2025-10-01
+ **Before:** 2026-12-31
+
+ ## Active Topics
+
+ > Replace these with your own research interests. Each topic should be specific enough to produce useful search results.
+
+ ### 1. Example: Inference-time compute scaling
+ - Best-of-N rejection sampling
+ - Tree of thought / MCTS for LLMs
+ - Compute-optimal allocation
+ - Extended thinking for complex tasks
+
+ ### 2. Example: LLM-as-Judge calibration
+ - Scoring bias in LLM judges
+ - Pairwise vs absolute scoring reliability
+ - Multi-criteria rubric design
+ - Position bias, length bias, verbosity bias
+
+ ### 3. Example: Self-improving AI systems
+ - Curiosity-driven RL (anti-mode-collapse)
+ - Verbalized sampling for diversity
+ - Agent self-reflection and metacognition
+ - Keep/revert experiment loops
+
+ ## Scoring Criteria
+
+ | Criterion | Weight | What it means |
+ |-----------|--------|---------------|
+ | Relevance | 3x | Directly applies to one of your active topics |
+ | Actionability | 3x | Can you BUILD something with this using API access only (no fine-tuning)? |
+ | Recency | 2x | Published within your date window |
+ | Novelty | 2x | New technique, not incremental on known work |
+
+ **Total = (relevance * 3 + actionability * 3 + recency * 2 + novelty * 2) / 10**
+
+ ## Preferences
+
+ - Papers with code > papers without
+ - Inference-time techniques > training-required techniques
+ - Applied results > theoretical frameworks
+ - Concrete numbers > vague claims
+ - Short papers that say one thing well > long surveys
@@ -0,0 +1,6 @@
+ timestamp topic papers_found top_score top_paper source_breakdown
+ 2026-04-13T11:50:00Z rl-creative-writing+self-improvement+story-coherence 30 9.2 R2-Write (arxiv:2604.03004) arxiv:30 scholar:0 web:0
+ 2026-04-13T17:50:00Z rubric-refinement+scene-rewriting 20 9.4 RRD Rubric Refinement (arxiv:2602.05125) arxiv:10 web:10
+ 2026-04-13T23:10:00Z sensory-language+embodiment 12 8.6 Zero Body Problem (arxiv:2504.06393) web:12
+ 2026-04-14T04:10:00Z scorer-variance+judge-consistency 20 9.0 Efficient Noisy LLM Judge (arxiv:2601.05420) web:20
+ 2026-04-14T05:00:00Z micro-gesture+embodied-fiction 10 6.0 none-actionable web:10
@@ -0,0 +1,154 @@
+ #!/usr/bin/env python3
+ """
+ Semantic Scholar API search — returns structured JSON for papers.
+
+ Uses the Semantic Scholar Academic Graph API (free, no key required for basic use,
+ rate limited to 100 requests/5 min without key).
+
+ Usage:
+     python3 scholar_search.py "reinforcement learning creative writing" --after 2025-10-01 --limit 20
+     python3 scholar_search.py "LLM self-play" --min-citations 5
+ """
+
+ from __future__ import annotations
+
+ import argparse
+ import json
+ import sys
+ import urllib.parse
+ import urllib.request
+ import time
+
+
+ S2_API = "https://api.semanticscholar.org/graph/v1/paper/search"
+ S2_FIELDS = "title,authors,abstract,year,publicationDate,externalIds,citationCount,venue,openAccessPdf,url"
+
+
+ def search_scholar(
+     query: str,
+     after: str | None = None,
+     limit: int = 20,
+     min_citations: int = 0,
+ ) -> list[dict]:
+     """Search Semantic Scholar and return structured results."""
+
+     # Build year filter
+     year_filter = ""
+     if after:
+         start_year = after[:4]
+         year_filter = f"{start_year}-"
+
+     params = {
+         "query": query,
+         "limit": min(limit, 100),
+         "fields": S2_FIELDS,
+     }
+     if year_filter:
+         params["year"] = year_filter
+
+     url = f"{S2_API}?{urllib.parse.urlencode(params)}"
+
+     try:
+         req = urllib.request.Request(url, headers={
+             "User-Agent": "AtrisResearch/1.0",
+             "Accept": "application/json",
+         })
+         with urllib.request.urlopen(req, timeout=30) as resp:
+             data = json.loads(resp.read().decode("utf-8"))
+     except urllib.error.HTTPError as e:
+         if e.code == 429:
+             # Rate limited — wait and retry once
+             time.sleep(5)
+             try:
+                 with urllib.request.urlopen(req, timeout=30) as resp:
+                     data = json.loads(resp.read().decode("utf-8"))
+             except Exception as e2:
+                 print(json.dumps({"error": f"Rate limited: {e2}", "papers": []}))
+                 sys.exit(1)
+         else:
+             print(json.dumps({"error": f"HTTP {e.code}: {e.reason}", "papers": []}))
+             sys.exit(1)
+     except Exception as e:
+         print(json.dumps({"error": str(e), "papers": []}))
+         sys.exit(1)
+
+     results = data.get("data", [])
+     papers = []
+
+     for item in results:
+         if not item:
+             continue
+
+         title = (item.get("title") or "").strip()
+         if not title:
+             continue
+
+         # Authors
+         authors = []
+         for author in (item.get("authors") or [])[:5]:
+             name = author.get("name", "")
+             if name:
+                 authors.append(name)
+
+         abstract = (item.get("abstract") or "")[:500]
+         pub_date = item.get("publicationDate") or ""
+         year = item.get("year") or ""
+         citations = item.get("citationCount") or 0
+         venue = item.get("venue") or ""
+
+         # URL
+         paper_url = item.get("url") or ""
+         external_ids = item.get("externalIds") or {}
+         arxiv_id = external_ids.get("ArXiv")
+         if arxiv_id:
+             paper_url = f"https://arxiv.org/abs/{arxiv_id}"
+
+         # PDF
+         pdf_info = item.get("openAccessPdf") or {}
+         pdf_url = pdf_info.get("url") or ""
+
+         # Date filter
+         date_str = pub_date[:10] if pub_date else (str(year) if year else "")
+         if after and date_str and date_str < after:
+             continue
+
+         # Citation filter
+         if citations < min_citations:
+             continue
+
+         papers.append({
+             "title": title,
+             "authors": authors,
+             "abstract": abstract,
+             "date": date_str,
+             "url": paper_url,
+             "pdf": pdf_url,
+             "citations": citations,
+             "venue": venue,
+             "source": "semantic_scholar",
+         })
+
+     return papers
+
+
+ def main() -> int:
+     parser = argparse.ArgumentParser(description="Search Semantic Scholar for papers")
+     parser.add_argument("query", help="Search query")
+     parser.add_argument("--after", help="Only papers after this date (YYYY-MM-DD)")
+     parser.add_argument("--limit", type=int, default=20, help="Max results")
+     parser.add_argument("--min-citations", type=int, default=0, help="Minimum citation count")
+     args = parser.parse_args()
+
+     papers = search_scholar(
+         query=args.query,
+         after=args.after,
+         limit=args.limit,
+         min_citations=args.min_citations,
+     )
+
+     print(json.dumps({"papers": papers, "count": len(papers), "query": args.query}))
+     return 0
+
+
+ if __name__ == "__main__":
+     raise SystemExit(main())
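Both scripts print a JSON object with a `papers` array, so merging their output (the input to the skill's Step 2) reduces to concatenating those arrays. A sketch; the script paths mirror the usage shown in SKILL.md and are assumptions about where the files live at runtime:

```python
import json
import subprocess

def merge_payloads(payloads: list[str]) -> list[dict]:
    """Concatenate the "papers" arrays from each script's JSON output."""
    papers: list[dict] = []
    for raw in payloads:
        papers.extend(json.loads(raw).get("papers", []))
    return papers

def sweep(topic: str, after: str = "2025-10-01") -> list[dict]:
    """Run both search scripts for one topic and merge their results."""
    outputs = []
    for script in ("arxiv_search.py", "scholar_search.py"):
        proc = subprocess.run(
            ["python3", f"atris/skills/research/{script}", topic,
             "--after", after, "--limit", "20"],
            capture_output=True, text=True, check=True,
        )
        outputs.append(proc.stdout)
    return merge_payloads(outputs)
```

Each paper dict carries a `source` field ("arxiv" or "semantic_scholar"), so the merged list still supports the per-source breakdown logged to results.tsv.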