atris 3.2.0 → 3.11.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (55)
  1. package/GETTING_STARTED.md +65 -131
  2. package/README.md +18 -2
  3. package/atris/GETTING_STARTED.md +65 -131
  4. package/atris/PERSONA.md +5 -1
  5. package/atris/atris.md +122 -153
  6. package/atris/skills/aeo/SKILL.md +117 -0
  7. package/atris/skills/atris/SKILL.md +49 -25
  8. package/atris/skills/create-member/SKILL.md +29 -9
  9. package/atris/skills/endgame/SKILL.md +9 -0
  10. package/atris/skills/research-search/SKILL.md +167 -0
  11. package/atris/skills/research-search/arxiv_search.py +157 -0
  12. package/atris/skills/research-search/program.md +48 -0
  13. package/atris/skills/research-search/results.tsv +6 -0
  14. package/atris/skills/research-search/scholar_search.py +154 -0
  15. package/atris/skills/tidy/SKILL.md +36 -21
  16. package/atris/team/_template/MEMBER.md +2 -0
  17. package/atris/team/validator/MEMBER.md +35 -1
  18. package/atris.md +118 -178
  19. package/bin/atris.js +46 -12
  20. package/cli/__pycache__/atris_code.cpython-314.pyc +0 -0
  21. package/cli/__pycache__/runtime_guard.cpython-312.pyc +0 -0
  22. package/cli/__pycache__/runtime_guard.cpython-314.pyc +0 -0
  23. package/cli/atris_code.py +889 -0
  24. package/cli/runtime_guard.py +693 -0
  25. package/commands/align.js +16 -0
  26. package/commands/app.js +316 -0
  27. package/commands/autopilot.js +863 -23
  28. package/commands/brainstorm.js +7 -5
  29. package/commands/business.js +677 -2
  30. package/commands/clean.js +19 -3
  31. package/commands/computer.js +2022 -43
  32. package/commands/context-sync.js +5 -0
  33. package/commands/integrations.js +14 -9
  34. package/commands/lifecycle.js +12 -0
  35. package/commands/plugin.js +24 -0
  36. package/commands/pull.js +86 -11
  37. package/commands/push.js +153 -9
  38. package/commands/serve.js +1 -0
  39. package/commands/sync.js +272 -76
  40. package/commands/verify.js +50 -1
  41. package/commands/wiki.js +27 -2
  42. package/commands/workflow.js +24 -9
  43. package/lib/file-ops.js +13 -1
  44. package/lib/journal.js +23 -0
  45. package/lib/manifest.js +3 -0
  46. package/lib/scorecard.js +42 -4
  47. package/lib/sync-telemetry.js +59 -0
  48. package/lib/todo.js +6 -0
  49. package/lib/wiki.js +150 -6
  50. package/lib/workspace-safety.js +87 -0
  51. package/package.json +2 -1
  52. package/utils/api.js +19 -0
  53. package/utils/auth.js +25 -1
  54. package/utils/config.js +24 -0
  55. package/utils/update-check.js +16 -0
@@ -75,6 +75,15 @@ After running the three moves, write the result to `atris/TODO.md`:
 
  The tag must be exactly `[endgame]` (parser only matches `\w+`, no colons or hyphens). The slug lives in the section header.
 
+ 3. **Always append an RSI audit as the final task:**
+
+ ```markdown
+ - **TN:** RSI audit: read this endgame's halts, verify failures, and lessons. If the loop itself broke during this endgame (parser, reward, scorecard, verify wiring), fix it. If nothing broke, no-op. [endgame]
+ **Verify:** npm test
+ ```
+
+ This is non-negotiable. Every endgame ends by pointing the loop inward. The loop improves what it ships (RL) AND improves itself (RSI). Same chain, last task, always.
+
  3. **Each task must include a `Verify:` line** with a deterministic check:
  - **Test command:** `npm test` or `npm run test:feature`
  - **Grep pattern:** `grep -q "pattern" file.js`
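The `[endgame]` tag rule in the hunk above can be sanity-checked with a small regex sketch. This is an assumption about the parser's shape based on the stated `\w+` rule, not the shipped atris parser:

```python
from __future__ import annotations

import re

# Hedged sketch of the tag rule above: only \w+ inside brackets counts,
# so a colon or hyphen makes the tag invisible to the parser.
TAG_RE = re.compile(r"\[(\w+)\]\s*$")

def parse_tag(task_line: str) -> str | None:
    m = TAG_RE.search(task_line)
    return m.group(1) if m else None

assert parse_tag("- **TN:** RSI audit ... [endgame]") == "endgame"
assert parse_tag("- task [end-game]") is None  # hyphen breaks \w+
assert parse_tag("- task [endgame:]") is None  # colon breaks \w+
```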
@@ -0,0 +1,167 @@
+ ---
+ name: research-search
+ description: "Fast research sweep — arxiv, semantic scholar, github, web. Finds papers, scores relevance, extracts actionable insights, stores to wiki. Triggers on: research search, find papers, latest research, arxiv, what's new in, sweep papers, research sweep."
+ version: 1.0.0
+ tags:
+ - research
+ - arxiv
+ - papers
+ - knowledge
+ - ingestion
+ ---
+
+ # /research — Fast Research Sweep
+
+ Find the latest research on a topic, score it for relevance, extract what you can BUILD with it, store the best finds.
+
+ ## Usage
+
+ ```
+ /research <topic>                     # Sweep a topic, show top results
+ /research <topic> --ingest            # Sweep + store best finds to wiki
+ /research <topic> --deep <arxiv-url>  # Deep-read a specific paper
+ /research --sweep                     # Run all topics from program.md
+ /research --trending                  # What's hot this week in your areas
+ ```
+
+ ## On invoke
+
+ ### Step 0: Load the research program
+
+ Read `atris/skills/research/program.md` for:
+ - Active research topics (what to search for)
+ - Scoring criteria (what makes a paper relevant)
+ - Date window (default: last 6 months)
+ - Prior results from `atris/skills/research/results.tsv`
+
+ ### Step 1: Multi-source search
+
+ For the given topic, search ALL of these sources in parallel (use Agent tool for parallelism):
+
+ **Source A — arxiv API**
+ Run via Bash:
+ ```bash
+ python3 atris/skills/research/arxiv_search.py "<topic>" --after 2025-10-01 --limit 20
+ ```
+ Returns JSON array of papers with title, authors, abstract, date, url, categories.
+
+ **Source B — Semantic Scholar API**
+ Run via Bash:
+ ```bash
+ python3 atris/skills/research/scholar_search.py "<topic>" --after 2025-10-01 --limit 20
+ ```
+ Returns JSON array with title, authors, abstract, date, url, citation count, venue.
+
+ **Source C — Web search**
+ Use WebSearch tool: `"<topic>" site:arxiv.org OR site:github.com 2025..2026`
+
+ **Source D — GitHub**
+ Use WebSearch tool: `"<topic>" site:github.com stars:>100 pushed:>2025-10-01`
+
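The two script sources above can be launched concurrently from one wrapper. A minimal sketch, assuming the script paths and the `{"papers": [...]}` JSON shape shown in this skill; this wrapper itself is not part of the package:

```python
from __future__ import annotations

import json
import subprocess

# Assumed script locations, mirroring the Bash invocations above.
SCRIPTS = [
    "atris/skills/research/arxiv_search.py",
    "atris/skills/research/scholar_search.py",
]

def build_cmds(topic: str, after: str = "2025-10-01", limit: int = 20) -> list[list[str]]:
    # One command per source script, same flags as the examples above.
    return [["python3", s, topic, "--after", after, "--limit", str(limit)]
            for s in SCRIPTS]

def sweep(topic: str) -> list[dict]:
    # Start both searches at once, then merge their "papers" arrays.
    procs = [subprocess.Popen(c, stdout=subprocess.PIPE, text=True)
             for c in build_cmds(topic)]
    papers: list[dict] = []
    for p in procs:
        out, _ = p.communicate(timeout=60)
        papers.extend(json.loads(out).get("papers", []))
    return papers
```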
+ ### Step 2: Deduplicate and rank
+
+ Merge results from all sources. Deduplicate by title similarity.
+
+ For each paper, score 1-10 on:
+ - **Relevance**: Does this directly apply to our research program?
+ - **Recency**: Published in the target date window?
+ - **Actionability**: Can we BUILD something with this? Not just theory?
+ - **Novelty**: Is this a new technique, or incremental on known work?
+
+ Compute total = (relevance * 3 + actionability * 3 + recency * 2 + novelty * 2) / 10
+
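The dedup-and-score step can be sketched as follows. The weighted formula is taken verbatim from above; using difflib's ratio as the title-similarity metric is an assumption, since the skill doesn't fix one:

```python
from difflib import SequenceMatcher

def is_dup(a: str, b: str, threshold: float = 0.9) -> bool:
    # Title similarity via difflib ratio — one possible metric.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def total_score(relevance: float, actionability: float,
                recency: float, novelty: float) -> float:
    # The Step 2 weighted formula, verbatim.
    return (relevance * 3 + actionability * 3 + recency * 2 + novelty * 2) / 10

# A 9/9/8/7 paper lands at 8.4 on the 1-10 scale.
assert total_score(9, 9, 8, 7) == 8.4
assert is_dup("R2-Write: RL for Writing", "r2-write: rl for writing")
```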
+ ### Step 3: Present results
+
+ Show a ranked table:
+
+ ```
+ # Research Sweep: <topic>
+ ## Date: YYYY-MM-DD | Sources: arxiv, scholar, web, github | Papers found: N
+
+ | # | Score | Title | Date | Key Insight | Source |
+ |---|-------|-------|------|-------------|--------|
+ | 1 | 9.2 | ... | ... | ... | arxiv |
+ | 2 | 8.5 | ... | ... | ... | scholar |
+ ```
+
+ For the top 5, show:
+ - **One-line insight**: What's the actionable takeaway
+ - **Applies to**: Which of our projects/experiments this helps
+ - **Build it**: What we'd actually implement
+
+ ### Step 4: Deep read (optional, on request or --ingest)
+
+ For papers the user selects (or top 3 if --ingest):
+
+ 1. Use WebFetch to read the full arxiv abstract page
+ 2. If PDF: note the URL for manual reading, extract what you can from abstract + related work
+ 3. Extract:
+    - Core technique (one paragraph)
+    - Key results (numbers, benchmarks)
+    - How to implement at inference time (if applicable)
+    - Dependencies (what you need: fine-tuning? API access? special hardware?)
+    - Limitations the authors acknowledge
+
+ ### Step 5: Store (if --ingest)
+
+ Write each top paper to `atris/wiki/research/<slug>.md`:
+
+ ```markdown
+ ---
+ title: <paper title>
+ source: <arxiv/scholar/github url>
+ date: <publication date>
+ relevance_score: <1-10>
+ last_compiled: <today>
+ tags: [<topic tags>]
+ ---
+
+ # <Paper Title>
+
+ **Authors:** ...
+ **Published:** ...
+ **URL:** ...
+
+ ## Core Technique
+ <one paragraph>
+
+ ## Key Results
+ <bullet points with numbers>
+
+ ## How to Use (Inference-Time)
+ <practical implementation notes>
+
+ ## Applies To
+ <which of our projects benefit>
+
+ ## Limitations
+ <what the authors say doesn't work>
+ ```
+
+ Update `atris/wiki/index.md` with the new pages.
+
+ ### Step 6: Log
+
+ Append to `atris/skills/research/results.tsv`:
+ ```
+ timestamp	topic	papers_found	top_score	top_paper	source_breakdown
+ ```
+
+ Over time, this log shows which topics are producing the best finds and which sources are most useful.
+
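Appending a Step 6 row can be sketched as below. The column order follows the header line above; the ISO-8601 `...Z` timestamp format is an assumption based on the sample rows in results.tsv:

```python
import datetime
import os
import tempfile

def log_row(path: str, topic: str, papers_found: int, top_score: float,
            top_paper: str, source_breakdown: str) -> str:
    # One tab-separated row matching the results.tsv header columns:
    # timestamp  topic  papers_found  top_score  top_paper  source_breakdown
    ts = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    row = "\t".join([ts, topic, str(papers_found), str(top_score),
                     top_paper, source_breakdown])
    with open(path, "a", encoding="utf-8") as f:
        f.write(row + "\n")
    return row

# Demo against a throwaway file rather than the real results.tsv.
fd, demo_path = tempfile.mkstemp(suffix=".tsv")
os.close(fd)
row = log_row(demo_path, "judge-consistency", 20, 9.0,
              "Efficient Noisy LLM Judge", "web:20")
assert row.count("\t") == 5  # six columns
```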
+ ## RL Integration
+
+ The research program evolves:
+ 1. After each sweep, note which papers scored highest and from which source
+ 2. If a paper leads to a successful implementation (tracked via /storysim or /autoresearch), boost that topic's weight
+ 3. If a sweep produces nothing actionable, refine the search queries
+ 4. The program.md file is the "policy" — update it as you learn what works
+
+ ## Rules
+
+ - Date filter is HARD. Do not include papers outside the configured window.
+ - Actionability > novelty. A mediocre paper you can build with beats a brilliant paper you can't.
+ - No summaries without sources. Every claim needs a URL.
+ - Prefer papers with code (GitHub links, "code available at...").
+ - Don't deep-read everything. Score first, read the top 3-5.
+ - If a paper requires fine-tuning and the user only has API access, flag it clearly.
@@ -0,0 +1,157 @@
+ #!/usr/bin/env python3
+ """
+ arxiv API search — returns structured JSON for papers matching a query.
+
+ Uses the arxiv Atom API (no key required, free, no rate limit for reasonable use).
+
+ Usage:
+     python3 arxiv_search.py "RL creative writing" --after 2025-10-01 --limit 20
+     python3 arxiv_search.py "multi-agent debate" --categories cs.AI cs.CL --limit 10
+ """
+
+ from __future__ import annotations
+
+ import argparse
+ import json
+ import sys
+ import urllib.parse
+ import urllib.request
+ import xml.etree.ElementTree as ET
+
+
+ ARXIV_API = "http://export.arxiv.org/api/query"
+ ATOM_NS = "{http://www.w3.org/2005/Atom}"
+ ARXIV_NS = "{http://arxiv.org/schemas/atom}"
+
+
+ def search_arxiv(
+     query: str,
+     after: str | None = None,
+     categories: list[str] | None = None,
+     limit: int = 20,
+ ) -> list[dict]:
+     """Search arxiv API and return structured results."""
+
+     # Build search query — use AND between words for broader matching
+     # Quoting the whole phrase is too strict; split into AND-ed terms
+     terms = query.strip().split()
+     term_query = " AND ".join(f"all:{t}" for t in terms)
+
+     search_parts = [term_query]
+     if categories:
+         cat_query = " OR ".join(f"cat:{c}" for c in categories)
+         search_parts.append(f"({cat_query})")
+
+     search_query = " AND ".join(search_parts)
+
+     params = {
+         "search_query": search_query,
+         "start": 0,
+         "max_results": min(limit, 50),  # request at most 50 per call
+         "sortBy": "submittedDate",
+         "sortOrder": "descending",
+     }
+
+     url = f"{ARXIV_API}?{urllib.parse.urlencode(params)}"
+
+     try:
+         req = urllib.request.Request(url, headers={"User-Agent": "AtrisResearch/1.0"})
+         with urllib.request.urlopen(req, timeout=30) as resp:
+             xml_data = resp.read().decode("utf-8")
+     except Exception as e:
+         print(json.dumps({"error": str(e), "papers": []}))
+         sys.exit(1)
+
+     # Parse Atom XML
+     root = ET.fromstring(xml_data)
+     entries = root.findall(f"{ATOM_NS}entry")
+
+     papers = []
+     for entry in entries:
+         # Extract fields
+         title = entry.findtext(f"{ATOM_NS}title", "").strip().replace("\n", " ")
+         abstract = entry.findtext(f"{ATOM_NS}summary", "").strip().replace("\n", " ")
+         published = entry.findtext(f"{ATOM_NS}published", "")
+
+         # Authors
+         authors = []
+         for author in entry.findall(f"{ATOM_NS}author"):
+             name = author.findtext(f"{ATOM_NS}name", "")
+             if name:
+                 authors.append(name)
+
+         # Links
+         arxiv_url = ""
+         pdf_url = ""
+         for link in entry.findall(f"{ATOM_NS}link"):
+             href = link.get("href", "")
+             link_type = link.get("type", "")
+             link_title = link.get("title", "")
+             if link_title == "pdf":
+                 pdf_url = href
+             elif link_type == "text/html" or (not arxiv_url and "abs" in href):
+                 arxiv_url = href
+
+         if not arxiv_url:
+             arxiv_url = entry.findtext(f"{ATOM_NS}id", "")
+
+         # Categories
+         cats = []
+         for cat in entry.findall(f"{ARXIV_NS}primary_category"):
+             term = cat.get("term", "")
+             if term:
+                 cats.append(term)
+         for cat in entry.findall(f"{ATOM_NS}category"):
+             term = cat.get("term", "")
+             if term and term not in cats:
+                 cats.append(term)
+
+         # Parse date
+         pub_date = published[:10] if published else ""
+
+         # Date filter
+         if after and pub_date < after:
+             continue
+
+         papers.append({
+             "title": title,
+             "authors": authors[:5],  # Cap at 5 authors
+             "abstract": abstract[:500],  # Cap abstract length
+             "date": pub_date,
+             "url": arxiv_url,
+             "pdf": pdf_url,
+             "categories": cats[:5],
+             "source": "arxiv",
+         })
+
+     return papers
+
+
+ def main() -> int:
+     parser = argparse.ArgumentParser(description="Search arxiv for papers")
+     parser.add_argument("query", help="Search query")
+     parser.add_argument("--after", help="Only papers after this date (YYYY-MM-DD)")
+     parser.add_argument("--categories", nargs="*", help="arxiv categories (e.g. cs.AI cs.CL)")
+     parser.add_argument("--limit", type=int, default=20, help="Max results")
+     args = parser.parse_args()
+
+     papers = search_arxiv(
+         query=args.query,
+         after=args.after,
+         categories=args.categories,
+         limit=args.limit,
+     )
+
+     print(json.dumps({"papers": papers, "count": len(papers), "query": args.query}))
+     return 0
+
+
+ if __name__ == "__main__":
+     raise SystemExit(main())
@@ -0,0 +1,48 @@
+ # Research Program
+
+ > Customize this file for your project. Add your topics, adjust the date window, define what "actionable" means for you.
+
+ ## Date Window
+ **After:** 2025-10-01
+ **Before:** 2026-12-31
+
+ ## Active Topics
+
+ > Replace these with your own research interests. Each topic should be specific enough to produce useful search results.
+
+ ### 1. Example: Inference-time compute scaling
+ - Best-of-N rejection sampling
+ - Tree of thought / MCTS for LLMs
+ - Compute-optimal allocation
+ - Extended thinking for complex tasks
+
+ ### 2. Example: LLM-as-Judge calibration
+ - Scoring bias in LLM judges
+ - Pairwise vs absolute scoring reliability
+ - Multi-criteria rubric design
+ - Position bias, length bias, verbosity bias
+
+ ### 3. Example: Self-improving AI systems
+ - Curiosity-driven RL (anti-mode-collapse)
+ - Verbalized sampling for diversity
+ - Agent self-reflection and metacognition
+ - Keep/revert experiment loops
+
+ ## Scoring Criteria
+
+ | Criterion | Weight | What it means |
+ |-----------|--------|---------------|
+ | Relevance | 3x | Directly applies to one of your active topics |
+ | Actionability | 3x | Can you BUILD something with this using API access only (no fine-tuning)? |
+ | Recency | 2x | Published within your date window |
+ | Novelty | 2x | New technique, not incremental on known work |
+
+ **Total = (relevance * 3 + actionability * 3 + recency * 2 + novelty * 2) / 10**
+
+ ## Preferences
+
+ - Papers with code > papers without
+ - Inference-time techniques > training-required techniques
+ - Applied results > theoretical frameworks
+ - Concrete numbers > vague claims
+ - Short papers that say one thing well > long surveys
@@ -0,0 +1,6 @@
+ timestamp	topic	papers_found	top_score	top_paper	source_breakdown
+ 2026-04-13T11:50:00Z	rl-creative-writing+self-improvement+story-coherence	30	9.2	R2-Write (arxiv:2604.03004)	arxiv:30 scholar:0 web:0
+ 2026-04-13T17:50:00Z	rubric-refinement+scene-rewriting	20	9.4	RRD Rubric Refinement (arxiv:2602.05125)	arxiv:10 web:10
+ 2026-04-13T23:10:00Z	sensory-language+embodiment	12	8.6	Zero Body Problem (arxiv:2504.06393)	web:12
+ 2026-04-14T04:10:00Z	scorer-variance+judge-consistency	20	9.0	Efficient Noisy LLM Judge (arxiv:2601.05420)	web:20
+ 2026-04-14T05:00:00Z	micro-gesture+embodied-fiction	10	6.0	none-actionable	web:10
@@ -0,0 +1,154 @@
+ #!/usr/bin/env python3
+ """
+ Semantic Scholar API search — returns structured JSON for papers.
+
+ Uses the Semantic Scholar Academic Graph API (free, no key required for basic use,
+ rate limited to 100 requests/5 min without key).
+
+ Usage:
+     python3 scholar_search.py "reinforcement learning creative writing" --after 2025-10-01 --limit 20
+     python3 scholar_search.py "LLM self-play" --min-citations 5
+ """
+
+ from __future__ import annotations
+
+ import argparse
+ import json
+ import sys
+ import time
+ import urllib.error
+ import urllib.parse
+ import urllib.request
+
+
+ S2_API = "https://api.semanticscholar.org/graph/v1/paper/search"
+ S2_FIELDS = "title,authors,abstract,year,publicationDate,externalIds,citationCount,venue,openAccessPdf,url"
+
+
+ def search_scholar(
+     query: str,
+     after: str | None = None,
+     limit: int = 20,
+     min_citations: int = 0,
+ ) -> list[dict]:
+     """Search Semantic Scholar and return structured results."""
+
+     # Build year filter
+     year_filter = ""
+     if after:
+         start_year = after[:4]
+         year_filter = f"{start_year}-"
+
+     params = {
+         "query": query,
+         "limit": min(limit, 100),
+         "fields": S2_FIELDS,
+     }
+     if year_filter:
+         params["year"] = year_filter
+
+     url = f"{S2_API}?{urllib.parse.urlencode(params)}"
+
+     try:
+         req = urllib.request.Request(url, headers={
+             "User-Agent": "AtrisResearch/1.0",
+             "Accept": "application/json",
+         })
+         with urllib.request.urlopen(req, timeout=30) as resp:
+             data = json.loads(resp.read().decode("utf-8"))
+     except urllib.error.HTTPError as e:
+         if e.code == 429:
+             # Rate limited — wait and retry once
+             time.sleep(5)
+             try:
+                 with urllib.request.urlopen(req, timeout=30) as resp:
+                     data = json.loads(resp.read().decode("utf-8"))
+             except Exception as e2:
+                 print(json.dumps({"error": f"Rate limited: {e2}", "papers": []}))
+                 sys.exit(1)
+         else:
+             print(json.dumps({"error": f"HTTP {e.code}: {e.reason}", "papers": []}))
+             sys.exit(1)
+     except Exception as e:
+         print(json.dumps({"error": str(e), "papers": []}))
+         sys.exit(1)
+
+     results = data.get("data", [])
+     papers = []
+
+     for item in results:
+         if not item:
+             continue
+
+         title = (item.get("title") or "").strip()
+         if not title:
+             continue
+
+         # Authors
+         authors = []
+         for author in (item.get("authors") or [])[:5]:
+             name = author.get("name", "")
+             if name:
+                 authors.append(name)
+
+         abstract = (item.get("abstract") or "")[:500]
+         pub_date = item.get("publicationDate") or ""
+         year = item.get("year") or ""
+         citations = item.get("citationCount") or 0
+         venue = item.get("venue") or ""
+
+         # URL
+         paper_url = item.get("url") or ""
+         external_ids = item.get("externalIds") or {}
+         arxiv_id = external_ids.get("ArXiv")
+         if arxiv_id:
+             paper_url = f"https://arxiv.org/abs/{arxiv_id}"
+
+         # PDF
+         pdf_info = item.get("openAccessPdf") or {}
+         pdf_url = pdf_info.get("url") or ""
+
+         # Date filter
+         date_str = pub_date[:10] if pub_date else (str(year) if year else "")
+         if after and date_str and date_str < after:
+             continue
+
+         # Citation filter
+         if citations < min_citations:
+             continue
+
+         papers.append({
+             "title": title,
+             "authors": authors,
+             "abstract": abstract,
+             "date": date_str,
+             "url": paper_url,
+             "pdf": pdf_url,
+             "citations": citations,
+             "venue": venue,
+             "source": "semantic_scholar",
+         })
+
+     return papers
+
+
+ def main() -> int:
+     parser = argparse.ArgumentParser(description="Search Semantic Scholar for papers")
+     parser.add_argument("query", help="Search query")
+     parser.add_argument("--after", help="Only papers after this date (YYYY-MM-DD)")
+     parser.add_argument("--limit", type=int, default=20, help="Max results")
+     parser.add_argument("--min-citations", type=int, default=0, help="Minimum citation count")
+     args = parser.parse_args()
+
+     papers = search_scholar(
+         query=args.query,
+         after=args.after,
+         limit=args.limit,
+         min_citations=args.min_citations,
+     )
+
+     print(json.dumps({"papers": papers, "count": len(papers), "query": args.query}))
+     return 0
+
+
+ if __name__ == "__main__":
+     raise SystemExit(main())
@@ -1,76 +1,89 @@
  ---
  name: tidy
- description: "Workspace maintenance and knowledge hygiene. Finds stale docs, broken refs, abandoned tasks, and fixes them. Use when things feel messy or you want the system to clean itself up. Triggers on: tidy, clean up, maintenance, lint, health check, freshen up."
- version: 1.1.0
+ description: "Workspace maintenance and knowledge hygiene. Finds stale docs, broken refs, abandoned tasks, ghost names, duplicate scorecards, and fixes them. Use when things feel messy or you want the system to clean itself up. Triggers on: tidy, clean up, maintenance, lint, health check, freshen up, prune."
+ version: 2.0.0
  tags:
  - maintenance
  - knowledge
  - hygiene
- - docs
+ - prune
  ---
 
  # /tidy
 
- Finds what's rotting in your workspace and fixes it. Stale pages, broken references, abandoned tasks, outdated docs.
+ Finds what's rotting in your workspace and fixes it. Not just broken refs — ghost names, stale lessons, duplicate data, language drift, dead code.
 
  ## When to use
 
  - "Things feel messy"
  - "Clean this up"
+ - "Prune"
  - After a big refactor when docs have drifted
  - Periodically, to keep the knowledge base honest
- - When you suspect MAP.md or wiki pages are out of date
+ - Before a release, to make sure everything is true
 
  ## On invoke
 
  1. Run `atris clean --dry-run` silently. Collect results.
- 2. Read atris/MAP.md, atris/TODO.md, and today's journal for context.
+ 2. Read atris/MAP.md, atris/TODO.md, atris/lessons.md, and today's journal.
  3. Scan for these problems (in priority order):
 
  ### What to look for
 
+ **Ghost names** — terms that don't match the current identity. Check package.json `name` and `description`, README title, and PERSONA. Grep the codebase for old names (e.g., "atrisDev" when the product is "atris"). Flag any user-facing string that uses a dead name.
+
  **Stale wiki pages** — pages with `last_compiled` frontmatter where the source files have been modified since. The page content may be wrong.
 
  **Broken MAP.md references** — file:line refs that point to code that moved or was deleted. The auto-healer fixes what it can; report what it can't.
 
+ **Stale lessons** — lessons about bugs that have since been fixed. Grep the named files for the bug pattern. If it's gone, tag the lesson `[resolved]`.
+
+ **Duplicate scorecards** — same slug appearing twice in scorecards.md. Keep the one with more data, delete the other.
+
  **Abandoned tasks** — in-progress tasks claimed more than 3 days ago. Either finish them, re-scope them, or delete them.
 
  **Orphan docs** — markdown pages under atris/ that nothing links to. They're invisible and probably stale.
 
- **Stale MAP.md** — if MAP.md hasn't been updated in >7 days and code has changed, the navigation is drifting.
+ **Dead exports** — functions in module.exports that nothing imports. They add surface area for no reason.
+
+ **Stale TODO items** — tasks older than 14 days that haven't moved. Run `isStillTrue` on each. Tag stale ones `[unverified]`.
 
  **Empty sections** — TODO.md sections with placeholder text like "(empty)" or "(clean)".
 
  4. Present findings as a numbered list, sorted by impact. For each:
- - What's wrong
- - Why it matters
- - What you'd do to fix it
+ - What's wrong (specific file, line, or term)
+ - Why it matters (one sentence)
+ - What you'd do to fix it (one sentence)
 
  5. Ask: "want me to fix these? all / pick numbers / skip"
 
  6. Fix what they approve. For each fix:
  - Make the change
  - Update last_compiled if touching wiki pages
+ - Run tests after each fix
  - Commit with a clear message
 
- 7. After all fixes, run `atris clean` one more time to verify.
+ 7. After all fixes, run `atris clean` one more time to verify 0 issues.
 
  ## Example
 
  ```
- Found 4 things to improve:
+ found 5 things to tidy:
+
+ 1. "atrisDev" appears 3 times in user-facing output (bin/atris.js:202, :1545).
+ product name is "atris" now. fix: replace with current name.
 
- 1. MAP.md has 11 broken refs: 3 files moved, 8 functions renamed.
- These make navigation wrong. I can auto-heal most of them.
+ 2. lessons.md has 2 lessons about bugs that are already fixed.
+ they'll mislead the next horizon pick. fix: tag [resolved].
 
- 2. atris/TODO.md has a task claimed 26 days ago by Executor.
- It's blocking the in-progress slot. Should delete or re-scope.
+ 3. scorecards.md has a duplicate entry for harden-rl-loop.
+ policy will double-count that endgame. fix: keep the better one.
 
- 3. MAP.md hasn't been updated in 25 days.
- Code has changed; the map is drifting from reality.
+ 4. MAP.md has 4 refs that can't be auto-healed.
+ navigation is wrong for those symbols. fix: update manually.
 
- 4. 2 empty sections in TODO.md.
- Just noise. Can clean them out.
+ 5. TODO.md has a task from 12 days ago that nobody touched.
+ it's noise. fix: tag [unverified] or delete.
 
  want me to fix these? all / pick numbers / skip
  ```
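The ghost-name scan this skill describes can be sketched as a plain substring walk. The dead-name list and extension set here are illustrative assumptions, not the skill's actual implementation:

```python
import os

DEAD_NAMES = ["atrisDev"]        # hypothetical: names the product no longer uses
SCAN_EXTS = (".js", ".md", ".json")

def find_ghost_names(root: str) -> list[tuple[str, int, str]]:
    """Return (path, line_number, dead_name) for every occurrence under root."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for fn in filenames:
            if not fn.endswith(SCAN_EXTS):
                continue
            path = os.path.join(dirpath, fn)
            try:
                with open(path, encoding="utf-8") as f:
                    for i, line in enumerate(f, 1):
                        for name in DEAD_NAMES:
                            if name in line:
                                hits.append((path, i, name))
            except OSError:
                continue  # unreadable file: skip rather than abort the scan
    return hits
```

Each hit maps directly onto the example report above ("appears N times in user-facing output (file:line)").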
@@ -80,5 +93,7 @@ want me to fix these? all / pick numbers / skip
  - Never delete user content without asking.
  - Always show what you found before fixing.
  - Commit fixes in small, clear commits (one per category).
+ - Run tests after every fix. If tests break, revert and report.
  - Update last_compiled frontmatter when recompiling wiki pages.
- - Run atris clean at the end to verify everything is actually fixed.
+ - Run atris clean at the end to verify 0 issues remain.
+ - Ghost names are highest priority. The workspace must speak one language.
@@ -14,3 +14,5 @@ tools: []
  ---
 
  # Insert persona, workflow, and rules below
+
+ > **Soul:** Read `SOUL.md` alongside this file. MEMBER.md is what you do. SOUL.md is who you are.