atris 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,11 @@
+ {
+   "permissions": {
+     "allow": [
+       "Bash(chmod:*)",
+       "Bash(npm link)",
+       "Bash(atris:*)"
+     ],
+     "deny": [],
+     "ask": []
+   }
+ }
package/README.md ADDED
@@ -0,0 +1,3 @@
+ # ATRIS
+
+ Drop `atris.md` into any codebase and have your favorite coding agent (Claude Code, OpenAI Codex, etc.) read it to scaffold a fully instrumented, AI-navigable system in under 10 minutes. The agent will generate CODE_MAP.md, three specialized agent specs, and TASK_CONTEXTS.md, transforming your project into a self-documenting workspace with exact file:line references for every feature.
package/atris.md ADDED
@@ -0,0 +1,270 @@
+ # atris.md — Universal Codebase Instrumentation Blueprint
+
+ > **One markdown file. Drop it anywhere. Agents scaffold a fully instrumented codebase in <10 minutes.**
+
+ This spec defines how to transform any codebase into a self-documenting, AI-navigable system. Five artifacts (CODE_MAP.md, 3 agent specs, TASK_CONTEXTS.md) + autonomous agents = 10x faster collaboration.
+
+ **See [`ATRIS_NOTES.md`](./ATRIS_NOTES.md) for vision, roadmap, and future phases (sync, sessions, agent crews).**
+
+ ---
+
+ ## Phase 1: Generate CODE_MAP.md (Exact File Context)
+
+ **Goal:** Create a single-source-of-truth navigation guide that agents reference for all architecture questions.
+
+ **Why This Matters:**
+ - Agents waste time re-learning codebase structure on each task
+ - CODE_MAP.md eliminates friction: one grep-friendly index that's always accurate
+ - All agents reference the same truth, preventing contradictory guidance
+
+ **Agent Instructions:**
+
+ 1. **Scan the project root** (ignore: `node_modules/`, `.git/`, `dist/`, `build/`, `.DS_Store`, `*.log`)
+
+ 2. **For each major directory** (depth 1-2 levels), extract:
+    - Purpose (1 sentence: why does this directory exist?)
+    - Key files with line counts (e.g., `auth.ts: 200 lines`)
+    - Search accelerators (ripgrep patterns for fast navigation)
+
+ 3. **Create `/CODE_MAP.md`** with these sections:
+    - **Quick Reference Index** (top) — Grep-friendly shortcuts (e.g., `CHAT:BACKEND -> rg -n "def quick_chat" backend/`; see the sketch below)
+    - **By-Feature** — Chat, files, auth, billing, etc. (answer: "where is feature X?")
+    - **By-Concern** — State management, API layer, UI system, etc. (answer: "where is concern Y?")
+    - **Critical Files** — Files >10KB or >100 lines of logic = high-impact (mark as ⭐)
+    - **Entry Points** — 3-5 entry points clearly marked (landing page, dashboard, API routes, etc.)
+
+ 4. **Quality Checklist Before Outputting:**
+    - [ ] Can I run `rg -l "TODO|FIXME"` and navigate to each via line numbers in CODE_MAP?
+    - [ ] Does every major file have a one-liner explaining its purpose?
+    - [ ] Are there 10+ ripgrep patterns I can use to navigate quickly?
+    - [ ] Can a new developer answer "where is X?" in <30 seconds using this map?
+
+ 5. **Output:** `/CODE_MAP.md` (target: 500-800 lines for large codebases; scale to project size)
+
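+ For example, the Quick Reference Index entries double as runnable shortcuts. A minimal sketch, reusing the illustrative paths that appear elsewhere in this spec (substitute your own):
+
+ ```bash
+ # Illustrative search accelerators from a CODE_MAP.md Quick Reference Index.
+ # Each keyword maps to an exact ripgrep command (paths are placeholders).
+ rg -n "def quick_chat" backend/               # CHAT:BACKEND -> chat request handler
+ rg -n "verifySession" app/auth/middleware.ts  # AUTH:MIDDLEWARE -> session check
+ rg -n "upload" app/api/files/                 # FILES:UPLOAD -> upload route handler
+
+ # Quality-checklist spot check: every file with a TODO/FIXME should be findable in CODE_MAP
+ rg -l "TODO|FIXME"
+ ```
+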
+ ---
+
+ ## Phase 2: Spawn 3 Foundation Agents
+
+ After CODE_MAP.md exists, generate agent specs from CODE_MAP insights. Each agent has explicit guardrails.
+
+ ### Agent 1: **@codebase_AGENT.md**
+
+ - **Role:** Codebase Navigator & Architecture Expert
+ - **Activation Prompt:**
+ ```
+ You are the codebase expert. Your sole job is answering "where is X?" with precision.
+
+ Rules:
+ 1. ALWAYS start with: "According to CODE_MAP.md, [item] is located in..."
+ 2. ALWAYS cite file:line references (e.g., `app/auth/middleware.ts:15-45`)
+ 3. Explain data flows end-to-end (frontend → backend → database)
+ 4. Identify coupling points and architecture violations
+ 5. Guide developers to the right file in <5 clicks
+
+ DO NOT:
+ - Execute code changes or file modifications
+ - Make architecture decisions without explicit approval
+ - Assume file locations; always reference CODE_MAP.md
+ ```
+
+ - **Knowledge Base:** CODE_MAP.md, architecture docs, API specs, system design docs
+ - **Success Metric:** Every question answered with exact file:line references, zero guesses
+
+ ### Agent 2: **@task_AGENT.md**
+
+ - **Role:** Context-Aware Task Executor
+ - **Activation Prompt:**
+ ```
+ You are the task executor. When given a task, extract exact context and execute step-by-step.
+
+ Rules:
+ 1. Read CODE_MAP.md first; extract file:line references for all related files
+ 2. Identify ALL files that will be touched (use ripgrep patterns from CODE_MAP)
+ 3. Map dependencies and risk zones
+ 4. Create a concise 4-5 sentence execution plan with:
+    - File paths
+    - Line numbers for modifications
+    - Exact description of changes
+ 5. Execute step-by-step, validating at each stage
+
+ Format: "Task: [name] | Files: [path:line] | Changes: [exact description]"
+
+ DO NOT:
+ - Skip validation steps
+ - Modify files outside the planned scope
+ - Ignore type errors or test failures
+ ```
+
+ - **Knowledge Base:** CODE_MAP.md, TASK_CONTEXTS.md (generated), test suite, type definitions
+ - **Success Metric:** 95% of tasks completed on the first try, with zero regressions
+
+ ### Agent 3: **@validator_AGENT.md**
+
+ - **Role:** Quality Gatekeeper & Architecture Guardian
+ - **Activation Prompt:**
+ ```
+ You are the validator. After ANY change, verify safety and accuracy.
+
+ Rules:
+ 1. Run type-check, lint, tests automatically
+ 2. Verify all file references in CODE_MAP.md still exist and are accurate
+ 3. Update CODE_MAP.md if architecture changed
+ 4. Check for breaking changes or coupling violations
+ 5. Report: "✓ Safe to merge" or "⚠ Risks: [list]"
+
+ ALWAYS cite CODE_MAP.md and explain why changes are safe/risky.
+
+ DO NOT:
+ - Approve changes without running tests
+ - Allow breaking changes silently
+ - Update CODE_MAP.md without explaining what changed
+ ```
+
+ - **Knowledge Base:** CODE_MAP.md, test suite, type definitions, architecture principles
+ - **Success Metric:** Zero undetected breaking changes reach production
+
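+ For example, on a typical Node/TypeScript project (adjust commands to your toolchain), the validator's rules 1-2 might translate to a pass like this:
+
+ ```bash
+ # Rule 1: type-check, lint, and test (assumes a standard Node/TypeScript setup)
+ npx tsc --noEmit && npx eslint . && npm test
+
+ # Rule 2: spot-check that source paths cited in CODE_MAP.md still exist
+ rg -o '[A-Za-z0-9_./-]+\.(ts|tsx|js)' CODE_MAP.md | sort -u | while read -r f; do
+   [ -e "$f" ] || echo "stale reference: $f"
+ done
+ ```
+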
+ ---
+
+ ## Phase 3: Task Context System (TASK_CONTEXTS.md)
+
+ **Goal:** Automatic task extraction with exact file context, so agents never guess.
+
+ **Generated File Format:**
+
+ ```markdown
+ # Task Contexts — Auto-extracted from CODE_MAP.md
+
+ ## Task Template
+ - **Task ID:** T-[AUTO]
+ - **Name:** [Feature/Fix Name]
+ - **Context Files:** [file:line_start-line_end] (from CODE_MAP critical files)
+ - **Execution Plan:**
+   1. [Step 1 with file:line reference]
+   2. [Step 2 with file:line reference]
+   3. [Step 3 with file:line reference]
+ - **Success Criteria:** [Measurable, testable]
+ - **Dependencies:** [Task IDs or external dependencies]
+ - **Risk Level:** [Low/Medium/High + reasoning]
+
+ ## Example Task (Auto-Generated)
+ - **Task ID:** T-001
+ - **Name:** Add authentication to file upload
+ - **Context Files:**
+   - `app/api/files/upload/route.ts:1-50` (handler)
+   - `app/auth/middleware.ts:15-45` (auth check)
+   - `types/auth.ts:8-20` (auth types)
+ - **Execution Plan:**
+   1. Add `verifySession()` call to upload handler (line 20)
+   2. Return 401 if no session (add lines 21-23)
+   3. Add auth test to `__tests__/upload.test.ts:112-125`
+   4. Run `npm run test` and verify all pass
+ - **Success Criteria:** Upload rejects unauthenticated requests; all tests pass; CODE_MAP.md updated
+ - **Dependencies:** None
+ - **Risk Level:** Low (isolated auth check, no cross-module impact)
+ ```
+
+ **Agent Instructions:**
+
+ 1. After CODE_MAP.md is generated, scan for the following (a command sketch follows this list):
+    - Incomplete features (TODOs, FIXMEs, marked with line numbers)
+    - High-risk files (>500 lines, multiple imports, touching shared state)
+    - Cross-module dependencies that could break easily
+
+ 2. Auto-generate 5-10 canonical tasks with exact file:line references
+    - Include both quick wins (low-risk) and strategic work (high-impact)
+    - Map all dependencies explicitly
+
+ 3. Output: `/TASK_CONTEXTS.md` (maintained and evolved as the project changes)
+
+ 4. On each CODE_MAP.md update, regenerate TASK_CONTEXTS.md to reflect the new state
+
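+ A minimal sketch of that scan for a TypeScript project (globs and the 500-line threshold are illustrative; adjust to your stack):
+
+ ```bash
+ # Incomplete features, with line numbers ready to paste into Context Files entries
+ rg -n "TODO|FIXME" --glob '!node_modules/**'
+
+ # High-risk files: source files over 500 lines (excluding dependencies)
+ find . -name '*.ts' -not -path './node_modules/*' -print0 | xargs -0 wc -l | awk '$1 > 500 && $2 != "total"'
+ ```
+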
+ ---
+
+ ## Phase 4: Activation & Handoff
+
+ **When All Five Required Artifacts Exist:**
+
+ - ✅ `CODE_MAP.md` (navigation guide)
+ - ✅ `@codebase_AGENT.md` (question answerer)
+ - ✅ `@task_AGENT.md` (executor)
+ - ✅ `@validator_AGENT.md` (gatekeeper)
+ - ✅ `TASK_CONTEXTS.md` (task bank)
+
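+ A quick existence check for those five artifacts (filenames exactly as listed above; run from the project root):
+
+ ```bash
+ # Fail loudly if any required artifact is missing before activating agent behavior
+ for f in CODE_MAP.md @codebase_AGENT.md @task_AGENT.md @validator_AGENT.md TASK_CONTEXTS.md; do
+   [ -f "$f" ] || echo "missing artifact: $f"
+ done
+ ```
+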
+ **Agent Behavior Activates Automatically:**
+
+ | Trigger | Agent | Action |
+ |---------|-------|--------|
+ | "Where is X?" | @codebase_AGENT | Answers with CODE_MAP.md:line reference |
+ | "Do task Y" | @task_AGENT | Extracts context, plans execution, cites file:line |
+ | After change | @validator_AGENT | Checks validity, updates docs, blocks unsafe changes |
+ | New agent joins | @codebase_AGENT | Reads CODE_MAP.md, immediately productive (no ramp-up) |
+
+ **Validation Checklist:**
+
+ - [ ] All three agents can read and cite CODE_MAP.md
+ - [ ] @codebase_AGENT answers 5 test questions with file:line accuracy
+ - [ ] @task_AGENT completes a sample task without regressions
+ - [ ] @validator_AGENT successfully detects and blocks a breaking change
+ - [ ] CODE_MAP.md is accurate and stays in sync with code
+
+ ---
+
+ ## Phase 5: Future Roadmap (Vision)
+
+ **See [`ATRIS_NOTES.md`](./ATRIS_NOTES.md) for the full roadmap. Preview:**
+
+ - **Phase 5a: Sync** — Local + cloud markdown sync, enabling offline editing and asynchronous agent work
+ - **Phase 5b: Sessions** — Step-by-step markdown workflows with `!status`, `!result` tags for interactive collaboration
+ - **Phase 5c: Crew Orchestration** — Multi-agent coordination (codebase expert → executor → validator) from markdown config
+
+ ---
+
+ ## Why This Works
+
+ 1. **CODE_MAP = Single Source of Truth** — All agents reference one navigation guide; no contradictions
+ 2. **Exact File:Line Context** — No guessing; every answer is pinpoint accurate
+ 3. **Self-Validating** — @validator_AGENT keeps CODE_MAP and artifacts fresh automatically
+ 4. **Scalable to Any Codebase** — Works for monorepos, microservices, solo projects, legacy systems
+ 5. **Agent Handoff** — A new agent joins, reads CODE_MAP, and is immediately productive (no ramp-up time)
+ 6. **Offline + Async Ready** — Markdown files work offline; sync on schedule (future Phase 5a)
+
+ ---
+
+ ## Implementation Checklist
+
+ - [ ] **Phase 1:** Generate CODE_MAP.md on a fresh codebase (10 min)
+ - [ ] **Phase 1 Validation:** Run the ripgrep shortcuts from CODE_MAP; verify they all work (5 min)
+ - [ ] **Phase 2:** Spawn 3 agent specs with activation prompts (5 min)
+ - [ ] **Phase 3:** Auto-generate TASK_CONTEXTS.md from CODE_MAP insights (10 min)
+ - [ ] **Phase 4:** Test the system (ask @codebase_AGENT a question, watch it cite CODE_MAP:line) (10 min)
+ - [ ] **Ongoing:** Each CODE_MAP update triggers a TASK_CONTEXTS refresh
+
+ **Total time to full instrumentation: ~40 minutes**
+
+ ---
+
+ ## Quick Start
+
+ ```bash
+ # 1. Copy atris.md to your project root
+ cp atris.md /path/to/your/project/
+
+ # 2. Hand atris.md to any agent with this prompt:
+ #    "Read atris.md. Execute Phases 1-4 to scaffold this codebase."
+
+ # 3. The agent generates the 5 artifacts
+ # 4. Your codebase is now fully instrumented for AI collaboration
+
+ # Future: atris-cli automates this
+ # atris init
+ ```
+
+ ---
+
+ **Status:** Spec finalized. When deployed to a fresh project, agents will:
+ 1. Map the codebase in <10 minutes
+ 2. Answer questions with file:line precision
+ 3. Execute tasks with full context
+ 4. Maintain docs as code evolves
+
+ *Drop atris.md anywhere. Agents follow the blueprint. Codebase becomes fully instrumented for AI collaboration.*
package/bin/atris.js ADDED
@@ -0,0 +1,49 @@
+ #!/usr/bin/env node
+
+ const fs = require('fs');
+ const path = require('path');
+
+ const command = process.argv[2];
+
+ if (!command) {
+   console.log('Usage: atris <command>');
+   console.log('Commands:');
+   console.log('  init - Initialize ATRIS in current project');
+   process.exit(0);
+ }
+
+ // Command handlers
+ if (command === 'init') {
+   initAtris();
+ } else {
+   console.log(`Unknown command: ${command}`);
+   console.log('Run "atris" without arguments to see available commands');
+   process.exit(1);
+ }
+
+ function initAtris() {
+   const targetDir = path.join(process.cwd(), 'atris');
+   const sourceFile = path.join(__dirname, '..', 'atris.md');
+   const targetFile = path.join(targetDir, 'atris.md');
+
+   // Create atris/ folder if it doesn't exist
+   if (!fs.existsSync(targetDir)) {
+     fs.mkdirSync(targetDir, { recursive: true });
+     console.log('✓ Created atris/ folder');
+   } else {
+     console.log('✓ atris/ folder already exists');
+   }
+
+   // Copy atris.md to the folder
+   if (fs.existsSync(sourceFile)) {
+     fs.copyFileSync(sourceFile, targetFile);
+     console.log('✓ Copied atris.md to atris/ folder');
+     console.log('\nATRIS initialized! Next steps:');
+     console.log('1. Read atris/atris.md for instructions');
+     console.log('2. Paste the content to Claude or your favorite AI agent');
+     console.log('3. The agent will generate CODE_MAP.md and specialized agents');
+   } else {
+     console.error('✗ Error: atris.md not found in package');
+     process.exit(1);
+   }
+ }
@@ -0,0 +1,112 @@
+ [Binary file: a ReportLab-generated PDF (3 pages, created 2025-10-16). Raw PDF objects and compressed content streams omitted; not human-readable.]
package/package.json ADDED
@@ -0,0 +1,19 @@
+ {
+   "name": "atris",
+   "version": "1.0.0",
+   "description": "Universal codebase instrumentation for AI agents",
+   "main": "bin/atris.js",
+   "bin": {
+     "atris": "./bin/atris.js"
+   },
+   "scripts": {
+     "test": "echo \"Error: no test specified\" && exit 1"
+   },
+   "keywords": ["ai", "agents", "codebase", "documentation", "automation"],
+   "author": "Keshav Rao (atrislabs)",
+   "license": "MIT",
+   "repository": {
+     "type": "git",
+     "url": "https://github.com/atrislabs/atris.md.git"
+   }
+ }
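
Taken together, `package.json` and `bin/atris.js` above imply the following install-and-run flow; a minimal sketch, assuming the package is installed from npm under the name `atris`:

```bash
# Install the CLI globally (or run it ad hoc with npx) and scaffold the atris/ folder
npm install -g atris
atris init        # copies the bundled atris.md into ./atris/ in the current project
# Then hand atris/atris.md to your coding agent with the Phase 1-4 prompt from the Quick Start
```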