nnn-agent-0.1.0.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,193 @@
+ Metadata-Version: 2.4
+ Name: nnn-agent
+ Version: 0.1.0
+ Summary: Multi-agent coding system powered by local LLMs
+ Author: srivtx
+ License: MIT
+ Project-URL: Homepage, https://github.com/srivtx/nnn
+ Project-URL: Repository, https://github.com/srivtx/nnn
+ Project-URL: Issues, https://github.com/srivtx/nnn/issues
+ Keywords: ai,agent,llm,coding,multi-agent,local-llm
+ Requires-Python: >=3.10
+ Description-Content-Type: text/markdown
+ Requires-Dist: openai>=1.0
+ Requires-Dist: httpx
+ Requires-Dist: rich
+
+ # nnn
+
+ ```
+ ╻ ╻╻ ╻╻ ╻
+ ┃┗┫┃┗┫┃┗┫
+ ╹ ╹╹ ╹╹ ╹
+ ```
+
+ A multi-agent coding system that runs on your local machine. Type a task, and a team of AI agents plans, writes, and tests the code for you.
+
+ ```
+ > create a snake game with pygame
+
+ 1. Developer  Create workspace/snake_game.py with a complete snake game using pygame
+ 2. BugFixer   Run workspace/snake_game.py with run_command, then fix any errors
+
+ step 1/2 — Developer
+   read_workspace → workspace/
+   write_file     → Wrote 2847 chars to workspace/snake_game.py
+
+ step 2/2 — BugFixer
+   run_command    → (game window opens)
+   ✅ No errors found.
+ ```
+
+ ---
+
+ ## What is this?
+
+ **nnn** is a ~1500-line Python project that builds a complete AI agent system from scratch. It uses [LM Studio](https://lmstudio.ai) to run a local language model and connects 5 specialized AI agents that collaborate to complete coding tasks.
+
+ The agents:
+
+ | Agent           | Job                                       |
+ | --------------- | ----------------------------------------- |
+ | **Architect**   | Designs system structure and writes plans |
+ | **Developer**   | Reads plans and writes working code       |
+ | **BugFixer**    | Runs code, finds errors, and fixes them   |
+ | **Researcher**  | Analyzes existing code in the workspace   |
+ | **WebSearcher** | Searches the internet for documentation   |
+
+ They are not separate programs — they are the same AI model called with different instructions and different tools.
+
+ ---
+
+ ## Quick Start
+
+ ### 1. Install LM Studio
+
+ Download [LM Studio](https://lmstudio.ai), load any model (Qwen 2.5 Coder 7B or higher recommended), and start the local server.
+
+ ### 2. Clone and install
+
+ ```bash
+ git clone https://github.com/srivtx/nnn.git
+ cd nnn
+ python3 -m venv venv
+ source venv/bin/activate
+ pip install -e .
+ ```
+
+ ### 3. Run
+
+ ```bash
+ nnn
+ ```
+
+ That's it. Type a task and press Enter.
+
+ **One-shot mode:**
+
+ ```bash
+ nnn "create a flask API with user login"
+ ```
+
+ ---
+
+ ## How It Works
+
+ ```
+ You type a task
+       ↓
+ Orchestrator asks the LLM: "Break this into steps and assign agents"
+       ↓
+ LLM returns a JSON plan: Developer → BugFixer
+       ↓
+ Each agent runs in order, using tools (read/write files, run commands)
+       ↓
+ You get working code in the workspace/ folder
+ ```
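
The plan step of the flow above is ordinary JSON handling. The README does not show the exact schema orchestrator.py uses, so the field names here ("agent", "task") are assumptions for illustration:

```python
import json

# Hypothetical plan text, as an LLM might return it for the snake-game task.
raw = """[
  {"agent": "Developer", "task": "Create workspace/snake_game.py"},
  {"agent": "BugFixer", "task": "Run workspace/snake_game.py and fix errors"}
]"""

def parse_plan(text: str) -> list:
    """Validate the LLM's plan: it must be a list of steps, each naming
    an agent and a task. Malformed plans fail loudly instead of silently."""
    steps = json.loads(text)
    assert isinstance(steps, list), "plan must be a JSON list"
    for step in steps:
        assert {"agent", "task"} <= step.keys(), f"malformed step: {step}"
    return steps

plan = parse_plan(raw)
```

Validating before dispatch matters with small local models, which return malformed JSON more often than hosted ones.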
+
+ There is only **one AI model** running. Each "agent" is just the same model called with a different system prompt and a different set of tools. For a deeper explanation, see [docs/HOW_IT_WORKS.md](docs/HOW_IT_WORKS.md).
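
Since every agent is one HTTP call to the same OpenAI-compatible endpoint, the core mechanism fits in a few lines of stdlib Python. This is a sketch of the idea, not the project's llm.py; the model id and helper names are illustrative:

```python
import json
from urllib import request

LM_BASE_URL = "http://localhost:1234/v1"

def agent_payload(system_prompt, user_task, tools=None):
    """Build one chat-completion request. An 'agent' is nothing more than
    a different system prompt (and tool list) on the same model."""
    payload = {
        "model": "local-model",  # LM Studio serves whichever model is loaded
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_task},
        ],
    }
    if tools:
        payload["tools"] = tools
    return payload

def call_llm(payload):
    """POST to the OpenAI-compatible endpoint (requires LM Studio running)."""
    req = request.Request(
        f"{LM_BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

developer = agent_payload("You are the Developer. Write working code.", "write hello.py")
bug_fixer = agent_payload("You are the BugFixer. Run code and fix errors.", "write hello.py")
```

Same model, same endpoint, same user task: only the system message (and tool set) differs between the two payloads.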
+
+ ---
+
+ ## Learn How to Build This
+
+ This project comes with **14 step-by-step lessons** that teach you how to build the entire system from scratch. Each lesson adds one concept, and you test it before moving on.
+
+ | #   | Lesson                                                  | What you build                                             |
+ | --- | ------------------------------------------------------- | ---------------------------------------------------------- |
+ | 01  | [Talking to an AI Model](docs/01_talking_to_ai.md)      | Send a message to LM Studio and get a reply                |
+ | 02  | [Giving the AI a Role](docs/02_giving_the_ai_a_role.md) | System prompts, multi-turn conversation                    |
+ | 03  | [Tool Calling](docs/03_tool_calling.md)                 | AI calls real Python functions (read files, run commands)  |
+ | 04  | [Your First Agent Class](docs/04_your_first_agent.md)   | Package role + tools + loop into a reusable Agent          |
+ | 05  | [Multiple Agents](docs/05_multiple_agents.md)           | 2 agents → 3 agents, passing context between them          |
+ | 06  | [The Orchestrator](docs/06_the_orchestrator.md)         | AI decides which agents to run and in what order           |
+ | 07  | [The Full System](docs/07_the_full_system.md)           | Add remaining tools + WebSearcher + Researcher             |
+ | 08  | [Debugging Failures](docs/08_debugging_failures.md)     | Real failure cases and how to fix them                     |
+ | 09  | [Speed & Context](docs/09_speed_and_context.md)         | 6 optimizations: caching, streaming, parallel execution    |
+ | 10  | [Surgical Editing](docs/10_surgical_editing.md)         | Line-by-line editing tools (edit_lines, insert_code)       |
+ | 11  | [Small-Model Safety](docs/11_small_model_safety.md)     | Safety nets for 3-4B models that fail often                |
+ | 12  | [CLI & Packaging](docs/12_cli_and_packaging.md)         | Turn it into an installable `nnn` command                  |
+ | 13  | [Project Intelligence](docs/13_project_intelligence.md) | Detect language/runtime, auto-install deps, syntax check   |
+ | 14  | [Loop Hardening](docs/14_loop_hardening.md)             | Catch stealth errors, duplicate calls, smarter bailout     |
+
+ Start at Lesson 01. Each lesson builds on the previous one. By Lesson 14 you have the complete system.
+
+ **New to programming?** Start with [docs/BEGINNER_GUIDE.md](docs/BEGINNER_GUIDE.md) instead — it's a gentler introduction.
+
+ ---
+
+ ## Project Structure
+
+ ```
+ nnn/
+ ├── main.py          ← entry point (REPL + one-shot mode)
+ ├── orchestrator.py  ← plans tasks and delegates to agents
+ ├── agent.py         ← base Agent class
+ ├── llm.py           ← LLM bridge (tool-calling loop, streaming, caching)
+ ├── tools.py         ← all tool implementations (read/write files, run commands, web search)
+ ├── config.py        ← settings (server URL, token limits, temperatures)
+ ├── agents/
+ │   ├── architect.py    ← system designer
+ │   ├── developer.py    ← code writer (with rescue safety nets)
+ │   ├── bug_fixer.py    ← run → diagnose → fix → verify
+ │   ├── researcher.py   ← code analyzer
+ │   └── web_searcher.py ← internet search
+ ├── workspace/       ← where agents write code (shared workspace)
+ ├── docs/            ← 14 step-by-step lessons
+ ├── pyproject.toml   ← package config
+ └── requirements.txt ← dependencies
+ ```
+
+ ---
+
+ ## Requirements
+
+ - Python 3.10+
+ - [LM Studio](https://lmstudio.ai) with any loaded model
+ - Recommended: Qwen 2.5 Coder 7B+ or Qwen 3 8B+ for best results
+ - Works with 3-4B models too (with safety nets — see Lesson 11)
+
+ ---
+
+ ## Configuration
+
+ All settings are in `config.py`:
+
+ ```python
+ LM_BASE_URL = "http://localhost:1234/v1"  # LM Studio server
+ MAX_TOKENS = 8192                         # max response length
+ TEMPERATURE_CODE = 0.3                    # lower = more reliable tool use
+ PARALLEL_AGENTS = True                    # run independent agents concurrently
+ ```
+
+ Override with environment variables:
+
+ ```bash
+ LM_BASE_URL=http://192.168.1.5:1234/v1 nnn "build a todo app"
+ ```
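
A common way to implement such overrides is a per-setting fallback on `os.environ`. The real config.py is not shown in this package listing, so treat this as a sketch that merely matches the setting names above:

```python
import os

# Each setting uses its default unless an environment variable overrides it.
LM_BASE_URL = os.environ.get("LM_BASE_URL", "http://localhost:1234/v1")
MAX_TOKENS = int(os.environ.get("MAX_TOKENS", "8192"))
TEMPERATURE_CODE = float(os.environ.get("TEMPERATURE_CODE", "0.3"))
# Env vars are strings, so booleans need explicit parsing.
PARALLEL_AGENTS = os.environ.get("PARALLEL_AGENTS", "1").lower() not in ("0", "false", "no")
```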
+
+ ---
+
+ ## License
+
+ MIT
@@ -0,0 +1,82 @@
+ """
+ Base Agent class.
+ Each agent has a name, a role (system prompt), and a set of tools it can use.
+
+ SPEED OPTIMIZATION — KV Cache Prefix Sharing:
+ All agents share a common system-prompt prefix (from config.SHARED_SYSTEM_PREFIX).
+ This means when LM Studio processes Agent B after Agent A, it can reuse the
+ KV cache for the shared prefix tokens instead of recomputing them from scratch.
+ On a 7B model, this saves ~200-500ms per agent switch.
+ """
+
+ from rich.console import Console
+
+ import llm
+ import config
+ import tools as tool_module
+
+ console = Console()
+
+
+ class Agent:
+     """A single AI agent with a specific role and tool set."""
+
+     def __init__(self, name: str, role: str, tool_names: list[str]):
+         """
+         Args:
+             name: Display name (e.g. "Architect")
+             role: System prompt describing the agent's persona and job
+             tool_names: List of tool names this agent can use (from tools.py)
+         """
+         self.name = name
+         # KV cache optimization: prepend the shared prefix so all agents
+         # share the same token prefix → LM Studio reuses the KV cache
+         self.role = f"{config.SHARED_SYSTEM_PREFIX}\n\n{role}"
+         self.tool_names = tool_names
+
+         # Build the tool schemas and function map for this agent
+         self.tool_schemas = [
+             tool_module.TOOL_SCHEMAS[t] for t in tool_names
+             if t in tool_module.TOOL_SCHEMAS
+         ]
+         self.tool_functions = {
+             t: tool_module.TOOL_FUNCTIONS[t] for t in tool_names
+             if t in tool_module.TOOL_FUNCTIONS
+         }
+
+     def run(self, task: str, context: str = "") -> str:
+         """
+         Run this agent on a task.
+
+         Args:
+             task: What the agent should do
+             context: Additional context from previous agents or the workspace
+
+         Returns:
+             The agent's final text response
+         """
+         console.print(f"  [bold]{self.name}[/bold]")
+
+         system_msg = self.role
+         if context:
+             system_msg += f"\n\n--- CONTEXT FROM PREVIOUS WORK ---\n{context}"
+
+         messages = [
+             {"role": "system", "content": system_msg},
+             {"role": "user", "content": task},
+         ]
+
+         # Speed: agents with tools use a smaller token limit than the default 8K.
+         # Code agents use a lower temperature for more reliable tool calling.
+         response = llm.chat(
+             messages=messages,
+             tools=self.tool_schemas if self.tool_schemas else None,
+             tool_functions=self.tool_functions,
+             agent_name=self.name,
+             max_tokens=config.MAX_TOKENS_TOOL_AGENT,
+             temperature=config.TEMPERATURE_CODE if self.tool_schemas else None,
+         )
+
+         console.print()
+
+         return response
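
The schema/function filtering in `__init__` degrades gracefully: unknown tool names are silently dropped, so an agent never advertises a schema it cannot execute. The pattern can be exercised standalone with stub tool tables (the real TOOL_SCHEMAS/TOOL_FUNCTIONS live in tools.py; the entries below are stand-ins):

```python
# Stand-ins for tools.TOOL_SCHEMAS / tools.TOOL_FUNCTIONS
TOOL_SCHEMAS = {
    "read_file": {"type": "function", "function": {"name": "read_file"}},
    "write_file": {"type": "function", "function": {"name": "write_file"}},
}
TOOL_FUNCTIONS = {
    "read_file": lambda path: f"contents of {path}",
    "write_file": lambda path, text: f"wrote {len(text)} chars to {path}",
}

def select_tools(tool_names):
    """Mirror Agent.__init__: keep only tools that exist in both tables,
    dropping any name with no schema or no implementation."""
    schemas = [TOOL_SCHEMAS[t] for t in tool_names if t in TOOL_SCHEMAS]
    functions = {t: TOOL_FUNCTIONS[t] for t in tool_names if t in TOOL_FUNCTIONS}
    return schemas, functions

schemas, functions = select_tools(["read_file", "imaginary_tool"])
```

Here `"imaginary_tool"` vanishes from both the schemas list and the function map rather than raising a KeyError at agent construction time.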
@@ -0,0 +1 @@
+ """Specialized agents package."""
@@ -0,0 +1,28 @@
+ """
+ Architect Agent — designs systems, creates plans, defines structure.
+ """
+
+ from agent import Agent
+
+ SYSTEM_PROMPT = """You are the Architect — a senior software architect.
+
+ YOUR JOB:
+ - Design system architecture and file structure
+ - Create technical plans and specifications
+ - Define APIs, data models, and component boundaries
+ - Write your plans to the shared workspace so other agents can follow them
+
+ RULES:
+ - Think before you act. Lay out the full design before any code is written.
+ - Use `write_plan` to save your architecture docs to the workspace.
+ - Use `list_files` and `read_workspace` to understand what already exists.
+ - Be specific: name files, define interfaces, describe data flow.
+ - Output clean, well-structured markdown plans.
+ """
+
+
+ def create() -> Agent:
+     return Agent(
+         name="Architect",
+         role=SYSTEM_PROMPT,
+         tool_names=["write_plan", "read_workspace", "list_files", "read_file"],
+     )
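
Every agent module exposes the same `create()` factory signature, which lets an orchestrator treat agents as a name-to-factory table and instantiate them lazily by the name a JSON plan uses. A self-contained sketch of that lookup pattern (factories are stubbed here; a real table would import the agent modules in this package):

```python
from types import SimpleNamespace

def make_factory(name, tools):
    """Stub for a module-level create(); returns a minimal agent-like object."""
    return lambda: SimpleNamespace(name=name, tool_names=tools)

# Stand-in for a registry built from agents/architect.py, agents/bug_fixer.py, ...
AGENT_FACTORIES = {
    "Architect": make_factory("Architect", ["write_plan", "read_workspace"]),
    "BugFixer": make_factory("BugFixer", ["run_command", "edit_lines"]),
}

def get_agent(name):
    """Instantiate the agent a plan step names; KeyError flags unknown agents."""
    return AGENT_FACTORIES[name]()
```

Building each agent fresh per task (rather than caching instances) keeps runs independent, at negligible cost since construction is just string and dict assembly.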
@@ -0,0 +1,40 @@
+ """
+ Bug Fixer Agent — reviews code, spots bugs, applies fixes.
+ """
+
+ from agent import Agent
+
+ SYSTEM_PROMPT = """You are the Bug Fixer. You run code, find errors, and fix them.
+
+ WORKFLOW — follow this exact order:
+ 1. RUN the code first. Pick the correct runtime based on file extension:
+    - .py → run_command({"command": "python3 filename.py"})
+    - .js → run_command({"command": "node filename.js"})
+    - .ts → run_command({"command": "npx tsx filename.ts"})
+    - .go → run_command({"command": "go run filename.go"})
+    IMPORTANT: NEVER use python3 to run .js files! Use node for JavaScript.
+    run_command already runs from workspace/, so just use the filename. Do NOT add 'cd workspace'.
+ 2. READ the error output. It tells you the exact line and problem.
+ 3. Use `read_file` to see the code.
+ 4. Fix it:
+    - Simple error (1-2 lines wrong) → use `edit_lines` on those lines.
+    - Many errors or structural mess → rewrite with `write_file`.
+ 5. RUN again to verify the fix worked.
+ 6. If the same error appears twice, STOP editing and REWRITE with write_file.
+ 7. If you have tried 3 times and cannot fix it, STOP and report what you found.
+
+ SERVERS / LONG-RUNNING PROCESSES (Express, Flask, http.server, FastAPI, etc.):
+ - Do NOT run them with run_command — they never exit and will time out.
+ - Instead: read the code, check for syntax errors and missing imports, fix issues, done.
+ - For JS servers: node --check file.js (parses without executing, so the server never starts)
+ - For Python: python3 -c "import ast; ast.parse(open('file.py').read())"
+
+ NEVER guess at bugs. ALWAYS run first (unless it's a server).
+ """
+
+
+ def create() -> Agent:
+     return Agent(
+         name="BugFixer",
+         role=SYSTEM_PROMPT,
+         tool_names=["read_file", "write_file", "edit_lines", "run_command", "read_workspace", "list_files"],
+     )
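
The server-safe checks described in the prompt can be wrapped as plain helpers. A sketch (in the real system these would be routed through run_command; the JS helper assumes `node` is on PATH, which is why only the Python path is asserted below):

```python
import ast
import subprocess
from pathlib import Path

def python_syntax_ok(path: str) -> bool:
    """Parse the file without executing it — safe even for code that
    would start a server at import time."""
    try:
        ast.parse(Path(path).read_text())
        return True
    except SyntaxError:
        return False

def js_syntax_ok(path: str) -> bool:
    """node --check parses a JS file without running it (assumes node is installed)."""
    result = subprocess.run(["node", "--check", path], capture_output=True)
    return result.returncode == 0

# Demonstration: one valid file, one with a syntax error.
Path("ok.py").write_text("x = 1\n")
Path("bad.py").write_text("def broken(:\n")
```

`ast.parse` catches only syntax errors, not missing imports or runtime bugs, which is why the prompt still tells the agent to read the code as well.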