llama2a-0.1.0.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
llama2a-0.1.0/.gitignore ADDED
@@ -0,0 +1,39 @@
+ # Python
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.so
+ .Python
+ *.egg-info/
+ *.egg
+ dist/
+ build/
+
+ # Virtual environments
+ .venv/
+ venv/
+ ENV/
+
+ # IDE
+ .idea/
+ .vscode/
+ *.swp
+ *.swo
+
+ # Backups (system-generated)
+ .backups/
+ !workspace/.backups
+ !workspace/__pycache__
+
+ # Generated workspace (all apps created by Infinity)
+ # workspace/*
+ !workspace/README.md
+
+ # Runtime data (logs, checklists, plans)
+ # data/*
+ !data/.gitkeep
+ data/requirements.txt
+
+ # OS files
+ .DS_Store
+ Thumbs.db
llama2a-0.1.0/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2024 AgentChain Contributors
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
llama2a-0.1.0/PKG-INFO ADDED
@@ -0,0 +1,387 @@
+ Metadata-Version: 2.4
+ Name: llama2a
+ Version: 0.1.0
+ Summary: A modular multi-agent framework for building AI development workflows with Ollama
+ Project-URL: Homepage, https://github.com/yourusername/agentchain
+ Project-URL: Documentation, https://github.com/yourusername/agentchain#readme
+ Project-URL: Repository, https://github.com/yourusername/agentchain
+ Project-URL: Issues, https://github.com/yourusername/agentchain/issues
+ Author: AgentChain Contributors
+ License: MIT
+ License-File: LICENSE
+ Keywords: agents,ai,automation,code-generation,llm,multi-agent,ollama
+ Classifier: Development Status :: 3 - Alpha
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Operating System :: OS Independent
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
+ Classifier: Topic :: Software Development :: Code Generators
+ Requires-Python: >=3.9
+ Provides-Extra: dev
+ Requires-Dist: black>=23.0; extra == 'dev'
+ Requires-Dist: mypy>=1.0; extra == 'dev'
+ Requires-Dist: pytest-cov>=4.0; extra == 'dev'
+ Requires-Dist: pytest>=7.0; extra == 'dev'
+ Requires-Dist: ruff>=0.1.0; extra == 'dev'
+ Description-Content-Type: text/markdown
+
+ # AgentChain 🔗
+
+ A minimal, zero-dependency Python library for building multi-agent workflows.
+
+ **Fluent API** • **Custom Agents** • **Per-Agent Models** • **Event Hooks** • **PyPI Ready**
+
+ ---
+
+ ## Installation
+
+ ```bash
+ pip install agentchain
+ ```
+
+ Or install from source:
+
+ ```bash
+ git clone https://github.com/yourusername/agentchain
+ cd agentchain
+ pip install -e .
+ ```
+
+ ## Quick Start
+
+ ```python
+ from agentchain import AgentChain
+
+ AgentChain() \
+     .requirements("Build a calculator app with add, subtract, multiply, divide") \
+     .model("deepseek-coder-v2:16b") \
+     .output("./calculator") \
+     .run()
+ ```
+
+ That's it. AgentChain coordinates multiple AI agents to plan and implement your project.
+
+ ---
+
+ ## Core Concepts
+
+ ### The Default Workflow
+
+ AgentChain ships with three agents that form a complete development pipeline:
+
+ | Agent | Role | Responsibility |
+ |-------|------|----------------|
+ | **Contractor** | Orchestrator | Coordinates the workflow, handles retries, verifies completion |
+ | **Planner** | Planning | Breaks requirements into concrete, ordered tasks |
+ | **Coder** | Implementation | Writes code to files based on task specifications |
+
+ ### Fluent API
+
+ Configure everything with method chaining (the chain is wrapped in parentheses so the line breaks are valid Python):
+
+ ```python
+ (
+     AgentChain()
+     .requirements("Your project description")           # What to build
+     .model("llama3:8b")                                 # Default model for all agents
+     .configure("planner", model="llama3:70b")           # Planner gets a bigger model
+     .configure("coder", model="deepseek-coder-v2:16b")  # Coder is specialized
+     .output("./my_project")                             # Where to write files
+     .run()
+ )
+ ```
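The chaining style works because each configuration method returns the chain object itself. A minimal, self-contained sketch of that builder pattern (illustrative names only, not AgentChain's actual internals):

```python
class MiniChain:
    """Toy builder illustrating the fluent-API pattern."""

    def __init__(self):
        self.settings = {}

    def requirements(self, text):
        self.settings["requirements"] = text
        return self  # returning self is what makes .a().b().c() possible

    def model(self, name):
        self.settings["model"] = name
        return self

    def run(self):
        # A real chain would execute agents here; this toy just reports its config.
        return self.settings


result = MiniChain().requirements("Build a calculator").model("llama3:8b").run()
print(result)
```

Each call mutates the shared settings and hands back the same object, so the whole chain reads top to bottom as a single expression.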
+
+ ---
+
+ ## Configuration
+
+ ### Per-Agent Models
+
+ Use the right model for each task:
+
+ ```python
+ (
+     AgentChain()
+     .requirements("Build a REST API")
+     .configure("planner", model="llama3:70b")           # Planning: bigger model
+     .configure("coder", model="deepseek-coder-v2:16b")  # Coding: code-focused model
+     .output("./api")
+     .run()
+ )
+ ```
+
+ ### Agent Parameters
+
+ Pass custom parameters to specific agents:
+
+ ```python
+ (
+     AgentChain()
+     .requirements("Build a game")
+     .configure("planner",
+         model="llama3:70b",
+         max_tasks=20,        # Custom parameter
+         include_tests=True   # Custom parameter
+     )
+     .configure("coder",
+         model="deepseek-coder-v2:16b",
+         style="verbose",     # Custom parameter
+         language="python"    # Custom parameter
+     )
+     .output("./game")
+     .run()
+ )
+ ```
+
+ ### LLM Configuration
+
+ Configure the LLM endpoint:
+
+ ```python
+ from agentchain import ChainConfig
+
+ config = ChainConfig(
+     llm_base_url="http://localhost:11434",  # Ollama default
+     default_model="llama3:8b",
+ )
+
+ (
+     AgentChain(config)
+     .requirements("Build something")
+     .output("./output")
+     .run()
+ )
+ ```
+
+ ---
+
+ ## Custom Agents
+
+ ### Function-Based Agents
+
+ The simplest way to add an agent:
+
+ ```python
+ from agentchain import AgentChain, agent, AgentContext, AgentResult
+
+ @agent(name="reviewer", role="Code Reviewer")
+ def review_code(context: AgentContext, llm) -> AgentResult:
+     code = context.get("code", "")
+
+     response = llm.generate(
+         "Review this code for issues:\n\n" + code,
+         model="llama3:8b"
+     )
+
+     return AgentResult.ok({"review": response})
+
+
+ # Use it
+ (
+     AgentChain()
+     .requirements("Build a CLI tool")
+     .register("reviewer", review_code)  # Add to this chain
+     .output("./cli")
+     .run()
+ )
+ ```
+
+ ### Class-Based Agents
+
+ For more complex agents with state:
+
+ ```python
+ from agentchain import Agent, agent, AgentContext, AgentResult
+
+ @agent(name="tester", role="Test Writer")
+ class TesterAgent(Agent):
+     def execute(self, context: AgentContext) -> AgentResult:
+         code_files = context.get("files", [])
+
+         for file in code_files:
+             test_code = self._llm.generate(
+                 f"Write tests for:\n{file['content']}",
+                 model=self._config.get("model", "llama3:8b")
+             )
+             # Write test file...
+
+         return AgentResult.ok({"tests_created": len(code_files)})
+ ```
+
+ ### Custom Orchestrator
+
+ Replace the entire workflow:
+
+ ```python
+ from agentchain import Orchestrator, orchestrator, AgentContext, AgentResult
+
+ @orchestrator(name="my_workflow", role="Custom Pipeline")
+ class MyOrchestrator(Orchestrator):
+     def execute(self, context: AgentContext) -> AgentResult:
+         # Phase 1: Your planning logic
+         plan_result = self.invoke_agent("planner", context)
+
+         # Phase 2: Your implementation logic
+         for task in plan_result.data.get("tasks", []):
+             self.invoke_agent("coder", AgentContext(
+                 task=task["description"],
+                 inputs=task
+             ))
+
+         # Phase 3: Your custom phase
+         if self.get_agent("tester"):
+             self.invoke_agent("tester", context)
+
+         return AgentResult.ok({"status": "complete"})
+
+
+ # Use it
+ (
+     AgentChain()
+     .requirements("Build something")
+     .orchestrator(MyOrchestrator)
+     .output("./output")
+     .run()
+ )
+ ```
+
+ ---
+
+ ## Events & Callbacks
+
+ Monitor progress with event hooks:
+
+ ```python
+ from agentchain import AgentChain, Event
+
+ def on_progress(event_type, data):
+     print(f"[{event_type}] {data.get('message', '')}")
+
+ def on_task_complete(event_type, data):
+     task = data.get("task", {})
+     print(f"✓ Completed: {task.get('name')}")
+
+ (
+     AgentChain()
+     .requirements("Build an app")
+     .on(Event.PROGRESS, on_progress)
+     .on(Event.TASK_COMPLETE, on_task_complete)
+     .on(Event.AGENT_ERROR, lambda e, d: print(f"Error: {d}"))
+     .output("./app")
+     .run()
+ )
+ ```
+
+ ### Available Events
+
+ | Event | Trigger |
+ |-------|---------|
+ | `Event.CHAIN_START` | Chain execution begins |
+ | `Event.CHAIN_COMPLETE` | Chain execution finishes |
+ | `Event.AGENT_START` | An agent starts executing |
+ | `Event.AGENT_COMPLETE` | An agent finishes |
+ | `Event.AGENT_ERROR` | An agent encounters an error |
+ | `Event.TASK_START` | A task begins |
+ | `Event.TASK_COMPLETE` | A task completes |
+ | `Event.TASK_FAILED` | A task fails |
+ | `Event.PROGRESS` | General progress update |
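A hook system like this usually amounts to a dictionary mapping each event to its registered handlers, with every handler called when the event fires. A rough, self-contained illustration of the idea (an assumed implementation with toy names, not the package's actual code):

```python
from enum import Enum, auto

class Event(Enum):
    PROGRESS = auto()
    TASK_COMPLETE = auto()

class EventBus:
    """Toy event hub: .on() registers handlers, .emit() fans an event out."""

    def __init__(self):
        self._handlers = {}  # Event -> list of callables

    def on(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)
        return self  # fluent, mirroring AgentChain.on()

    def emit(self, event, data):
        # Call every handler registered for this event, in registration order.
        for handler in self._handlers.get(event, []):
            handler(event, data)

bus = EventBus()
seen = []
bus.on(Event.PROGRESS, lambda e, d: seen.append(d["message"]))
bus.emit(Event.PROGRESS, {"message": "planning started"})
print(seen)  # ['planning started']
```

Events with no registered handler are simply dropped, which is why unhandled hooks cost nothing.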
+
+ ---
+
+ ## CLI Usage
+
+ AgentChain includes a command-line interface:
+
+ ```bash
+ # Basic usage
+ agentchain "Build a todo app with Flask" --output ./todo-app
+
+ # With a specific model
+ agentchain "Build a calculator" --model deepseek-coder-v2:16b --output ./calc
+
+ # Verbose output
+ agentchain "Build a game" --output ./game --verbose
+ ```
+
+ ---
+
+ ## API Reference
+
+ ### AgentChain
+
+ The main entry point for building chains.
+
+ ```python
+ class AgentChain:
+     def requirements(self, text: str) -> AgentChain
+     def model(self, name: str) -> AgentChain
+     def configure(self, agent_name: str, **kwargs) -> AgentChain
+     def output(self, directory: str) -> AgentChain
+     def orchestrator(self, cls: Type[Orchestrator]) -> AgentChain
+     def register(self, name: str, agent: Agent | Callable) -> AgentChain
+     def on(self, event: Event, handler: Callable) -> AgentChain
+     def run(self) -> AgentResult
+ ```
+
+ ### AgentContext
+
+ Data passed to agents:
+
+ ```python
+ class AgentContext:
+     task: str       # Current task description
+     inputs: dict    # Input data from previous agents
+     workspace: str  # Output directory path
+
+     def get(self, key: str, default=None) -> Any
+ ```
+
+ ### AgentResult
+
+ Return value from agents:
+
+ ```python
+ class AgentResult:
+     success: bool      # Whether execution succeeded
+     data: dict         # Output data
+     error: str | None  # Error message if failed
+
+     @classmethod
+     def ok(cls, data: dict) -> AgentResult
+
+     @classmethod
+     def fail(cls, error: str) -> AgentResult
+ ```
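The `ok`/`fail` constructors are a common result-object idiom: one success path carrying data, one failure path carrying an error. A stand-alone approximation of how a class with this interface could behave (a sketch derived from the signatures above, not the package's code):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Result:
    """Sketch of an AgentResult-like value object (hypothetical, for illustration)."""
    success: bool
    data: dict = field(default_factory=dict)
    error: Optional[str] = None

    @classmethod
    def ok(cls, data: dict) -> "Result":
        # Success path: carries output data, no error.
        return cls(success=True, data=data)

    @classmethod
    def fail(cls, error: str) -> "Result":
        # Failure path: carries an error message, empty data.
        return cls(success=False, error=error)


good = Result.ok({"files": 3})
bad = Result.fail("model not available")
print(good.success, bad.error)  # True model not available
```

Callers branch on `success` and read either `data` or `error`, so one return type covers both outcomes without exceptions.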
+
+ ---
+
+ ## Requirements
+
+ - **Python**: 3.9+
+ - **LLM Backend**: Ollama running at `localhost:11434` (configurable)
+ - **Dependencies**: None (stdlib only!)
+
+ ---
+
+ ## Examples
+
+ See the [examples/](./examples) directory:
+
+ - [minimal.py](./examples/minimal.py) - Basic usage
+ - [multi_model.py](./examples/multi_model.py) - Different models per agent
+ - [custom_agent.py](./examples/custom_agent.py) - Creating custom agents
+ - [custom_orchestrator.py](./examples/custom_orchestrator.py) - Custom workflow
+
+ ---
+
+ ## License
+
+ MIT License - see [LICENSE](./LICENSE)
+
+ ---
+
+ ## Contributing
+
+ Contributions welcome! Please read our contributing guidelines first.
+
+ ```bash
+ # Development install
+ pip install -e ".[dev]"
+
+ # Run tests
+ pytest
+
+ # Format code
+ black src/
+ ```