smalltask 0.2.0__tar.gz

MIT License

Copyright (c) 2025 smalltask contributors

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
Metadata-Version: 2.4
Name: smalltask
Version: 0.2.0
Summary: Define tools and agents as code. Run them anywhere.
Author: Gabriel Moffa
License: MIT
Project-URL: Homepage, https://github.com/gabrielmoffa/smalltask
Project-URL: Repository, https://github.com/gabrielmoffa/smalltask
Project-URL: Issues, https://github.com/gabrielmoffa/smalltask/issues
Keywords: ai,agents,llm,automation,tools
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: httpx>=0.27.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: click>=8.0
Provides-Extra: dev
Requires-Dist: pytest>=8.0; extra == "dev"
Requires-Dist: build>=1.0; extra == "dev"
Requires-Dist: twine>=5.0; extra == "dev"
Dynamic: license-file
```

# smalltask

Define tools and agents as code. Run them anywhere.

```bash
pip install smalltask
```

---

smalltask is a lightweight framework for building scheduled AI agents. Tools are Python functions. Agents are YAML files. Both live in your git repo — diffable, reviewable, auditable.

Bring your own scheduler (Airflow, cron, GitHub Actions). Bring your own LLM (any OpenAI-compatible endpoint).

---

## Quickstart

```bash
smalltask init                    # scaffold tools/ and agents/
smalltask init --template github  # scaffold GitHub tools + PR digest agent
```

Then run:

```bash
smalltask run agents/example.yaml --var topic="revenue drop" --verbose
```

---

## How it works

**Tools** are `@tool`-decorated Python functions. The function is the security boundary — the agent can only do what you explicitly expose.

```python
# tools/orders.py
from smalltask import tool

@tool
def get_order_summary(days: int) -> dict:
    """Return aggregated order stats for the last N days."""
    ...

@tool
def get_top_customers(days: int, limit: int) -> list:
    """Return the top customers by spend in the last N days."""
    ...
```

**Agents** are YAML files. They declare the prompt, which tools to use, and which LLM endpoint to call.

```yaml
# agents/weekly_review.yaml
name: weekly_review
description: Weekly order digest with anomaly detection.

llm:
  url: https://openrouter.ai/api/v1/chat/completions
  model: anthropic/claude-3.5-sonnet
  api_key_env: OPENROUTER_API_KEY

prompt: |
  You are a data analyst reviewing the last 7 days of orders.
  Summarise volume, revenue, refund rate, and top customers.
  Flag anything unusual. Be direct. Use numbers.

tools:
  - orders.get_order_summary
  - orders.get_top_customers
```

Reference tools as `file.function` to be explicit and avoid name collisions.

Use `$varname` in prompts for runtime variables:

```yaml
prompt: |
  Review orders for the week of $week.
  ...
```

```bash
smalltask run agents/weekly_review.yaml --var week=2024-W01
```

---

## Project structure

```
your-repo/
├── tools/
│   ├── orders.py              # get_order_summary, get_top_customers, ...
│   ├── github.py              # list_open_prs, get_workflow_runs, ...
│   └── slack.py               # post_message, ...
├── agents/
│   ├── weekly_review.yaml
│   └── github_pr_digest.yaml
└── dags/
    └── weekly_review_dag.py   # optional: Airflow integration
```

Tools are discovered from the `tools/` directory. Agent YAMLs reference them by name.

---

## Schedulers

smalltask doesn't own scheduling — it drops into whatever you already have.

### GitHub Actions

The fastest way to get a scheduled agent running. No infrastructure required.

```yaml
# .github/workflows/weekly_review.yml
name: Weekly order review

on:
  schedule:
    - cron: '0 9 * * 1'  # every Monday at 9am UTC
  workflow_dispatch:     # also allow manual runs from the GitHub UI

jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - run: pip install smalltask

      - run: smalltask run agents/weekly_review.yaml --var week=$(date +%Y-W%V)
        env:
          OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
```

Store your API key under **Settings → Secrets → Actions** in the GitHub repo.

### Cron

```bash
# crontab -e
0 9 * * 1 cd /path/to/repo && smalltask run agents/weekly_review.yaml --var week=$(date +\%Y-W\%V) >> /var/log/smalltask.log 2>&1
```

### Airflow

```python
from airflow.operators.python import PythonOperator
from smalltask.runner import run_agent
from pathlib import Path

PythonOperator(
    task_id="weekly_review",
    python_callable=run_agent,
    op_kwargs={
        "agent_path": Path("agents/weekly_review.yaml"),
        "input_vars": {"week": "{{ ds }}"},
    },
)
```

### Python

```python
from smalltask.runner import run_agent
from pathlib import Path

result = run_agent(
    agent_path=Path("agents/weekly_review.yaml"),
    input_vars={"week": "2024-W01"},
)
```

---

## Agent YAML reference

| Field | Required | Description |
|---|---|---|
| `name` | yes | Agent identifier |
| `description` | no | Human-readable description |
| `prompt` | yes | System prompt. Supports `$var` interpolation. |
| `tools` | yes | List of tool names (`file.function` or bare `function`) |
| `llm.url` | yes | OpenAI-compatible endpoint URL |
| `llm.model` | yes | Model identifier |
| `llm.api_key_env` | no | Name of env var holding the API key |
| `llm.max_tokens` | no | Max tokens per LLM call (default: 4096) |
| `llm.timeout` | no | HTTP timeout in seconds (default: 120) |
| `llm.extra_headers` | no | Additional HTTP headers (e.g. `HTTP-Referer`) |
| `max_iterations` | no | Max agentic loop iterations (default: 20) |
| `max_total_tokens` | no | Token budget across all iterations — stops early if exceeded (default: no limit) |
| `pre_hook` | no | List of tool calls to run before the LLM loop (see [Hooks](#hooks)) |
| `post_hook` | no | List of tool calls to run after the LLM loop (see [Hooks](#hooks)) |

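As an illustration, a hypothetical agent that sets several of the optional fields (the values are placeholders, not recommendations; endpoint and model as in the earlier examples):

```yaml
name: weekly_review
prompt: |
  Review orders for the week of $week.

llm:
  url: https://openrouter.ai/api/v1/chat/completions
  model: anthropic/claude-3.5-sonnet
  api_key_env: OPENROUTER_API_KEY
  max_tokens: 2048   # cap each LLM call
  timeout: 60        # fail faster than the 120s default
  extra_headers:
    HTTP-Referer: https://github.com/gabrielmoffa/smalltask

max_iterations: 10       # tighter agentic loop
max_total_tokens: 50000  # stop early if the token budget is exhausted

tools:
  - orders.get_order_summary
```
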

---

## Hooks

Hooks let you run deterministic tool calls before and after the LLM loop. They use the same tools you already have — no new concepts.

```yaml
name: metrics_alert
prompt: |
  Analyze the attached metrics. Flag anomalies. Be direct.

llm:
  url: https://openrouter.ai/api/v1/chat/completions
  model: anthropic/claude-3.5-sonnet
  api_key_env: OPENROUTER_API_KEY

tools:
  - analysis.plot_revenue
  - analysis.get_summary

pre_hook:
  - analysis.snapshot_metrics:
      days: 7
  - analysis.check_threshold:
      metric: error_rate
      max: 0.05

post_hook:
  - reporting.upload_charts
  - reporting.send_slack_report:
      channel: "#alerts"
```

### Pre-hooks

Pre-hooks run sequentially before the LLM. Their results are injected into the prompt so the LLM can see the data.

Each entry is a tool name with optional args:

```yaml
pre_hook:
  - orders.get_summary:
      days: 7
  - orders.check_threshold:
      metric: refund_rate
      max: 0.05
```

**Skip gate** — if a pre-hook returns `{"skip": True}`, the agent stops immediately without calling the LLM. Use this to avoid wasting tokens when there's nothing to act on:

```python
@tool
def check_threshold(metric: str, max: float) -> dict:
    """Only run the agent if a metric exceeds a threshold."""
    value = get_current_value(metric)
    if value <= max:
        return {"skip": True, "reason": f"{metric} is {value}, below {max}"}
    return {"value": value}
```

### Post-hooks

Post-hooks run after the LLM finishes. The framework auto-injects two special parameters if your tool accepts them:

- **`output`** (`str`) — the LLM's final response text.
- **`tool_results`** (`list`) — every tool call made during the agent loop. Each entry is `{"tool": name, "args": {...}, "result": ...}`.

Just declare the parameters you need — the framework fills them in:

```python
@tool
def send_slack_report(output: str, tool_results: list, channel: str) -> str:
    """Post the LLM report and any chart images to Slack."""
    # Guard with isinstance: results are not always strings.
    charts = [r["result"] for r in tool_results
              if isinstance(r["result"], str) and r["result"].endswith(".png")]
    post_to_slack(channel=channel, text=output, attachments=charts)
    return f"sent to {channel} with {len(charts)} charts"
```

```yaml
post_hook:
  - slack.send_slack_report:
      channel: "#alerts"
```

The `channel` comes from the YAML. The `output` and `tool_results` are injected by the framework.

You can filter `tool_results` however you want — by tool name, by result content, by args:

```python
# Get all chart paths
charts = [r["result"] for r in tool_results if r["tool"].startswith("plot_")]

# Get results from a specific tool
summaries = [r["result"] for r in tool_results if r["tool"] == "analysis.get_summary"]

# Get all tool calls that used a specific argument
weekly = [r for r in tool_results if r["args"].get("days") == 7]
```

---

## Multi-agent

Sub-agents can be called as tools. The parent agent passes a task string; the sub-agent runs its full loop and returns a string result.

```python
from smalltask.runner import agent_tool, run_agent
from pathlib import Path

run_agent(
    Path("agents/orchestrator.yaml"),
    extra_tools={
        "summarize": agent_tool(
            name="summarize",
            agent_path=Path("agents/summarize.yaml"),
            description="Summarise a block of text. Pass it as 'task'.",
        )
    },
)
```

The orchestrator YAML lists `summarize` in its `tools:` section like any other tool.
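For illustration, the orchestrator YAML might look like this (the prompt is hypothetical; the tool name must match the `extra_tools` key):

```yaml
# agents/orchestrator.yaml
name: orchestrator
prompt: |
  Read the attached report. Delegate each long section
  to the summarize tool, then combine the results.

llm:
  url: https://openrouter.ai/api/v1/chat/completions
  model: anthropic/claude-3.5-sonnet
  api_key_env: OPENROUTER_API_KEY

tools:
  - summarize
```
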

---

## Templates

`smalltask init --list` shows available starter templates:

| Template | Scaffolds |
|---|---|
| `default` | Generic stub tools + example agent |
| `github` | GitHub REST API tools + PR digest agent |

```bash
smalltask init --template github
```

---

## LLM compatibility

smalltask uses prompt-based tool calling over raw HTTP — no SDK, no provider lock-in. It works with any OpenAI-compatible endpoint:

- [OpenRouter](https://openrouter.ai) — access any model via one API key
- [Ollama](https://ollama.com) — local models
- [Groq](https://groq.com)
- [Together AI](https://www.together.ai)
- Anthropic, OpenAI, Gemini via their OpenAI-compatible layers
- Any Bedrock / Azure endpoint with an OpenAI-compatible adapter
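
Switching providers is just a different `llm` block. For example, a local Ollama model (assuming Ollama's default port and an already-pulled model; no API key is needed, so `api_key_env` is omitted):

```yaml
llm:
  url: http://localhost:11434/v1/chat/completions
  model: llama3.1
```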

---

## Contributing

```bash
git clone https://github.com/gabrielmoffa/smalltask
cd smalltask
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
```

Run the tests:

```bash
pytest tests/
```

The tests cover core logic (schema generation, tool loading, prompt parsing) without requiring a real LLM or API key. If you change `loader.py` or `prompt_tools.py`, run them before pushing.

---

## License

MIT