rillpy 0.1.0__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- rillpy-0.1.0/LICENSE +21 -0
- rillpy-0.1.0/MANIFEST.in +4 -0
- rillpy-0.1.0/PKG-INFO +478 -0
- rillpy-0.1.0/README.md +454 -0
- rillpy-0.1.0/pyproject.toml +37 -0
- rillpy-0.1.0/rill/__init__.py +3 -0
- rillpy-0.1.0/rill/flow.py +489 -0
- rillpy-0.1.0/rill/utils/__init__.py +0 -0
- rillpy-0.1.0/rill/utils/logger.py +133 -0
- rillpy-0.1.0/rillpy.egg-info/PKG-INFO +478 -0
- rillpy-0.1.0/rillpy.egg-info/SOURCES.txt +14 -0
- rillpy-0.1.0/rillpy.egg-info/dependency_links.txt +1 -0
- rillpy-0.1.0/rillpy.egg-info/requires.txt +3 -0
- rillpy-0.1.0/rillpy.egg-info/top_level.txt +1 -0
- rillpy-0.1.0/setup.cfg +4 -0
- rillpy-0.1.0/setup.py +5 -0
rillpy-0.1.0/LICENSE
ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2025 zhixiangxue

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
rillpy-0.1.0/MANIFEST.in
ADDED
rillpy-0.1.0/PKG-INFO
ADDED
@@ -0,0 +1,478 @@
Metadata-Version: 2.4
Name: rillpy
Version: 0.1.0
Summary: Rill - AI Agent Framework
Author: zhixiangxue
License-Expression: MIT
Project-URL: Homepage, https://github.com/zhixiangxue/rill-ai
Project-URL: Repository, https://github.com/zhixiangxue/rill-ai
Project-URL: Issues, https://github.com/zhixiangxue/rill-ai/issues
Keywords: ai,agent,framework,workflow
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: pydantic>=2.0.0
Requires-Dist: loguru>=0.7.0
Requires-Dist: chakpy>=0.1.4
Dynamic: license-file
<div align="center">

<a href="https://github.com/zhixiangxue/rill-ai"><img src="https://raw.githubusercontent.com/zhixiangxue/rill-ai/main/docs/assets/logo.png" alt="Rill Logo" width="120"></a>

[](https://badge.fury.io/py/rillpy)
[](https://pypi.org/project/rillpy/)
[](https://github.com/zhixiangxue/rill-ai/blob/main/LICENSE)
[](https://pypi.org/project/rillpy/)
[](https://github.com/zhixiangxue/rill-ai)

**A zero-dependency flow orchestration kernel for building AI workflows your way.**

**A minimal orchestration layer that lets you use any LLM client, any tools, and any storage to build your AI agent applications.**

Inspired by CrewAI and LangGraph, designed to be lighter and simpler.

</div>

---

## Why Rill?

Building AI agents shouldn't require a heavy framework. Sometimes you just need a small orchestration piece:

- Want to use your own LLM client (chak / OpenAI SDK / Anthropic SDK)? ✅
- Want to use your own tools (functions / MCP servers / custom implementations)? ✅
- Want to keep your codebase lightweight and dependencies minimal? ✅
- Prefer code over YAML/JSON configs? Code is the orchestration. ✅

**Rill is just an orchestration component** - bring your own pieces, Rill handles the flow.

---
## Core Philosophy

Rill embraces the **"code is flow"** design philosophy pioneered by CrewAI - use decorators to define nodes and Python functions to express logic; no YAML or JSON configs needed.

Built on this foundation, Rill adds:

1. **Forward routing**: Declare the next step with `goto` right where you are, instead of via reverse subscription
2. **Zero binding**: The framework handles orchestration only; everything else is your call

*Special thanks to CrewAI and LangGraph for inspiring Rill's design.*

---
## Quick Start

### Installation

```bash
# From PyPI (coming soon)
pip install rillpy

# From GitHub
pip install git+https://github.com/zhixiangxue/rill-ai.git@main

# Local development
git clone https://github.com/zhixiangxue/rill-ai.git
cd rill-ai
pip install -e .
```
### Build a RAG workflow in 30 seconds

```python
from rill import Flow, node, DYNAMIC, goto

class MyRAGFlow(Flow):
    @node(start=True, goto=DYNAMIC)
    async def query(self, user_input):
        # Use your own LLM client (e.g., chak)
        from chak import Conversation
        conv = Conversation("openai/gpt-4o-mini", api_key="YOUR_KEY")
        result = await conv.asend(user_input)

        # Decide routing based on the LLM response
        if "search" in result.content.lower():
            # A list means parallel: trigger vector search and web search simultaneously
            return goto([self.vector_search, self.web_search], user_input)
        return goto(self.answer, result.content)

    @node(goto="answer")
    async def vector_search(self, query):
        # Use your favorite vector database
        return await my_chromadb.search(query)

    @node(goto="answer")
    async def web_search(self, query):
        # Use your own search tool
        return await my_searxng.search(query)

    @node()
    async def answer(self, sources):
        # Multiple predecessors auto-merge: sources = {"vector_search": [...], "web_search": [...]}
        from chak import Conversation
        conv = Conversation("openai/gpt-4o-mini", api_key="YOUR_KEY")
        return await conv.asend(str(sources))

# Run
await MyRAGFlow().run("What is quantum entanglement?")
```

**Notice:**
- ✅ LLM client: `chak` (or OpenAI SDK / Anthropic SDK, your choice)
- ✅ Vector database: `chromadb` (or Pinecone / Qdrant, your choice)
- ✅ Search tool: your own implementation (or MCP / LangChain Tools, your choice)
- ✅ Rill only decides: which node runs first, which run in parallel, and how inputs are merged
- ✅ Data flows through return values (node → node) and shared state (`self.state`)

---
## Core Features

### 🌱 Zero Binding

The framework doesn't care which LLM, tools, or databases you use. It only provides orchestration.

### 🪴 Code is Flow

Define nodes with the `@node` decorator and declare routing with `goto` - no DSL needed.

### 🌻 Forward Declaration

Use `goto(next, data)` to specify the next step directly in the current node. It matches how humans think.

### 🌾 List Means Parallel

`goto([A, B], data)` automatically triggers `asyncio.gather` for parallel execution.

### 🌿 Auto Input Merge

When multiple predecessors point to the same node, the framework auto-merges their outputs into a `{pred_name: output}` dict.
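Framework aside, these two behaviors boil down to stdlib `asyncio`. A minimal framework-free sketch of fan-out then merge (the node names and the merge shape are illustrative, not Rill's internal code):

```python
import asyncio

async def vector_search(query: str) -> list[str]:
    # Stand-in for a real vector-store lookup
    await asyncio.sleep(0)
    return [f"vec:{query}"]

async def web_search(query: str) -> list[str]:
    # Stand-in for a real web search
    await asyncio.sleep(0)
    return [f"web:{query}"]

async def fan_out_and_merge(query: str) -> dict[str, list[str]]:
    # "List means parallel": run both branches with asyncio.gather
    branches = {"vector_search": vector_search, "web_search": web_search}
    results = await asyncio.gather(*(fn(query) for fn in branches.values()))
    # "Auto input merge": the successor receives {pred_name: output}
    return dict(zip(branches.keys(), results))

merged = asyncio.run(fan_out_and_merge("rill"))
print(merged)  # {'vector_search': ['vec:rill'], 'web_search': ['web:rill']}
```

Because `asyncio.gather` preserves argument order, zipping branch names back onto results is enough to build the merged dict.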

### 🍀 Safety Guards

Graph validation (start node / cycles / reachability) plus `max_loop` to prevent infinite loops.

### 🌲 Observable

`Flow.stats()` tracks per-node execution time, and the `logger` traces the execution flow.

### 🪴 Shared State Management

Rill provides `FlowState` for sharing data across nodes - simpler than LangGraph's in-node state updates.

**TODO**: Parallel nodes updating state simultaneously may hit thread-safety issues. Community contributions welcome.

### 🌾 Return Value as Input

A predecessor node's return value becomes the successor node's input parameter. No need to put everything in state.

---
## Common Patterns

### Conditional Branching + Parallel Execution

```python
class ResearchFlow(Flow):
    @node(start=True, goto=DYNAMIC)
    async def decide(self, topic):
        complexity = await self.analyze_complexity(topic)

        if complexity > 0.8:
            # High complexity: parallel deep research
            return goto([self.academic_search, self.expert_interview], topic)
        else:
            # Low complexity: quick search
            return goto(self.web_search, topic)

    @node(goto="synthesize")
    async def academic_search(self, topic):
        return await search_papers(topic)

    @node(goto="synthesize")
    async def expert_interview(self, topic):
        return await interview_experts(topic)

    @node(goto="synthesize")
    async def web_search(self, topic):
        return await search_web(topic)

    @node()
    async def synthesize(self, sources):
        # Auto-merge: sources may be {"academic_search": ..., "expert_interview": ...}
        # or just the web_search output (single predecessor)
        return await generate_report(sources)
```
### Loop + Exit Condition

```python
class IterativeFlow(Flow):
    @node(start=True, goto=DYNAMIC, max_loop=5)
    async def generate(self, prompt):
        result = await llm_generate(prompt)
        quality = await self.evaluate(result)

        if quality > 0.9:
            return goto(self.finalize, result)
        else:
            # Loop back with feedback
            return goto(self.generate, {"prompt": prompt, "feedback": quality})

    @node()
    async def finalize(self, result):
        return result
```
### Using Shared State

```python
class MyWorkflow(Flow):
    @node(start=True, goto=["fetch_data", "process_config"])
    async def begin(self, inputs):
        # Store inputs in state for other nodes to access
        self.state.user_id = inputs["user_id"]
        self.state.query = inputs["query"]
        self.state.results = []  # Initialize a shared collection

    @node(goto="merge")
    async def fetch_data(self, previous_result):
        # Access state from a parallel node
        data = await api_call(self.state.user_id, self.state.query)

        # Accumulate results in state
        self.state.results.append({"source": "api", "data": data})
        return data

    @node(goto="merge")
    async def process_config(self, previous_result):
        # Another parallel node accessing the same state
        config = load_config(self.state.user_id)

        # Also update shared state
        self.state.config = config
        return config

    @node()
    async def merge(self, inputs):
        # inputs = {"fetch_data": ..., "process_config": ...}
        # state contains accumulated data from all nodes
        final_result = combine(
            inputs["fetch_data"],
            inputs["process_config"],
            self.state.results  # Access shared state
        )

        self.state.final_output = final_result
        return final_result

# Run the workflow
flow = MyWorkflow()
final_state = await flow.run({
    "user_id": 123,
    "query": "hello"
})  # 🎯 Rill auto-converts your dict to a Pydantic FlowState object!

# Access the final state (Pydantic model)
print(final_state.final_output)  # 🎉 Flow.run() returns the final FlowState
print(final_state.user_id)       # Access any field stored during execution
print(final_state.results)       # All accumulated data persists here
```

**State vs Return Value:**
- **Return value**: Direct data passing from predecessor to successor (the main data pipeline)
- **State**: Shared context accessible from any node (for metadata, counters, cross-branch data)
- **Key difference**: Return values flow through edges; state persists across the entire workflow
- Use return values for the primary data flow and state for auxiliary data that multiple nodes need

**Known Issue:**
- ⚠️ Parallel nodes updating state simultaneously may cause race conditions
- 🔧 TODO: A thread-safe state update mechanism is needed (community contributions welcome)

---
## When to Use Rill?

| Your Situation | Recommendation |
|----------------|----------------|
| Quick GPT app, don't want to manage anything | 👉 LangChain / LangGraph (all-in-one convenience) |
| Want to use my own LLM client (chak / OpenAI SDK) + custom tools | 👉 **Rill** (orchestration freedom) |
| Just want a pure orchestration layer and pick other components myself | 👉 **Rill** |

---
## FAQ

**Q: What's the difference between Rill and LangGraph?**
A: LangGraph is an all-in-one suite (orchestration + LLM + tools + memory); Rill handles only the orchestration layer, and the other components are your choice.

**Q: I'm already using LangChain Tools - can I use Rill?**
A: Yes! Rill doesn't care where your tools come from; just call them directly in nodes.

**Q: Does Rill support state persistence?**
A: The current `FlowState` is in-memory state (a Pydantic model). Persistence is your choice (Redis / PostgreSQL / files); Rill is not bound to any storage solution.
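If you do want persistence, a minimal sketch is to snapshot the state as JSON yourself. A plain dict stands in for the Pydantic `FlowState` here (with real Pydantic you would use `model_dump_json()` / `model_validate_json()`); the file name and fields are illustrative:

```python
import json
import tempfile
from pathlib import Path

def save_state(state: dict, path: Path) -> None:
    # Snapshot the flow state to disk; Redis or PostgreSQL would work the same way
    path.write_text(json.dumps(state))

def load_state(path: Path) -> dict:
    # Restore a snapshot to inspect (or resume) a finished run
    return json.loads(path.read_text())

path = Path(tempfile.gettempdir()) / "flow_state.json"
state = {"user_id": 123, "query": "hello", "results": [{"source": "api"}]}
save_state(state, path)
restored = load_state(path)
```

Since `Flow.run()` returns the final state, calling `save_state` on its dump after each run is all the checkpointing many apps need.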

**Q: I want to use my own LLM client (e.g., chak) - how do I integrate it?**
A: Just `import chak` in a node and call it; Rill doesn't care which LLM you use. Example:
```python
@node(start=True, goto="process")
async def query(self, user_input):
    from chak import Conversation  # Your LLM client
    conv = Conversation("openai/gpt-4o-mini", api_key="YOUR_KEY")
    return await conv.asend(user_input)
```

**Q: When do I need `max_loop`?**
A: When your flow has cycles (e.g., "generate → evaluate → regenerate"), use `max_loop` to limit loop iterations and prevent infinite loops.

**Q: How does input merging work?**
A: When multiple predecessor nodes point to the same target node, the framework waits for all predecessors to complete, merges their outputs into a dict `{pred_name: output}`, and passes it to the target node.

**Q: What's the difference between state and return value?**
A: They serve different purposes:
- **Node return value**: Passes data to the next node(s) through the flow edge. This is the main data pipeline.
- **State (`self.state`)**: A shared Pydantic object accessible from all nodes throughout the workflow. Use it for metadata, counters, configuration, or data that multiple branches need to access.
- **Example**: Return the processed result to the next node, but store statistics/metadata in state.

**Q: Is state update thread-safe in parallel nodes?**
A: Not yet. Parallel nodes updating state simultaneously may cause race conditions; this is a known TODO. For now, avoid state updates in parallel nodes or use return values instead.

---
## API Reference

### `Flow`

The orchestration engine - inherit from it to define your workflow.

```python
class MyFlow(Flow):
    def __init__(self, initial_state=None, max_steps=1000, validate=True):
        super().__init__(initial_state, max_steps, validate)
```

- `initial_state`: Initial state dict or Pydantic model
- `max_steps`: Max execution steps (prevents infinite loops)
- `validate`: Whether to validate the graph before execution

### `@node`

Decorator that marks methods as executable flow nodes.

```python
@node(start=False, goto=None, max_loop=None)
def my_node(self, inputs):
    pass
```

- `start`: Whether this is the start node
- `goto`: Next node(s); can be:
  - `None`: No successors (end node)
  - `"node_name"`: Single next node
  - `["node1", "node2"]`: Multiple nodes (parallel execution)
  - `DYNAMIC`: Runtime-determined routing (the node must return `goto(...)`)
- `max_loop`: Max loop count for this node (caps iterations on cycles)
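For the curious, a decorator of this shape typically just tags the method with routing metadata that the engine reads when assembling the graph. A hypothetical sketch - this is not Rill's actual implementation, and `__node_meta__` is an invented attribute name:

```python
def node(start=False, goto=None, max_loop=None):
    # Attach routing metadata to the function; an engine would scan the
    # class for tagged methods to build the execution graph at startup.
    def wrap(fn):
        fn.__node_meta__ = {"start": start, "goto": goto, "max_loop": max_loop}
        return fn
    return wrap

class Demo:
    @node(start=True, goto="done")
    async def begin(self, inputs):
        return inputs

meta = Demo.begin.__node_meta__
print(meta)  # {'start': True, 'goto': 'done', 'max_loop': None}
```

Because the decorator returns the original function unchanged, the node stays an ordinary awaitable method - the metadata rides along on the attribute.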
### `goto(target, data)`

Constructs a routing decision for DYNAMIC nodes.

```python
@node(goto=DYNAMIC)
async def decide(self, inputs):
    if condition:
        return goto(self.next_node, data)
    else:
        return goto([self.task_a, self.task_b], data)  # Parallel
```

- `target`: A single node or a list of nodes (a list triggers parallel execution)
- `data`: Payload passed to the target node(s)

### `DYNAMIC`

Constant for runtime-determined routing. Use together with `goto()`.
### `FlowState`

Shared mutable state container (a Pydantic model).

```python
# Access state from any node
self.state.custom_field = "value"  # Runtime field injection
self.state.user_id = 123
self.state.results = []  # Shared collection

# Two independent data channels:
# 1. Return value: flows through edges (node → successor)
# 2. State: shared context that persists across the entire workflow
```

**Known Issue**: Parallel nodes updating state simultaneously may cause race conditions (TODO).

### `Flow.run(initial_input)`

Executes the flow.

```python
result_state = await flow.run({"user_input": "Hello"})
```
### `Flow.stats()`

Gets execution statistics.

```python
stats = flow.stats()
# {
#     "timing": {
#         "total_duration": 2.35,
#         "nodes": {
#             "query": {"duration": 1.2, "percentage": 51.06},
#             "search": {"duration": 0.8, "percentage": 34.04}
#         }
#     }
# }
```
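As a sanity check on the sample numbers, each node's `percentage` is just its share of `total_duration` (an illustrative calculation, not Rill's code):

```python
total_duration = 2.35
durations = {"query": 1.2, "search": 0.8}

# percentage = duration / total_duration * 100, rounded to two decimals
percentages = {name: round(d / total_duration * 100, 2) for name, d in durations.items()}
print(percentages)  # {'query': 51.06, 'search': 34.04}
```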
---

## Architecture

```
┌─────────────────────────────────────────┐
│        Your Application Layer           │
│  LLM: chak / OpenAI / Anthropic / ...   │
│  Tools: MCP / Functions / LangChain     │
│  Storage: ChromaDB / PostgreSQL / Redis │
└──────────────┬──────────────────────────┘
               │ Only depends on Rill for orchestration
┌──────────────▼──────────────────────────┐
│       Rill Orchestration Layer          │
│   @node + goto + parallel + State       │
└─────────────────────────────────────────┘
```

---

## Dependencies

- Python >= 3.8
- pydantic >= 2.0.0
- loguru >= 0.7.0
- chakpy >= 0.1.4

---

## License

MIT License - see the LICENSE file for details.

<div align="right"><a href="https://github.com/zhixiangxue/rill-ai"><img src="https://raw.githubusercontent.com/zhixiangxue/rill-ai/main/docs/assets/logo.png" alt="Rill Logo" width="120"></a></div>