thoughtflow 0.0.3__py3-none-any.whl → 0.0.4__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,1686 @@
+ Metadata-Version: 2.4
+ Name: thoughtflow
+ Version: 0.0.4
+ Summary: A minimal, explicit, Pythonic substrate for building reproducible, portable, testable LLM and agent systems.
+ Project-URL: Homepage, https://github.com/jrolf/thoughtflow
+ Project-URL: Documentation, https://thoughtflow.dev
+ Project-URL: Repository, https://github.com/jrolf/thoughtflow
+ Project-URL: Issues, https://github.com/jrolf/thoughtflow/issues
+ Project-URL: Changelog, https://github.com/jrolf/thoughtflow/blob/main/CHANGELOG.md
+ Author-email: "James A. Rolfsen" <james@think.dev>
+ Maintainer-email: "James A. Rolfsen" <james@think.dev>
+ License: MIT License
+
+ Copyright (c) 2025 James A. Rolfsen
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+ License-File: LICENSE
+ Keywords: agents,ai,anthropic,langchain-alternative,llm,machine-learning,openai,orchestration
+ Classifier: Development Status :: 3 - Alpha
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Operating System :: OS Independent
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Classifier: Typing :: Typed
+ Requires-Python: >=3.9
+ Provides-Extra: all
+ Requires-Dist: anthropic>=0.18; extra == 'all'
+ Requires-Dist: mkdocs-material>=9.0; extra == 'all'
+ Requires-Dist: mkdocs>=1.5; extra == 'all'
+ Requires-Dist: mkdocstrings[python]>=0.24; extra == 'all'
+ Requires-Dist: mypy>=1.0; extra == 'all'
+ Requires-Dist: ollama>=0.1; extra == 'all'
+ Requires-Dist: openai>=1.0; extra == 'all'
+ Requires-Dist: pre-commit>=3.0; extra == 'all'
+ Requires-Dist: pytest-asyncio>=0.21; extra == 'all'
+ Requires-Dist: pytest-cov>=4.0; extra == 'all'
+ Requires-Dist: pytest>=7.0; extra == 'all'
+ Requires-Dist: ruff>=0.1; extra == 'all'
+ Provides-Extra: all-providers
+ Requires-Dist: anthropic>=0.18; extra == 'all-providers'
+ Requires-Dist: ollama>=0.1; extra == 'all-providers'
+ Requires-Dist: openai>=1.0; extra == 'all-providers'
+ Provides-Extra: anthropic
+ Requires-Dist: anthropic>=0.18; extra == 'anthropic'
+ Provides-Extra: dev
+ Requires-Dist: mypy>=1.0; extra == 'dev'
+ Requires-Dist: pre-commit>=3.0; extra == 'dev'
+ Requires-Dist: pytest-asyncio>=0.21; extra == 'dev'
+ Requires-Dist: pytest-cov>=4.0; extra == 'dev'
+ Requires-Dist: pytest>=7.0; extra == 'dev'
+ Requires-Dist: ruff>=0.1; extra == 'dev'
+ Provides-Extra: docs
+ Requires-Dist: mkdocs-material>=9.0; extra == 'docs'
+ Requires-Dist: mkdocs>=1.5; extra == 'docs'
+ Requires-Dist: mkdocstrings[python]>=0.24; extra == 'docs'
+ Provides-Extra: local
+ Requires-Dist: ollama>=0.1; extra == 'local'
+ Provides-Extra: openai
+ Requires-Dist: openai>=1.0; extra == 'openai'
+ Description-Content-Type: text/markdown
83
+
84
+ <!--
85
+ ████████╗██╗ ██╗ ██████╗ ██╗ ██╗ ██████╗ ██╗ ██╗████████╗███████╗██╗ ██████╗ ██╗ ██╗
86
+ ╚══██╔══╝██║ ██║██╔═══██╗██║ ██║██╔════╝ ██║ ██║╚══██╔══╝██╔════╝██║ ██╔═══██╗██║ ██║
87
+ ██║ ███████║██║ ██║██║ ██║██║ ███╗███████║ ██║ █████╗ ██║ ██║ ██║██║ █╗ ██║
88
+ ██║ ██╔══██║██║ ██║██║ ██║██║ ██║██╔══██║ ██║ ██╔══╝ ██║ ██║ ██║██║███╗██║
89
+ ██║ ██║ ██║╚██████╔╝╚██████╔╝╚██████╔╝██║ ██║ ██║ ██║ ███████╗╚██████╔╝╚███╔███╔╝
90
+ ╚═╝ ╚═╝ ╚═╝ ╚═════╝ ╚═════╝ ╚═════╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚══════╝ ╚═════╝ ╚══╝╚══╝
91
+ -->
92
+
93
+ <p align="center">
94
+ <img src="assets/logo.png" alt="ThoughtFlow Logo" width="400">
95
+ </p>
96
+
97
+ <h1 align="center">ThoughtFlow</h1>
98
+
99
+ <p align="center">
100
+ <strong>The Pythonic Cognitive Engine for LLM Systems That Actually Make Sense</strong>
101
+ </p>
102
+
103
+ <p align="center">
104
+ <em>"We believe your code should be smarter than your framework."</em>
105
+ </p>
106
+
107
+ <!-- Primary badges: trust signals -->
108
+ <p align="center">
109
+ <a href="https://pypi.org/project/thoughtflow/"><img src="https://img.shields.io/pypi/v/thoughtflow?color=blue" alt="PyPI version"></a>
110
+ <a href="https://pypi.org/project/thoughtflow/"><img src="https://img.shields.io/pypi/pyversions/thoughtflow" alt="Python versions"></a>
111
+ <a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License: MIT"></a>
112
+ <a href="https://github.com/jrolf/thoughtflow/actions/workflows/ci.yml"><img src="https://github.com/jrolf/thoughtflow/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
113
+ <a href="https://pepy.tech/project/thoughtflow"><img src="https://static.pepy.tech/badge/thoughtflow/month" alt="Downloads/month"></a>
114
+ </p>
115
+
116
+ <!-- Secondary badges: social + quality -->
117
+ <p align="center">
118
+ <a href="https://github.com/jrolf/thoughtflow/stargazers"><img src="https://img.shields.io/github/stars/jrolf/thoughtflow?style=flat" alt="GitHub stars"></a>
119
+ <a href="https://github.com/jrolf/thoughtflow/commits/main"><img src="https://img.shields.io/github/last-commit/jrolf/thoughtflow" alt="Last commit"></a>
120
+ <a href="https://github.com/astral-sh/ruff"><img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json" alt="Ruff"></a>
121
+ <a href="http://mypy-lang.org/"><img src="https://img.shields.io/badge/type%20checked-mypy-blue" alt="mypy"></a>
122
+ </p>
123
+
124
+ <!-- Navigation -->
125
+ <p align="center">
126
+ <a href="#-installation">Install</a> •
127
+ <a href="#-quick-start">Quick Start</a> •
128
+ <a href="#-the-four-primitives">Primitives</a> •
129
+ <a href="#-the-four-primitives-in-depth">Deep Dive</a> •
130
+ <a href="#-real-world-patterns">Patterns</a> •
131
+ <a href="#-utilities">Utilities</a> •
132
+ <a href="#-philosophy-the-zen-of-thoughtflow">Philosophy</a>
133
+ </p>
134
+
135
+ ---
136
+
137
+ ## 🚀 Installation
138
+
139
+ ```bash
140
+ # Core library (ZERO dependencies - stdlib only!)
141
+ pip install thoughtflow
142
+
143
+ # With OpenAI support
144
+ pip install thoughtflow[openai]
145
+
146
+ # With Anthropic support
147
+ pip install thoughtflow[anthropic]
148
+
149
+ # With local model support (Ollama)
150
+ pip install thoughtflow[local]
151
+
152
+ # With all providers
153
+ pip install thoughtflow[all-providers]
154
+ ```
155
+
156
+ **That's it.** The core library has **zero dependencies** — it uses only Python's standard library. Provider adapters are optional extras that bring in only what you need.
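Because each provider is an optional extra, application code typically guards provider imports so the core keeps working without them. A minimal sketch of that guard pattern (the `HAS_OPENAI` flag and `pick_model_id` helper are our own illustration, not part of ThoughtFlow):

```python
# Guard an optional provider dependency so the rest of the app still runs.
# "openai" stands in for any optional extra here.
try:
    import openai  # present only after: pip install thoughtflow[openai]
    HAS_OPENAI = True
except ImportError:
    HAS_OPENAI = False

def pick_model_id() -> str:
    """Fall back to a local Ollama model when the OpenAI SDK is absent."""
    return "openai:gpt-4o" if HAS_OPENAI else "ollama:llama3.2"
```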
+
+ ---
+
+ ## ⚡ Quick Start
+
+ Here's a complete working example. Copy, paste, run:
+
+ ```python
+ import os
+ from thoughtflow import LLM, MEMORY, THOUGHT
+
+ # 1. Create an LLM instance with your provider
+ # Format: "provider:model"
+ api_key = os.environ.get("OPENAI_API_KEY", "your-api-key")
+ llm = LLM("openai:gpt-4o", key=api_key)
+
+ # 2. Create a MEMORY to store conversation state
+ # MEMORY is an event-sourced container that tracks everything
+ memory = MEMORY()
+
+ # 3. Add a user message to memory
+ # Messages can have channels for multi-platform agents
+ memory.add_msg("user", "What is the meaning of life?", channel="cli")
+
+ # 4. Create a THOUGHT - the atomic unit of cognition
+ # A THOUGHT combines: Prompt + Context + LLM + Parsing + Validation
+ thought = THOUGHT(
+     name="respond",
+     llm=llm,
+     prompt="You are a wise philosopher. Answer: {last_user_msg}",
+ )
+
+ # 5. Execute the thought
+ # This is THE pattern: memory = thought(memory)
+ print("Calling LLM...")
+ memory = thought(memory)
+
+ # 6. Get the result
+ # Results are stored in memory as variables
+ result = memory.get_var("respond_result")
+ print(f"Response: {result}")
+ # Output: "Response: The meaning of life is a profound philosophical question that..."
+
+ # 7. View the full conversation
+ print(memory.render(output_format="conversation"))
+ # Output:
+ # User: What is the meaning of life?
+ # Assistant: The meaning of life is a profound philosophical question that...
+ ```
+
+ **The universal pattern is `memory = thought(memory)`.** That's not a simplification — that's the actual API. Everything flows through MEMORY.
208
+
209
+ ---
210
+
211
+ ## 🔥 The Manifesto
212
+
213
+ > **We reject the complexity-industrial complex.**
214
+
215
+ The modern LLM ecosystem has become an abstraction swamp. Frameworks compete to add more layers, more magic, more indirection—until you need a PhD just to debug a chatbot.
216
+
217
+ **ThoughtFlow takes a different path.**
218
+
219
+ We believe:
220
+ - 🎯 **Your agent logic should fit in your head** — Four primitives, not forty classes
221
+ - 🔍 **Every state change should be visible and traceable** — Event-sourced memory with full history
222
+ - 🧪 **Testing AI systems should be as easy as testing regular code** — Deterministic replay built-in
223
+ - 📦 **Zero dependencies means zero supply chain nightmares** — Core runs on stdlib only
224
+ - ⚡ **Serverless deployment should be trivial, not heroic** — <100ms cold starts
225
+
226
+ This isn't just a library. It's a stance.
227
+
228
+ ---
229
+
230
+ ## ✅ When to Use ThoughtFlow
231
+
232
+ ThoughtFlow is the right choice when:
233
+
234
+ - **You need serverless deployment** — Lambda, Cloud Functions, Edge. Zero dependencies means instant cold starts.
235
+ - **You want to understand your entire codebase** in an afternoon — Four concepts, not forty.
236
+ - **You value explicit state over magic** — Every change is visible, traceable, and replayable.
237
+ - **You need deterministic testing** of AI workflows — Record sessions, replay them, assert on results.
238
+ - **You're building production agents**, not prototypes — Serious error handling, retry logic, validation.
239
+ - **You prefer composition over configuration** — Plain Python, not YAML or JSON configs.
240
+ - **You work across multiple LLM providers** — One interface for OpenAI, Anthropic, Groq, Gemini, Ollama, and more.
241
+
242
+ ## ❌ When NOT to Use ThoughtFlow
243
+
244
+ Be honest with yourself — ThoughtFlow isn't for everyone:
245
+
246
+ - **You need pre-built RAG pipelines out of the box** → Consider [LlamaIndex](https://github.com/run-llama/llama_index)
247
+ - **You want visual workflow builders** → Consider [Flowise](https://github.com/FlowiseAI/Flowise), [Langflow](https://github.com/langflow-ai/langflow)
248
+ - **You need complex multi-agent orchestration frameworks** → Consider [AutoGen](https://github.com/microsoft/autogen), [CrewAI](https://github.com/joaomdmoura/crewai)
249
+ - **You prefer batteries-included over minimal** → Consider [LangChain](https://github.com/langchain-ai/langchain)
250
+ - **You need built-in vector stores and retrievers** → ThoughtFlow doesn't include these (but see [ThoughtBase](#-sister-library-thoughtbase))
251
+
252
+ **ThoughtFlow is opinionated:** we trade breadth for clarity. We do fewer things, but we do them well.
253
+
254
+ ---
255
+
256
+ ## 🚀 Escape Velocity: What You Can Delete
257
+
258
+ Switching to ThoughtFlow? Here's what you can remove from your project:
259
+
260
+ ```diff
261
+ - langchain # 50+ transitive dependencies
262
+ - llama-index # Complex retrieval abstractions
263
+ - autogen # Multi-agent complexity
264
+ - crewai # Yet another agent framework
265
+ - semantic-kernel # Enterprise overhead
266
+ - haystack # Pipeline complexity
267
+ - guidance # Constrained generation complexity
268
+
269
+ - your custom retry logic # THOUGHT handles retries with repair prompts
270
+ - your custom parsing code # valid_extract handles messy LLM output
271
+ - your state management mess # MEMORY tracks everything
272
+ - your 47 adapter classes # LLM provides one interface for all providers
273
+
274
+ + thoughtflow # Zero dependencies. Everything you need.
275
+ ```
276
+
277
+ **Net result:** Your `requirements.txt` gets lighter. Your code gets clearer. Your deployments get faster. Your team spends less time debugging framework internals.
278
+
279
+ ---
280
+
281
+ ## 📊 How ThoughtFlow Compares
282
+
283
+ | Feature | ThoughtFlow | LangChain | LlamaIndex | AutoGen |
284
+ |---------|-------------|-----------|------------|---------|
285
+ | **Core Dependencies** | **0** | 50+ | 30+ | 20+ |
286
+ | **Time to Understand** | **5 minutes** | 2+ hours | 1+ hour | 1+ hour |
287
+ | **Concepts to Learn** | **4** | 50+ | 30+ | 15+ |
288
+ | **Serverless Ready** | **Trivial** | Challenging | Challenging | Challenging |
289
+ | **Cold Start (Lambda)** | **<100ms** | 2-5 seconds | 1-3 seconds | 1-2 seconds |
290
+ | **Full State Visibility** | **Everything** | Partial | Partial | Partial |
291
+ | **Deterministic Replay** | **Built-in** | DIY | DIY | DIY |
292
+ | **Multi-Provider LLM** | **Built-in** | Via adapters | Via adapters | Via adapters |
293
+
294
+ *Each framework has its strengths. LangChain offers breadth, LlamaIndex excels at RAG, AutoGen shines at multi-agent. ThoughtFlow optimizes for simplicity, transparency, and serverless deployment.*
295
+
296
+ ---
297
+
298
+ ## ⚡ Performance Characteristics
299
+
300
+ | Metric | ThoughtFlow | Why It Matters |
301
+ |--------|-------------|----------------|
302
+ | **Import Time** | ~15ms | Zero dependencies = instant module load |
303
+ | **Memory Overhead** | ~2MB | Minimal runtime footprint |
304
+ | **Call Overhead** | <1ms | Direct HTTP calls, no middleware stack |
305
+ | **Cold Start (Lambda)** | <100ms | Critical for serverless economics |
306
+ | **Event Throughput** | 100k+ events/sec | Event-sourced architecture scales |
307
+
308
+ *These are architectural characteristics, not formal benchmarks. Your mileage may vary based on workload.*
309
+
310
+ ---
311
+
312
+ ## 🧩 The Four Primitives
313
+
314
+ ThoughtFlow gives you **four concepts**. Master these, and you've mastered the framework.
315
+
316
+ ```
317
+ ┌─────────────────────────────────────────────────────────────────────────┐
318
+ │ │
319
+ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
320
+ │ │ LLM │ ──▶ │ THOUGHT │ ──▶ │ MEMORY │ ◀── │ ACTION │ │
321
+ │ └─────────┘ └─────────┘ └─────────┘ └─────────┘ │
322
+ │ │ │ │ │ │
323
+ │ Any model Cognition State External │
324
+ │ Any provider unit container operations │
325
+ │ │
326
+ └─────────────────────────────────────────────────────────────────────────┘
327
+ ```
328
+
329
+ | Primitive | What It Does | The Pattern |
330
+ |-----------|--------------|-------------|
331
+ | **LLM** | Unified interface to call any language model | `response = llm.call(messages)` |
332
+ | **MEMORY** | Event-sourced state container for everything | `memory.add_msg("user", "Hello!")` |
333
+ | **THOUGHT** | Atomic unit of cognition with retry/parsing | `memory = thought(memory)` |
334
+ | **ACTION** | External operations with consistent logging | `memory = action(memory, **kwargs)` |
335
+
336
+ That's it. Four concepts. No 47-page tutorial to understand the basics.
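The patterns in the table share one shape: a primitive is a callable that takes state and returns new state, so a pipeline is just sequential function application. A framework-free sketch of that shape (the `Thought` class below is our own illustration, not ThoughtFlow's actual implementation):

```python
# Minimal state-threading pattern: each step maps state -> state,
# so composing steps is a plain for-loop, easy to test and to replay.
from typing import Callable

State = dict

class Thought:
    def __init__(self, name: str, fn: Callable[[State], str]):
        self.name, self.fn = name, fn

    def __call__(self, state: State) -> State:
        new = dict(state)                       # copy rather than mutate
        new[f"{self.name}_result"] = self.fn(state)
        return new

upper = Thought("upper", lambda s: s["text"].upper())
count = Thought("count", lambda s: str(len(s["text"])))

state = {"text": "four primitives"}
for step in (upper, count):
    state = step(state)                         # the universal pattern
# state now holds "upper_result" and "count_result" alongside "text"
```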
+
+ ---
+
+ ## 🔌 Supported Providers
+
+ ThoughtFlow works with **any LLM provider** through a unified interface:
+
+ | Provider | Model ID Format | Example | Notes |
+ |----------|-----------------|---------|-------|
+ | **OpenAI** | `openai:model` | `openai:gpt-4o` | GPT-4, GPT-4o, GPT-3.5, etc. |
+ | **Anthropic** | `anthropic:model` | `anthropic:claude-3-5-sonnet-20241022` | Claude 3, Claude 3.5, etc. |
+ | **Groq** | `groq:model` | `groq:llama-3.1-70b-versatile` | Fast inference for open models |
+ | **Google Gemini** | `gemini:model` | `gemini:gemini-1.5-pro` | Gemini Pro, Flash, etc. |
+ | **OpenRouter** | `openrouter:model` | `openrouter:anthropic/claude-3-opus` | Access any model via OpenRouter |
+ | **Ollama** | `ollama:model` | `ollama:llama3.2` | Local models, no API key needed |
+
+ **Switching providers is a one-line change:**
+
+ ```python
+ # From OpenAI...
+ llm = LLM("openai:gpt-4o", key=openai_key)
+
+ # ...to Anthropic
+ llm = LLM("anthropic:claude-3-5-sonnet-20241022", key=anthropic_key)
+
+ # ...to local (no key needed!)
+ llm = LLM("ollama:llama3.2")
+
+ # Your THOUGHT and MEMORY code stays exactly the same
+ ```
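A `provider:model` ID needs only one split to route a call, even when the model name itself contains separators (as in the OpenRouter row). A sketch of such parsing (`parse_model_id` is our own helper, not necessarily how `LLM` implements it):

```python
def parse_model_id(model_id: str) -> tuple:
    """Split 'provider:model' at the FIRST colon, so model names that
    contain '/' or further ':' characters pass through intact."""
    provider, _, model = model_id.partition(":")
    if not provider or not model:
        raise ValueError(f"expected 'provider:model', got {model_id!r}")
    return provider, model

print(parse_model_id("openai:gpt-4o"))
# ('openai', 'gpt-4o')
print(parse_model_id("openrouter:anthropic/claude-3-opus"))
# ('openrouter', 'anthropic/claude-3-opus')
```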
+
+ ---
+
+ ## 🔮 The Four Primitives In Depth
+
+ ### 1. `LLM` — The Universal Model Interface
+
+ The `LLM` class provides a unified interface for calling any language model. One interface, any provider, zero provider-specific code in your application.
+
+ ```python
+ from thoughtflow import LLM
+
+ # ═══════════════════════════════════════════════════════════════════════════
+ # CREATING LLM INSTANCES
+ # ═══════════════════════════════════════════════════════════════════════════
+
+ # OpenAI
+ llm = LLM("openai:gpt-4o", key="sk-...")
+
+ # Anthropic
+ llm = LLM("anthropic:claude-3-5-sonnet-20241022", key="sk-ant-...")
+
+ # Groq (blazing fast inference)
+ llm = LLM("groq:llama-3.1-70b-versatile", key="gsk_...")
+
+ # Google Gemini
+ llm = LLM("gemini:gemini-1.5-pro", key="...")
+
+ # OpenRouter (access to any model)
+ llm = LLM("openrouter:anthropic/claude-3-opus", key="sk-or-...")
+
+ # Ollama (local models - no API key needed!)
+ llm = LLM("ollama:llama3.2")
+
+ # ═══════════════════════════════════════════════════════════════════════════
+ # MAKING CALLS
+ # ═══════════════════════════════════════════════════════════════════════════
+
+ # Standard chat format - works with ALL providers
+ response = llm.call([
+     {"role": "system", "content": "You are a helpful assistant."},
+     {"role": "user", "content": "What's the capital of France?"}
+ ])
+ # response: ["The capital of France is Paris."]
+
+ # With parameters
+ response = llm.call(
+     [{"role": "user", "content": "Write a haiku about Python"}],
+     params={"temperature": 0.7, "max_tokens": 100}
+ )
+
+ # ═══════════════════════════════════════════════════════════════════════════
+ # MESSAGE NORMALIZATION
+ # ═══════════════════════════════════════════════════════════════════════════
+
+ # LLM automatically normalizes messages - all of these work:
+
+ # Standard format
+ llm.call([{"role": "user", "content": "Hello"}])
+
+ # Just content (assumes role="user")
+ llm.call([{"content": "Hello"}])
+
+ # Plain strings (become user messages)
+ llm.call(["Hello", "How are you?"])
+ ```
+
+ **Key features:**
+ - **Automatic message normalization** — Pass dicts, strings, or mixed formats
+ - **Consistent response format** — Always returns a list of response strings
+ - **Zero provider-specific code** — Switch providers by changing one string
+ - **Direct HTTP calls** — No middleware, no overhead, no surprises
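The normalization rules above are small enough to state as code. A self-contained sketch of the documented behavior (our own illustration, not the library's source):

```python
def normalize_msg(msg) -> dict:
    """Apply the documented rules: plain strings become user messages,
    and dicts without a role default to role='user'."""
    if isinstance(msg, str):
        return {"role": "user", "content": msg}
    if isinstance(msg, dict):
        return {"role": msg.get("role", "user"), "content": msg["content"]}
    raise TypeError(f"unsupported message type: {type(msg).__name__}")

# All three input styles converge on the standard chat format:
msgs = ["Hello", {"content": "How are you?"}, {"role": "system", "content": "Be brief."}]
normalized = [normalize_msg(m) for m in msgs]
```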
439
+
440
+ ---
441
+
442
+ ### 2. `MEMORY` — Event-Sourced State
443
+
444
+ MEMORY is an event-sourced container that tracks **everything**: messages, logs, reflections, and variables with full history. Every change is an event with a sortable ID (alphabetical = chronological).
445
+
446
+ ```python
447
+ from thoughtflow import MEMORY
448
+
449
+ memory = MEMORY()
450
+
451
+ # ═══════════════════════════════════════════════════════════════════════════
452
+ # MESSAGES — with channel tracking for omni-channel agents
453
+ # ═══════════════════════════════════════════════════════════════════════════
454
+
455
+ # Add messages with channel tracking (webapp, ios, telegram, slack, etc.)
456
+ memory.add_msg("user", "Hello from the web!", channel="webapp")
457
+ memory.add_msg("assistant", "Hi there! How can I help?", channel="webapp")
458
+ memory.add_msg("user", "Following up on Telegram", channel="telegram")
459
+ memory.add_msg("user", "Also checking on mobile", channel="ios")
460
+
461
+ # Query messages - multiple ways
462
+ all_msgs = memory.get_msgs() # All messages
463
+ user_msgs = memory.get_msgs(include=["user"]) # Only user messages
464
+ web_msgs = memory.get_msgs(channel="webapp") # Only webapp channel
465
+ recent = memory.get_msgs(limit=5) # Last 5 messages
466
+
467
+ # Quick access to most recent
468
+ memory.last_user_msg() # "Also checking on mobile"
469
+ memory.last_asst_msg() # "Hi there! How can I help?"
470
+ memory.last_sys_msg() # Last system message (if any)
471
+
472
+ # ═══════════════════════════════════════════════════════════════════════════
473
+ # LOGS & REFLECTIONS — internal agent reasoning
474
+ # ═══════════════════════════════════════════════════════════════════════════
475
+
476
+ # Logs are for debugging and audit trails
477
+ memory.add_log("User initiated conversation from webapp")
478
+ memory.add_log("Processing user request...")
479
+ memory.add_log("Response generated successfully")
480
+
481
+ # Reflections are for agent's internal reasoning
482
+ memory.add_ref("User seems interested in weather patterns")
483
+ memory.add_ref("Should ask clarifying questions about location")
484
+
485
+ # Retrieve logs and reflections
486
+ memory.get_logs() # All log entries
487
+ memory.get_refs() # All reflections
488
+ memory.last_log_msg() # Most recent log
489
+
490
+ # ═══════════════════════════════════════════════════════════════════════════
491
+ # VARIABLES — with FULL HISTORY tracking
492
+ # ═══════════════════════════════════════════════════════════════════════════
493
+
494
+ # Set variables with optional descriptions
495
+ memory.set_var("session_id", "abc123", desc="Current session identifier")
496
+ memory.set_var("user_name", "Alice", desc="User's display name")
497
+ memory.set_var("request_count", 0)
498
+
499
+ # Update variables - this APPENDS to history, doesn't overwrite!
500
+ memory.set_var("request_count", 1)
501
+ memory.set_var("request_count", 2)
502
+ memory.set_var("request_count", 3)
503
+
504
+ # Get current value
505
+ memory.get_var("request_count") # Returns: 3
506
+ memory.get_var("user_name") # Returns: "Alice"
507
+ memory.get_var("nonexistent") # Returns: None
508
+
509
+ # Get FULL HISTORY - see every change with timestamps
510
+ memory.get_var_history("request_count")
511
+ # Returns: [
512
+ # ["stamp1...", 0],
513
+ # ["stamp2...", 1],
514
+ # ["stamp3...", 2],
515
+ # ["stamp4...", 3]
516
+ # ]
517
+
518
+ # Get all current variables
519
+ memory.get_all_vars()
520
+ # Returns: {"session_id": "abc123", "user_name": "Alice", "request_count": 3}
+
+ # Get variable description
+ memory.get_var_desc("session_id")  # "Current session identifier"
+
+ # ═══════════════════════════════════════════════════════════════════════════
+ # VARIABLE DELETION — tombstone pattern preserves history
+ # ═══════════════════════════════════════════════════════════════════════════
+
+ # Deletion is a tombstone, not destruction
+ memory.del_var("session_id")
+
+ # After deletion
+ memory.get_var("session_id")         # Returns: None
+ memory.is_var_deleted("session_id")  # Returns: True
+
+ # But history is preserved!
+ memory.get_var_history("session_id")
+ # Returns: [["stamp1...", "abc123"], ["stamp2...", <DELETED>]]
+
+ # Can re-set after deletion
+ memory.set_var("session_id", "xyz789")
+ memory.get_var("session_id")  # Returns: "xyz789"
+
+ # ═══════════════════════════════════════════════════════════════════════════
+ # SERIALIZATION — for persistence and cloud sync
+ # ═══════════════════════════════════════════════════════════════════════════
+
+ # Save to file (pickle format)
+ memory.save("state.pkl")
+ memory.save("state.pkl.gz", compressed=True)  # With compression
+
+ # Load from file
+ memory2 = MEMORY()
+ memory2.load("state.pkl")
+
+ # Export to JSON (portable, human-readable)
+ memory.to_json("state.json")
+ json_string = memory.to_json()  # Returns a string when no filename is given
+
+ # Load from JSON
+ memory3 = MEMORY.from_json("state.json")
+ memory4 = MEMORY.from_json(json_string)
+
+ # Export snapshot for cloud sync
+ snapshot = memory.snapshot()
+ # snapshot = {"id": "...", "events": {...}, "objects": {...}}
+
+ # Rehydrate from events (for distributed systems)
+ memory5 = MEMORY.from_events(snapshot["events"].values())
+
+ # Deep copy
+ memory_copy = memory.copy()
+
+ # ═══════════════════════════════════════════════════════════════════════════
+ # RENDERING — for debugging, logging, and LLM context
+ # ═══════════════════════════════════════════════════════════════════════════
+
+ # Render as conversation (great for debugging)
+ print(memory.render(output_format="conversation"))
+ # Output:
+ # User: Hello from the web!
+ # Assistant: Hi there! How can I help?
+ # User: Following up on Telegram
+ # ...
+
+ # Render as JSON
+ print(memory.render(output_format="json", include=("msgs", "logs")))
+
+ # Render as plain text
+ print(memory.render(output_format="plain"))
+
+ # Filter by role, channel, content
+ print(memory.render(
+     role_filter=["user", "assistant"],
+     channel_filter="webapp",
+     max_total_length=2000
+ ))
+
+ # ═══════════════════════════════════════════════════════════════════════════
+ # LARGE OBJECT HANDLING — automatic compression
+ # ═══════════════════════════════════════════════════════════════════════════
+
+ # Large values (>10KB by default) are automatically compressed
+ large_data = "x" * 50000  # 50KB of data
+ memory.set_var("big_data", large_data)
+
+ # Retrieved transparently
+ memory.get_var("big_data")  # Returns the full 50KB string
+
+ # Or store objects explicitly
+ large_binary_data = b"\x00" * 200_000  # e.g. the raw bytes of a file
+ stamp = memory.set_obj(large_binary_data, name="attachment", desc="PDF file")
+ memory.get_var("attachment")  # Returns the decompressed data
+ ```
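The compress-on-write, decompress-on-read roundtrip behind large-value handling can be pictured with stdlib `zlib`. A minimal sketch (the 10KB threshold mirrors the documented default; the codec and helper names are our assumptions, not ThoughtFlow's internals):

```python
import zlib

THRESHOLD = 10_000  # bytes; mirrors the documented ~10KB default

def store(value: str):
    """Compress only when the encoded value crosses the threshold."""
    raw = value.encode("utf-8")
    if len(raw) > THRESHOLD:
        return True, zlib.compress(raw)   # (compressed?, blob)
    return False, raw

def load(compressed: bool, blob: bytes) -> str:
    """Reverse of store(): callers never see the compression."""
    return (zlib.decompress(blob) if compressed else blob).decode("utf-8")

flag, blob = store("x" * 50_000)   # repetitive data compresses dramatically
small_flag, small_blob = store("hi")  # small values are stored as-is
```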
+
+ **Key features:**
+ - **Event-sourced** — Every change is an event with a sortable ID
+ - **Full variable history** — See every change with timestamps
+ - **Channel tracking** — Build omni-channel agents (web, mobile, Telegram, etc.)
+ - **Tombstone deletion** — History is never lost
+ - **Auto-compression** — Large values handled automatically
+ - **Multiple export formats** — JSON, Pickle, snapshots for cloud sync
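The first four bullets reduce to one small structure: an append-only list of (sortable stamp, value) pairs per variable, where deletion appends a tombstone instead of erasing. A compact sketch of that design (`VarStore` and `stamp` are our own names, not ThoughtFlow's API):

```python
import itertools

_DELETED = object()           # tombstone marker
_counter = itertools.count()  # monotonic source for stamps

def stamp() -> str:
    # Zero-padded so lexicographic order equals chronological order,
    # the same "alphabetical = chronological" property described above.
    return f"{next(_counter):012d}"

class VarStore:
    def __init__(self):
        self.history = {}  # name -> [(stamp, value), ...], append-only

    def set(self, name, value):
        self.history.setdefault(name, []).append((stamp(), value))

    def delete(self, name):
        self.set(name, _DELETED)  # tombstone; prior history survives

    def get(self, name):
        hist = self.history.get(name)
        if not hist or hist[-1][1] is _DELETED:
            return None
        return hist[-1][1]

s = VarStore()
s.set("count", 0); s.set("count", 1); s.delete("count")
# s.get("count") is None, yet all 3 events remain in s.history["count"]
```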
+
+ ---
+
+ ### 3. `THOUGHT` — The Atomic Unit of Cognition
+
+ A THOUGHT is the discrete unit of reasoning: **Prompt + Context + LLM + Parsing + Validation**. It's the building block for all cognitive operations.
+
+ ```python
+ from thoughtflow import LLM, MEMORY, THOUGHT
+
+ llm = LLM("openai:gpt-4o", key="...")
+ memory = MEMORY()
+
+ # ═══════════════════════════════════════════════════════════════════════════
+ # BASIC THOUGHT — the simplest form
+ # ═══════════════════════════════════════════════════════════════════════════
+
+ thought = THOUGHT(
+     name="respond",
+     llm=llm,
+     prompt="You are a helpful assistant. Answer: {last_user_msg}",
+ )
+
+ memory.add_msg("user", "What's 2 + 2?")
+ memory = thought(memory)  # THE UNIVERSAL PATTERN
+
+ result = memory.get_var("respond_result")
+ print(result)  # "2 + 2 equals 4."
+
+ # ═══════════════════════════════════════════════════════════════════════════
+ # WITH PARSING — extract structured data from messy LLM output
+ # ═══════════════════════════════════════════════════════════════════════════
+
+ thought = THOUGHT(
+     name="extract_user_info",
+     llm=llm,
+     prompt="Extract user information from this text: {text}",
+     parsing_rules={
+         "kind": "python",
+         "format": {
+             "name": "",    # Required string
+             "age": 0,      # Required int
+             "email?": "",  # Optional (note the ?)
+             "skills": [],  # Required list
+         }
+     },
+ )
+
+ memory.set_var("text", "My name is Alice, I'm 28, and I know Python and ML.")
+ memory = thought(memory)
+ info = memory.get_var("extract_user_info_result")
+ # info = {"name": "Alice", "age": 28, "skills": ["Python", "ML"]}
674
+
675
+ # ═══════════════════════════════════════════════════════════════════════════
676
+ # WITH VALIDATION — ensure output meets requirements
677
+ # ═══════════════════════════════════════════════════════════════════════════
678
+
679
+ thought = THOUGHT(
680
+ name="generate_ideas",
681
+ llm=llm,
682
+ prompt="Generate exactly 5 creative ideas for: {topic}",
683
+ parser="json",
684
+ validator="list_min_len:5", # Must have at least 5 items
685
+ max_retries=3, # Retry up to 3 times if validation fails
686
+ retry_delay=0.5, # Wait 0.5s between retries
687
+ )
688
+
689
+ # Built-in validators:
690
+ # - "any" — Accept anything
691
+ # - "has_keys:key1,key2" — Dict must have these keys
692
+ # - "list_min_len:N" — List must have at least N items
693
+ # - Custom callable — Your own validation function
694
+
695
+ # ═══════════════════════════════════════════════════════════════════════════
696
+ # WITH CUSTOM VALIDATION
697
+ # ═══════════════════════════════════════════════════════════════════════════
698
+
699
+ def validate_email_list(result):
700
+ """Custom validator: all items must be valid emails."""
701
+ if not isinstance(result, list):
702
+ return False, "Expected a list"
703
+ for item in result:
704
+ if "@" not in str(item):
705
+ return False, f"Invalid email: {item}"
706
+ return True, ""
707
+
708
+ thought = THOUGHT(
709
+ name="extract_emails",
710
+ llm=llm,
711
+ prompt="Extract all email addresses from: {text}",
712
+ parser="list",
713
+ validation=validate_email_list,
714
+ max_retries=2,
715
+ )
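# ---------------------------------------------------------------------------
# SKETCH: how a (bool, message) validator and max_retries fit together.
# Illustrative only -- this loop is NOT ThoughtFlow's internal code, and
# run_with_validation / _email_validator / _fake_llm are hypothetical names.
# ---------------------------------------------------------------------------
def run_with_validation(call, validator, max_retries=2):
    feedback = ""
    for _ in range(max_retries + 1):
        result = call(feedback)   # the failure message rides along as a repair hint
        ok, msg = validator(result)
        if ok:
            return result
        feedback = msg            # e.g. "Invalid email: bob"
    raise ValueError(f"validation failed: {feedback}")

def _email_validator(result):
    if not isinstance(result, list):
        return False, "Expected a list"
    bad = [x for x in result if "@" not in str(x)]
    return (False, f"Invalid email: {bad[0]}") if bad else (True, "")

# Stand-in for the LLM: returns a bad list first, a fixed one after feedback
_fake_llm = lambda feedback: ["a@x.com", "bob"] if not feedback else ["a@x.com", "bob@x.com"]
run_with_validation(_fake_llm, _email_validator)
# -> ["a@x.com", "bob@x.com"] after one repair retry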
+
+ # ═══════════════════════════════════════════════════════════════════════════
+ # OPERATIONS — THOUGHT isn't just for LLM calls
+ # ═══════════════════════════════════════════════════════════════════════════
+
+ # MEMORY QUERY — retrieve data without calling the LLM
+ query_thought = THOUGHT(
+     name="get_user_context",
+     operation="memory_query",
+     required_vars=["user_name", "session_id"],
+     optional_vars=["preferences"],
+ )
+ memory = query_thought(memory)
+ context = memory.get_var("get_user_context_result")
+ # context = {"user_name": "Alice", "session_id": "abc123"}
+
+ # VARIABLE SET — set multiple variables at once
+ init_thought = THOUGHT(
+     name="init_session",
+     operation="variable_set",
+     prompt={
+         "session_active": True,
+         "start_time": None,
+         "message_count": 0
+     }
+ )
+ memory = init_thought(memory)
+ # Sets all three variables in memory
+
+ # CONDITIONAL — branch logic based on memory state
+ branch_thought = THOUGHT(
+     name="check_threshold",
+     operation="conditional",
+     condition=lambda m, ctx: ctx.get("score", 0) > 80,
+     if_true="high_score_path",
+     if_false="low_score_path"
+ )
+ memory.set_var("score", 95)
+ memory = branch_thought(memory)
+ result = memory.get_var("check_threshold_result")  # "high_score_path"
+
+ # ═══════════════════════════════════════════════════════════════════════════
+ # PRE/POST HOOKS — custom processing
+ # ═══════════════════════════════════════════════════════════════════════════
+
+ def pre_process(thought, memory, vars, **kwargs):
+     """Called before execution."""
+     print(f"About to execute: {thought.name}")
+     # Can modify vars before execution
+
+ def post_process(thought, memory, result, error):
+     """Called after execution."""
+     if error:
+         print(f"Error in {thought.name}: {error}")
+     else:
+         print(f"Success: {thought.name} -> {result}")
+
+ thought = THOUGHT(
+     name="monitored_thought",
+     llm=llm,
+     prompt="...",
+     pre_hook=pre_process,
+     post_hook=post_process,
+ )
+
+ # ═══════════════════════════════════════════════════════════════════════════
782
+ # SERIALIZATION — save and restore thoughts
783
+ # ═══════════════════════════════════════════════════════════════════════════
784
+
785
+ # Export to dict (for storage/transmission)
786
+ thought_data = thought.to_dict()
787
+
788
+ # Reconstruct from dict (LLM must be provided separately)
789
+ thought_copy = THOUGHT.from_dict(thought_data, llm=llm)
790
+
791
+ # Copy a thought
792
+ thought_clone = thought.copy()
793
+
794
+ # ═══════════════════════════════════════════════════════════════════════════
795
+ # INTROSPECTION — examine execution history
796
+ # ═══════════════════════════════════════════════════════════════════════════
797
+
798
+ # After executing a thought multiple times
799
+ thought.execution_history
800
+ # [
801
+ # {"stamp": "...", "duration_ms": 234.5, "success": True, ...},
802
+ # {"stamp": "...", "duration_ms": 198.2, "success": True, ...},
803
+ # ]
804
+
805
+ thought.last_result # Most recent result
806
+ thought.last_error # Most recent error (if any)
807
+ thought.last_prompt # The prompt that was sent
808
+ thought.last_response # Raw LLM response
809
+ ```
810
+
811
+ **Key features:**
812
+ - **Callable interface** — `memory = thought(memory)` is the entire API
813
+ - **Automatic retry** — With repair prompts that explain what went wrong
814
+ - **Schema-based parsing** — Via `valid_extract` for bulletproof extraction
815
+ - **Multiple validators** — Built-in or custom validation functions
816
+ - **Four operations** — `llm_call`, `memory_query`, `variable_set`, `conditional`
817
+ - **Pre/post hooks** — Custom processing before and after execution
818
+ - **Full serialization** — Save, restore, and copy thoughts
819
+
820
+ ---
821
+
822
+ ### 4. `ACTION` — External Operations
823
+
824
+ ACTION wraps external operations (API calls, file I/O, database queries) with consistent logging and error handling:
825
+
826
+ ```python
827
+ from thoughtflow import ACTION, MEMORY
828
+
829
+ # ═══════════════════════════════════════════════════════════════════════════
830
+ # DEFINING AN ACTION
831
+ # ═══════════════════════════════════════════════════════════════════════════
832
+
833
+ def search_web(memory, query, max_results=3):
834
+ """
835
+ Search the web and return results.
836
+
837
+ Args:
838
+ memory: MEMORY object (always first argument)
839
+ query: Search query string
840
+ max_results: Maximum results to return
841
+
842
+ Returns:
843
+ dict with search results
844
+ """
845
+ # Your implementation here
846
+ results = web_api.search(query, limit=max_results)
847
+ return {"status": "success", "hits": results, "query": query}
848
+
849
+ search_action = ACTION(
850
+ name="web_search",
851
+ fn=search_web,
852
+ config={"max_results": 5}, # Default config
853
+ description="Searches the web for information"
854
+ )
855
+
856
+ # ═══════════════════════════════════════════════════════════════════════════
857
+ # EXECUTING AN ACTION
858
+ # ═══════════════════════════════════════════════════════════════════════════
859
+
860
+ memory = MEMORY()
861
+
862
+ # Execute with default config
863
+ memory = search_action(memory, query="thoughtflow python library")
864
+
865
+ # Execute with override
866
+ memory = search_action(memory, query="python agents", max_results=10)
867
+
868
+ # Results are stored automatically
869
+ result = memory.get_var("web_search_result")
870
+ # result = {"status": "success", "hits": [...], "query": "..."}
871
+
872
+ # ═══════════════════════════════════════════════════════════════════════════
873
+ # ERROR HANDLING — errors don't interrupt your workflow
874
+ # ═══════════════════════════════════════════════════════════════════════════
875
+
876
+ def risky_operation(memory, url):
877
+ """An operation that might fail."""
878
+ response = requests.get(url, timeout=5)
879
+ response.raise_for_status()
880
+ return response.json()
881
+
882
+ fetch_action = ACTION(name="fetch_data", fn=risky_operation)
883
+
884
+ # If the action fails, error info is stored (not raised)
885
+ memory = fetch_action(memory, url="https://example.com/api")
886
+
887
+ result = memory.get_var("fetch_data_result")
888
+ if "error" in result:
889
+ print(f"Action failed: {result['error']}")
890
+ else:
891
+ print(f"Action succeeded: {result}")
892
+
893
+ # ═══════════════════════════════════════════════════════════════════════════
894
+ # INTROSPECTION — examine execution history
895
+ # ═══════════════════════════════════════════════════════════════════════════
896
+
897
+ # After executing an action multiple times
898
+ search_action.execution_count # How many times called
899
+ search_action.was_successful() # Did last call succeed?
900
+ search_action.last_result # Most recent result
901
+ search_action.last_error # Most recent error (if any)
902
+
903
+ # Full execution history with timing
904
+ search_action.execution_history
905
+ # [
906
+ # {"stamp": "...", "duration_ms": 145.2, "success": True, "error": None},
907
+ # {"stamp": "...", "duration_ms": 203.1, "success": False, "error": "Timeout"},
908
+ # ]
909
+
910
+ # Get timing for last call
911
+ last_call = search_action.execution_history[-1]
912
+ print(f"Last call took {last_call['duration_ms']:.1f}ms")
913
+
914
+ # ═══════════════════════════════════════════════════════════════════════════
915
+ # RESET AND COPY
916
+ # ═══════════════════════════════════════════════════════════════════════════
917
+
918
+ # Reset stats (useful for testing)
919
+ search_action.reset_stats()
920
+
921
+ # Copy an action (shares function, copies config)
922
+ search_action_copy = search_action.copy()
923
+
924
+ # ═══════════════════════════════════════════════════════════════════════════
925
+ # SERIALIZATION
926
+ # ═══════════════════════════════════════════════════════════════════════════
927
+
928
+ # Export to dict
929
+ action_data = search_action.to_dict()
930
+
931
+ # Reconstruct (need function registry)
932
+ fn_registry = {"search_web": search_web}
933
+ action_copy = ACTION.from_dict(action_data, fn_registry)
934
+ ```
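The error-containment pattern above is easy to approximate in plain Python. The sketch below is illustrative only (not ThoughtFlow's actual internals): a wrapper runs the function, traps any exception, and records the outcome and timing in a result dict instead of raising.

```python
import time
from typing import Any, Callable, Dict


def contained_call(fn: Callable[..., Any], *args, **kwargs) -> Dict[str, Any]:
    """Run fn, trapping exceptions so the caller's workflow can continue.

    Returns a dict holding either the result or the error message, plus
    wall-clock duration -- a sketch of the `{name}_result` convention,
    not ThoughtFlow's exact record format.
    """
    start = time.perf_counter()
    try:
        value = fn(*args, **kwargs)
        out = {"result": value, "error": None}
    except Exception as exc:  # contain, don't raise
        out = {"result": None, "error": str(exc)}
    out["duration_ms"] = (time.perf_counter() - start) * 1000
    return out


ok = contained_call(lambda: 2 + 2)
bad = contained_call(lambda: 1 / 0)
```

The key design point mirrored here is that a failing call produces data (an error entry) rather than control flow (an exception), so a multi-step workflow can inspect the result and decide what to do next.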
935
+
936
+ **Key features:**
937
+ - **Callable interface** — `memory = action(memory, **kwargs)`
938
+ - **Automatic result storage** — Results stored in `{name}_result` variable
939
+ - **Error containment** — Errors are logged, not raised (workflow continues)
940
+ - **Full execution history** — Timing, success/failure, error details
941
+ - **Configurable defaults** — Set defaults, override per-call
942
+ - **Serialization support** — Save and restore actions
943
+
944
+ ---
945
+
946
+ ## 🔧 Utilities
947
+
948
+ ### `valid_extract` — Robust LLM Output Parsing
949
+
950
+ LLMs are messy. They add prose, code fences, markdown, and formatting you didn't ask for. `valid_extract` handles all of it:
951
+
952
+ **Basic extraction from messy output:**
953
+
954
+ ```python
955
+ from thoughtflow import valid_extract, ValidExtractError
956
+
957
+ # Messy LLM output with prose and formatting
958
+ llm_output = '''
959
+ Sure! Here is the data you asked for:
960
+ {"name": "Alice", "age": 28, "skills": ["Python", "ML"]}
961
+ Let me know if you need anything else!
962
+ '''
963
+
964
+ # Define extraction rules with schema
965
+ rules = {
966
+ "kind": "python",
967
+ "format": {
968
+ "name": "", # Required string
969
+ "age": 0, # Required int
970
+ "skills": [], # Required list
971
+ }
972
+ }
973
+
974
+ result = valid_extract(llm_output, rules)
975
+ # result = {'name': 'Alice', 'age': 28, 'skills': ['Python', 'ML']}
976
+ ```
977
+
978
+ **Optional keys (marked with `?`):**
979
+
980
+ ```python
981
+ rules = {
982
+ "kind": "python",
983
+ "format": {
984
+ "name": "", # Required
985
+ "email": "", # Required
986
+ "phone?": "", # Optional (note the ?)
987
+ "address?": "", # Optional
988
+ }
989
+ }
990
+
991
+ llm_output = "{'name': 'Bob', 'email': 'bob@example.com'}"
992
+ result = valid_extract(llm_output, rules)
993
+ # result = {'name': 'Bob', 'email': 'bob@example.com'}
994
+ # No error even though phone and address are missing
995
+ ```
996
+
997
+ **Nested structures:**
998
+
999
+ ```python
1000
+ rules = {
1001
+ "kind": "python",
1002
+ "format": {
1003
+ "user": {
1004
+ "id": 0,
1005
+ "profile": {
1006
+ "name": "",
1007
+ "settings": {}
1008
+ }
1009
+ },
1010
+ "metadata": {}
1011
+ }
1012
+ }
1013
+ ```
1014
+
1015
+ **List element validation:**
1016
+
1017
+ ```python
1018
+ # [schema] means every element must match schema
1019
+ rules = {
1020
+ "kind": "python",
1021
+ "format": [{
1022
+ "id": 0,
1023
+ "name": "",
1024
+ "done": True
1025
+ }]
1026
+ }
1027
+
1028
+ llm_output = """
1029
+ [
1030
+ {'id': 1, 'name': 'Task A', 'done': False},
1031
+ {'id': 2, 'name': 'Task B', 'done': True},
1032
+ ]
1033
+ """
1034
+ result = valid_extract(llm_output, rules)
1035
+ # Each item validated against the schema
1036
+ ```
1037
+
1038
+ **JSON parsing:**
1039
+
1040
+ ```python
1041
+ rules = {
1042
+ "kind": "json", # Parse as JSON instead of Python
1043
+ "format": {"status": "", "data": []}
1044
+ }
1045
+
1046
+ llm_output = '{"status": "ok", "data": [1, 2, 3]}'
1047
+ result = valid_extract(llm_output, rules)
1048
+ ```
1049
+
1050
+ **Error handling:**
1051
+
1052
+ ```python
1053
+ try:
1054
+ result = valid_extract("no valid data here", rules)
1055
+ except ValidExtractError as e:
1056
+ print(f"Extraction failed: {e}")
1057
+ ```
1058
+
1059
+ **Schema type mapping:**
1060
+ - `""` or `str` → string
1061
+ - `0` or `int` → integer
1062
+ - `0.0` or `float` → float
1063
+ - `True` or `bool` → boolean
1064
+ - `None` → NoneType
1065
+ - `[]` → list (any contents)
1066
+ - `[schema]` → list of items matching schema
1067
+ - `{}` → dict (any contents)
1068
+ - `{"k": schema}` → dict with required key "k"
1069
+ - `{"k?": schema}` → dict with optional key "k"
1070
+
1071
+ ---
1072
+
1073
+ ### `EventStamp` — Deterministic IDs
1074
+
1075
+ ```python
1076
+ from thoughtflow import event_stamp, hashify, EventStamp
1077
+
1078
+ # Generate unique, sortable event ID
1079
+ # Alphabetical order = chronological order
1080
+ stamp = event_stamp() # "A1B2C3D4E5F6G7H8"
1081
+
1082
+ # Generate with document hash (deterministic component)
1083
+ stamp = event_stamp({"user": "alice", "action": "login"})
1084
+
1085
+ # Decode timestamp from stamp
1086
+ unix_time = EventStamp.decode_time(stamp)
1087
+
1088
+ # Generate deterministic hash
1089
+ hash_id = hashify("some input string") # 32 chars by default
1090
+ hash_id = hashify("some input", length=16) # Custom length
1091
+ # Same input always produces same hash
1092
+ ```
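The two properties claimed above — lexicographically sortable IDs and deterministic hashes — can be illustrated in plain Python. This is a hedged sketch of the ideas, not ThoughtFlow's actual encoding: a zero-padded timestamp prefix makes alphabetical order track chronological order, and SHA-256 gives a stable hash.

```python
import hashlib
import time


def toy_stamp(now=None):
    """Sortable ID: fixed-width millisecond timestamp prefix + short suffix.

    Because the prefix is zero-padded to a fixed width, lexicographic
    order follows chronological order (a sketch of the property only).
    """
    now = time.time() if now is None else now
    prefix = format(int(now * 1000), "013d")
    suffix = hashlib.sha256(repr(now).encode()).hexdigest()[:6].upper()
    return prefix + suffix


def toy_hashify(text, length=32):
    """Deterministic hash: the same input always yields the same output."""
    return hashlib.sha256(text.encode()).hexdigest()[:length]


earlier = toy_stamp(1_700_000_000.0)
later = toy_stamp(1_700_000_001.0)
```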
1093
+
1094
+ ---
1095
+
1096
+ ### Prompt Construction
1097
+
1098
+ ```python
1099
+ from thoughtflow import construct_prompt, construct_msgs
1100
+
1101
+ # ═══════════════════════════════════════════════════════════════════════════
1102
+ # STRUCTURED PROMPTS WITH SECTIONS
1103
+ # ═══════════════════════════════════════════════════════════════════════════
1104
+
1105
+ prompt = construct_prompt({
1106
+ "context": "You are analyzing customer feedback data.",
1107
+ "instructions": "Follow these steps:\n1. Identify sentiment\n2. Extract key themes",
1108
+ "output_format": "Return a JSON object with 'sentiment' and 'themes' keys."
1109
+ })
1110
+ # Generates a structured prompt with clear section markers
1111
+
1112
+ # ═══════════════════════════════════════════════════════════════════════════
1113
+ # MESSAGE LIST CONSTRUCTION
1114
+ # ═══════════════════════════════════════════════════════════════════════════
1115
+
1116
+ msgs = construct_msgs(
1117
+ usr_prompt="Analyze this feedback: {feedback}",
1118
+ vars={"feedback": customer_feedback},
1119
+ sys_prompt="You are a sentiment analysis expert.",
1120
+ msgs=[] # Prior conversation messages
1121
+ )
1122
+ # Returns properly formatted message list for LLM
1123
+ ```
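A minimal version of section-marked prompt assembly might look like the following. This is an illustrative sketch — `construct_prompt`'s real section markers may differ — but it shows the shape: named sections joined under clear headers, in insertion order.

```python
def build_prompt(sections):
    """Join named sections under uppercase headers, preserving insertion order."""
    parts = []
    for name, body in sections.items():
        parts.append(f"### {name.upper()} ###\n{body}")
    return "\n\n".join(parts)


prompt = build_prompt({
    "context": "You are analyzing customer feedback data.",
    "output_format": "Return JSON with 'sentiment' and 'themes' keys.",
})
```

Explicit section markers help because LLMs weight clearly delimited instructions more reliably than one undifferentiated paragraph.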
1124
+
1125
+ ---
1126
+
1127
+ ## 🎨 Real-World Patterns
1128
+
1129
+ ### Multi-Step Workflow
1130
+
1131
+ Chain multiple thoughts together for complex workflows:
1132
+
1133
+ ```python
1134
+ from thoughtflow import LLM, MEMORY, THOUGHT
1135
+
1136
+ llm = LLM("openai:gpt-4o", key="...")
1137
+ memory = MEMORY()
1138
+
1139
+ # Define a pipeline of thoughts
1140
+ analyze = THOUGHT(
1141
+ name="analyze",
1142
+ llm=llm,
1143
+ prompt="Analyze the following text and identify key themes: {text}",
1144
+ parsing_rules={"kind": "python", "format": {"themes": [], "sentiment": ""}}
1145
+ )
1146
+
1147
+ expand = THOUGHT(
1148
+ name="expand",
1149
+ llm=llm,
1150
+ prompt="Take these themes and expand on each one: {analyze_result}",
1151
+ )
1152
+
1153
+ summarize = THOUGHT(
1154
+ name="summarize",
1155
+ llm=llm,
1156
+ prompt="Create an executive summary from this expanded analysis: {expand_result}",
1157
+ )
1158
+
1159
+ critique = THOUGHT(
1160
+ name="critique",
1161
+ llm=llm,
1162
+ prompt="Identify potential weaknesses or gaps in this analysis: {summarize_result}",
1163
+ )
1164
+
1165
+ # Execute the pipeline — it's just Python!
1166
+ memory.set_var("text", document)
1167
+
1168
+ for thought in [analyze, expand, summarize, critique]:
1169
+ print(f"Executing: {thought.name}")
1170
+ memory = thought(memory)
1171
+ print(f" Result stored in: {thought.name}_result")
1172
+
1173
+ # Get final results
1174
+ summary = memory.get_var("summarize_result")
1175
+ critique_text = memory.get_var("critique_result")
1176
+ ```
1177
+
1178
+ ### Multi-Channel Agent
1179
+
1180
+ Build agents that work across platforms:
1181
+
1182
+ ```python
1183
+ from thoughtflow import LLM, MEMORY, THOUGHT
1184
+
1185
+ memory = MEMORY()
1186
+
1187
+ # Messages come from different platforms
1188
+ memory.add_msg("user", "Hello from the website!", channel="webapp")
1189
+ memory.add_msg("user", "Following up via Telegram", channel="telegram")
1190
+ memory.add_msg("user", "Quick question from mobile", channel="ios")
1191
+ memory.add_msg("user", "Also checking Slack", channel="slack")
1192
+
1193
+ # Process messages by channel
1194
+ for channel in ["webapp", "telegram", "ios", "slack"]:
1195
+ msgs = memory.get_msgs(channel=channel)
1196
+ print(f"\n{channel.upper()} ({len(msgs)} messages):")
1197
+ for msg in msgs:
1198
+ print(f" {msg['role']}: {msg['content'][:50]}...")
1199
+
1200
+ # Or process all together, maintaining context
1201
+ all_msgs = memory.get_msgs(include=["user", "assistant"])
1202
+
1203
+ # Render for LLM context with channel info
1204
+ context = memory.render(
1205
+ output_format="conversation",
1206
+ include_roles=("user", "assistant"),
1207
+ max_total_length=4000
1208
+ )
1209
+ ```
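Channel-scoped retrieval is, at bottom, filtering on message metadata. A minimal sketch of the idea (not MEMORY's implementation — message records are assumed to be plain dicts here):

```python
def filter_msgs(msgs, channel=None, include=None):
    """Filter a message list by channel and/or an allow-list of roles."""
    out = msgs
    if channel is not None:
        out = [m for m in out if m.get("channel") == channel]
    if include is not None:
        out = [m for m in out if m["role"] in include]
    return out


msgs = [
    {"role": "user", "content": "Hello from the website!", "channel": "webapp"},
    {"role": "user", "content": "Following up via Telegram", "channel": "telegram"},
    {"role": "system", "content": "session start", "channel": "webapp"},
]
webapp_user_msgs = filter_msgs(msgs, channel="webapp", include=["user"])
```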
1210
+
1211
+ ### Retry with Auto-Repair
1212
+
1213
+ Automatic retry with intelligent repair prompts:
1214
+
1215
+ ```python
1216
+ from thoughtflow import LLM, MEMORY, THOUGHT
1217
+
1218
+ llm = LLM("openai:gpt-4o", key="...")
1219
+ memory = MEMORY()
1220
+
1221
+ thought = THOUGHT(
1222
+ name="generate_json",
1223
+ llm=llm,
1224
+ prompt="""Generate a valid JSON object with exactly these keys:
1225
+ - "name": a string
1226
+ - "count": an integer greater than 0
1227
+ - "tags": a list of at least 3 strings
1228
+ """,
1229
+ parsing_rules={
1230
+ "kind": "json",
1231
+ "format": {"name": "", "count": 0, "tags": [""]}
1232
+ },
1233
+ validator="list_min_len:3", # Built-in: list must have at least 3 items
1234
+ max_retries=3,
1235
+ retry_delay=0.5,
1236
+ )
1237
+
1238
+ # If validation fails, THOUGHT automatically retries with a repair prompt
1239
+ # that explains what went wrong:
1240
+ # "(Please return only the requested format; your last answer failed: List too short)"
1241
+
1242
+ memory = thought(memory)
1243
+
1244
+ # Check execution history
1245
+ for attempt in thought.execution_history:
1246
+ print(f"Attempt: success={attempt['success']}, duration={attempt['duration_ms']:.1f}ms")
1247
+ ```
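Under the hood, retry-with-repair is just a loop. The sketch below shows the general shape in plain Python (not ThoughtFlow's internals): on each failed validation, the failure reason is appended to the prompt so the model can self-correct on the next attempt.

```python
import time


def call_with_repair(generate, validate, prompt, max_retries=3, retry_delay=0.0):
    """generate(prompt) -> raw output; validate(raw) -> (ok, reason)."""
    attempt_prompt = prompt
    reason = ""
    for _ in range(max_retries + 1):
        raw = generate(attempt_prompt)
        ok, reason = validate(raw)
        if ok:
            return raw
        # Repair prompt: restate the request plus what went wrong last time.
        attempt_prompt = (
            prompt
            + f"\n(Please return only the requested format; "
              f"your last answer failed: {reason})"
        )
        if retry_delay:
            time.sleep(retry_delay)
    raise ValueError(f"Validation failed after {max_retries + 1} attempts: {reason}")


# A fake "model" that only complies once it sees a repair hint.
def flaky_model(prompt):
    return '["a", "b", "c"]' if "failed" in prompt else "oops"


result = call_with_repair(
    flaky_model,
    lambda raw: (raw.startswith("["), "not a JSON list"),
    "Generate a JSON list",
)
```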
1248
+
1249
+ ### Combining THOUGHTs and ACTIONs
1250
+
1251
+ Build agents that think AND act:
1252
+
1253
+ ```python
1254
+ from thoughtflow import LLM, MEMORY, THOUGHT, ACTION
1255
+
1256
+ llm = LLM("openai:gpt-4o", key="...")
1257
+ memory = MEMORY()
1258
+
1259
+ # Define an action for external API calls
1260
+ def search_database(memory, query, limit=10):
1261
+ results = db.search(query, limit=limit)
1262
+ return {"results": results, "count": len(results)}
1263
+
1264
+ search = ACTION(name="search", fn=search_database)
1265
+
1266
+ # Define thoughts for reasoning
1267
+ analyze_query = THOUGHT(
1268
+ name="analyze_query",
1269
+ llm=llm,
1270
+ prompt="Convert this user question into a database search query: {last_user_msg}",
1271
+ )
1272
+
1273
+ synthesize = THOUGHT(
1274
+ name="synthesize",
1275
+ llm=llm,
1276
+ prompt="Given these search results: {search_result}\n\nAnswer the user's question: {last_user_msg}",
1277
+ )
1278
+
1279
+ # Workflow: Think → Act → Think
1280
+ memory.add_msg("user", "What products do we have under $50?")
1281
+
1282
+ memory = analyze_query(memory) # Think: convert to query
1283
+ query = memory.get_var("analyze_query_result")
1284
+
1285
+ memory = search(memory, query=query, limit=20) # Act: search database
1286
+
1287
+ memory = synthesize(memory) # Think: synthesize answer
1288
+ answer = memory.get_var("synthesize_result")
1289
+ ```
1290
+
1291
+ ---
1292
+
1293
+ ## 🎯 Philosophy: The Zen of ThoughtFlow
1294
+
1295
+ ThoughtFlow is guided by principles documented in [**ZEN.md**](ZEN.md):
1296
+
1297
+ | Principle | What It Means |
1298
+ |-----------|---------------|
1299
+ | 🎯 **First Principles First** | Built on fundamentals, not abstractions on abstractions |
1300
+ | 🧘 **Complexity is the Enemy** | Pythonic, intuitive, elegant. As light as possible. |
1301
+ | 👁️ **Obvious Over Abstract** | If you have to dig deep to understand, the design failed |
1302
+ | 🔍 **Transparency is Trust** | Never guess what's happening under the hood |
1303
+ | 📦 **Minimize Dependencies** | Zero deps for core. Serverless-ready by default. |
1304
+ | ♻️ **Backward Compatibility is Sacred** | Code should endure. Deprecation should be rare. |
1305
+ | 🧩 **Modularity Over Monolith** | Composable pieces, not all-or-nothing frameworks |
1306
+ | 🚗 **Vehicle, Not Destination** | Your logic, your rules, your journey |
1307
+ | 🐍 **Python is King** | Pythonic first. No DSLs, no YAML configs, no magic. |
1308
+
1309
+ > *"Don't try to please everyone. Greatness comes from focus, not from trying to do everything."*
1310
+ >
1311
+ > — [ZEN.md](ZEN.md)
1312
+
1313
+ ---
1314
+
1315
+ ## 🔗 Sister Library: ThoughtBase
1316
+
1317
+ **[ThoughtBase](https://github.com/jrolf/thoughtbase)** is an optional companion library providing persistent storage and vector search capabilities.
1318
+
1319
+ ```python
1320
+ from thoughtflow import MEMORY, THOUGHT
1321
+ from thoughtbase import VectorStore, PersistentMemory
1322
+
1323
+ # Create persistent, searchable memory
1324
+ store = VectorStore("my_agent_memories")
1325
+ persistent_mem = PersistentMemory(store)
1326
+
1327
+ # Your normal ThoughtFlow workflow
1328
+ thought = THOUGHT(name="respond", llm=llm, prompt="...")
1329
+ memory = thought(memory)
1330
+
1331
+ # Save to ThoughtBase
1332
+ persistent_mem.save(memory)
1333
+
1334
+ # Later: search across all saved memories
1335
+ results = persistent_mem.search("user preferences about notifications", limit=5)
1336
+
1337
+ # Load a specific memory
1338
+ memory = persistent_mem.load(session_id="abc123")
1339
+ ```
1340
+
1341
+ > ⚠️ **ThoughtBase is entirely optional.** ThoughtFlow provides complete functionality standalone. ThoughtBase adds persistence and vector search when you need them.
1342
+
1343
+ ---
1344
+
1345
+ ## 🔧 Supported Versions
1346
+
1347
+ | Version | Python | Status | Notes |
1348
+ |---------|--------|--------|-------|
1349
+ | **0.0.x** | 3.9 - 3.12 | 🟢 Active | Current development |
1350
+
1351
+ **Compatibility Policy:**
1352
+ - We test against Python 3.9, 3.10, 3.11, and 3.12
1353
+ - We aim to support new Python versions within 3 months of stable release
1354
+ - Breaking changes are avoided; when necessary, deprecation warnings come first
1355
+
1356
+ ---
1357
+
1358
+ ## 🧪 Testing & Evaluation
1359
+
1360
+ ThoughtFlow is designed for **deterministic testing**:
1361
+
1362
+ ```python
1363
+ from thoughtflow import MEMORY
1364
+ from thoughtflow.eval import Harness, Replay
1365
+
1366
+ # ═══════════════════════════════════════════════════════════════════════════
1367
+ # RECORD AND REPLAY
1368
+ # ═══════════════════════════════════════════════════════════════════════════
1369
+
1370
+ # Record a session
1371
+ memory = MEMORY()
1372
+ # ... run your workflow ...
1373
+ memory.save("session_recording.pkl")
1374
+
1375
+ # Replay for testing
1376
+ replay = MEMORY()
1377
+ replay.load("session_recording.pkl")
1378
+
1379
+ # Assert on results
1380
+ assert replay.get_var("final_result") == expected_value
1381
+ assert len(replay.get_msgs()) == expected_message_count
1382
+
1383
+ # ═══════════════════════════════════════════════════════════════════════════
1384
+ # EVALUATION HARNESS
1385
+ # ═══════════════════════════════════════════════════════════════════════════
1386
+
1387
+ # Define test cases
1388
+ test_cases = [
1389
+ {"input": "What's 2+2?", "expected_contains": "4"},
1390
+ {"input": "Capital of France?", "expected_contains": "Paris"},
1391
+ ]
1392
+
1393
+ # Run evaluation
1394
+ harness = Harness(test_cases=test_cases)
1395
+ results = harness.run(my_workflow_function)
1396
+
1397
+ # Analyze results
1398
+ for result in results:
1399
+ print(f"Input: {result['input']}")
1400
+ print(f"Output: {result['output']}")
1401
+ print(f"Passed: {result['passed']}")
1402
+ ```
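The harness pattern reduces to a loop over test cases. Here is a hedged sketch that mirrors, rather than reproduces, `thoughtflow.eval.Harness` — the workflow under test is a stub standing in for a real LLM pipeline:

```python
def run_harness(workflow, test_cases):
    """Run workflow(input) per case; pass if the output contains the expected text."""
    results = []
    for case in test_cases:
        output = workflow(case["input"])
        passed = case["expected_contains"] in str(output)
        results.append({
            "input": case["input"],
            "output": output,
            "passed": passed,
        })
    return results


cases = [
    {"input": "What's 2+2?", "expected_contains": "4"},
    {"input": "Capital of France?", "expected_contains": "Paris"},
]
# Deterministic stub in place of an LLM-backed workflow.
report = run_harness(lambda q: "4" if "2+2" in q else "Paris", cases)
```

Substring matching is deliberately loose: for nondeterministic LLM output, "contains the right fact" is often a more stable assertion than exact equality.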
1403
+
1404
+ ---
1405
+
1406
+ ## 📁 Project Structure
1407
+
1408
+ ```
1409
+ thoughtflow/
1410
+ ├── src/thoughtflow/
1411
+ │ ├── __init__.py # Public API exports
1412
+ │ ├── llm.py # LLM class - multi-provider interface
1413
+ │ ├── memory/
1414
+ │ │ ├── __init__.py
1415
+ │ │ └── base.py # MEMORY class - event-sourced state
1416
+ │ ├── thought.py # THOUGHT class - cognitive unit
1417
+ │ ├── action.py # ACTION class - external operations
1418
+ │ ├── _util.py # Utilities (event_stamp, valid_extract, etc.)
1419
+ │ ├── tools/ # Tool registry for function calling
1420
+ │ ├── trace/ # Session tracing and events
1421
+ │ └── eval/ # Evaluation harness and replay
1422
+ ├── examples/ # Working, runnable examples
1423
+ │ ├── 01_hello_world.py
1424
+ │ ├── 02_action.py
1425
+ │ ├── 03_memory.py
1426
+ │ ├── 04_valid_extract.py
1427
+ │ └── ...
1428
+ ├── tests/ # Comprehensive test suite
1429
+ │ ├── unit/
1430
+ │ └── integration/
1431
+ ├── docs/ # Documentation source
1432
+ ├── developer/ # Developer guides
1433
+ ├── assets/ # Logo and media
1434
+ └── ZEN.md # Philosophy document
1435
+ ```
1436
+
1437
+ ---
1438
+
1439
+ ## 🛠️ Development
1440
+
1441
+ ```bash
1442
+ # Clone the repository
1443
+ git clone https://github.com/jrolf/thoughtflow.git
1444
+ cd thoughtflow
1445
+
1446
+ # Install in development mode with all extras
1447
+ pip install -e ".[dev]"
1448
+
1449
+ # Run the test suite
1450
+ pytest
1451
+
1452
+ # Run with coverage
1453
+ pytest --cov=src/thoughtflow
1454
+
1455
+ # Lint the code
1456
+ ruff check src/
1457
+
1458
+ # Format the code
1459
+ ruff format src/
1460
+
1461
+ # Type check
1462
+ mypy src/thoughtflow/
1463
+ ```
1464
+
1465
+ See [developer/](developer/) for comprehensive development documentation.
1466
+
1467
+ ---
1468
+
1469
+ ## 📈 Project Status
1470
+
1471
+ | Aspect | Status | Notes |
1472
+ |--------|--------|-------|
1473
+ | **Core Primitives** | ✅ Stable | LLM, MEMORY, THOUGHT, ACTION |
1474
+ | **API Stability** | 🟡 Alpha | May evolve based on feedback |
1475
+ | **Documentation** | 🟡 In Progress | Core docs complete, expanding |
1476
+ | **Test Coverage** | ✅ Comprehensive | Unit + integration tests |
1477
+ | **Type Hints** | ✅ Full | Strict mypy compliance |
1478
+ | **Serverless Ready** | ✅ Yes | Zero deps, fast cold starts |
1479
+
1480
+ See [CHANGELOG.md](CHANGELOG.md) for version history.
1481
+
1482
+ ---
1483
+
1484
+ ## 🔒 Security
1485
+
1486
+ Found a vulnerability? **Please don't open a public issue.**
1487
+
1488
+ See [SECURITY.md](SECURITY.md) for our responsible disclosure policy. We take security seriously and will respond within 48 hours.
1489
+
1490
+ ---
1491
+
1492
+ ## 🤝 Contributing
1493
+
1494
+ We welcome contributions! ThoughtFlow values:
1495
+
1496
+ | Principle | What It Means |
1497
+ |-----------|---------------|
1498
+ | **Simplicity** | Over feature bloat |
1499
+ | **Clarity** | Over cleverness |
1500
+ | **Explicit** | Over implicit |
1501
+ | **Tested** | Everything has tests |
1502
+
1503
+ See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
1504
+
1505
+ ---
1506
+
1507
+ ## 💬 Getting Help
1508
+
1509
+ | Need | Where to Go |
1510
+ |------|-------------|
1511
+ | **Question about usage** | [GitHub Discussions](https://github.com/jrolf/thoughtflow/discussions) |
1512
+ | **Bug report** | [GitHub Issues](https://github.com/jrolf/thoughtflow/issues) |
1513
+ | **Feature request** | [GitHub Issues](https://github.com/jrolf/thoughtflow/issues) |
1514
+ | **Security issue** | See [SECURITY.md](SECURITY.md) |
1515
+
1516
+ ---
1517
+
1518
+ ## 📖 Resources
1519
+
1520
+ | Resource | Description |
1521
+ |----------|-------------|
1522
+ | 📚 [Documentation](https://thoughtflow.dev) | Full documentation site |
1523
+ | 🧘 [ZEN.md](ZEN.md) | Philosophy and design principles |
1524
+ | 💡 [examples/](examples/) | Working, runnable examples |
1525
+ | 🛠️ [developer/](developer/) | Developer guides and docs |
1526
+ | 📝 [CHANGELOG.md](CHANGELOG.md) | Version history |
1527
+ | 🤝 [CONTRIBUTING.md](CONTRIBUTING.md) | Contribution guidelines |
1528
+
1529
+ ---
1530
+
1531
+ ## 📄 License
1532
+
1533
+ [MIT](LICENSE) © James A. Rolfsen
1534
+
1535
+ ---
1536
+
1537
+ <p align="center">
1538
+ <img src="assets/logo.png" alt="ThoughtFlow" width="80">
1539
+ </p>
1540
+
1541
+ <p align="center">
1542
+ <strong>ThoughtFlow</strong><br>
1543
+ <sub>Because your agent code should be as clear as your thinking.</sub>
1544
+ </p>
1545
+
1546
+ <p align="center">
1547
+ <sub>Built with ❤️ for developers who believe AI tools should empower, not mystify.</sub>
1548
+ </p>
1549
+
1550
+ <p align="center">
1551
+ <sub>
1552
+ <a href="#-installation">Install</a> •
1553
+ <a href="#-quick-start">Quick Start</a> •
1554
+ <a href="#-the-four-primitives-in-depth">Deep Dive</a> •
1555
+ <a href="#-contributing">Contribute</a> •
1556
+ <a href="ZEN.md">Philosophy</a>
1557
+ </sub>
1558
+ </p>
1559
+
1560
+ <p align="center">
1561
+ ⭐ Star us on GitHub — it helps!
1562
+ </p>
1563
+
1564
+ <!--
1565
+ ═══════════════════════════════════════════════════════════════════════════════
1566
+ HIDDEN SECTIONS: Uncomment when content is ready
1567
+ ═══════════════════════════════════════════════════════════════════════════════
1568
+
1569
+ ## 💬 What People Are Saying
1570
+
1571
+ <table>
1572
+ <tr>
1573
+ <td width="33%">
1574
+
1575
+ > *"Finally, an LLM framework that doesn't make me feel stupid."*
1576
+ >
1577
+ > — **[Name]** <br><sub>Software Engineer</sub>
1578
+
1579
+ </td>
1580
+ <td width="33%">
1581
+
1582
+ > *"Deployed to Lambda in 10 minutes. Try that with LangChain."*
1583
+ >
1584
+ > — **[Name]** <br><sub>DevOps Engineer</sub>
1585
+
1586
+ </td>
1587
+ <td width="33%">
1588
+
1589
+ > *"I read the entire source in one sitting. That's unheard of."*
1590
+ >
1591
+ > — **[Name]** <br><sub>AI Researcher</sub>
1592
+
1593
+ </td>
1594
+ </tr>
1595
+ </table>
1596
+
1597
+ ───────────────────────────────────────────────────────────────────────────────
1598
+
1599
+ ## 🏗️ Built With ThoughtFlow
1600
+
1601
+ <table>
1602
+ <tr>
1603
+ <td width="33%">
1604
+
1605
+ ### 🤖 Project Name
1606
+ **Description**
1607
+
1608
+ A conversational AI assistant built with ThoughtFlow.
1609
+
1610
+ [View Project →](link)
1611
+
1612
+ </td>
1613
+ <td width="33%">
1614
+
1615
+ ### 📊 Project Name
1616
+ **Description**
1617
+
1618
+ Data analysis agent using ThoughtFlow workflows.
1619
+
1620
+ [View Project →](link)
1621
+
1622
+ </td>
1623
+ <td width="33%">
1624
+
1625
+ ### 🎮 Project Name
1626
+ **Description**
1627
+
1628
+ Interactive application powered by ThoughtFlow.
1629
+
1630
+ [View Project →](link)
1631
+
1632
+ </td>
1633
+ </tr>
1634
+ </table>
1635
+
1636
+ ───────────────────────────────────────────────────────────────────────────────
1637
+
1638
+ ## 👥 Contributor Spotlight
1639
+
1640
+ <table>
1641
+ <tr>
1642
+ <td align="center">
1643
+ <a href="https://github.com/jrolf">
1644
+ <img src="https://github.com/jrolf.png" width="80px;" alt="James Rolfsen"/><br>
1645
+ <sub><b>James Rolfsen</b></sub>
1646
+ </a>
1647
+ <br><sub>Creator & Maintainer</sub>
1648
+ </td>
1649
+ <td align="center">
1650
+ <a href="#">
1651
+ <img src="https://github.com/[username].png" width="80px;" alt="Contributor"/><br>
1652
+ <sub><b>[Name]</b></sub>
1653
+ </a>
1654
+ <br><sub>Core Contributor</sub>
1655
+ </td>
1656
+ <td align="center">
1657
+ <a href="CONTRIBUTING.md">
1658
+ <sub><b>You?</b></sub>
1659
+ </a>
1660
+ <br><sub><a href="CONTRIBUTING.md">Join Us →</a></sub>
1661
+ </td>
1662
+ </tr>
1663
+ </table>
1664
+
1665
+ ───────────────────────────────────────────────────────────────────────────────
1666
+
1667
+ ## 🌐 Community
1668
+
1669
+ <p align="center">
1670
+ <a href="[discord-link]"><img src="https://img.shields.io/badge/Discord-Join%20Us-7289da?style=for-the-badge&logo=discord&logoColor=white" alt="Discord"></a>
1671
+ &nbsp;
1672
+ <a href="[twitter-link]"><img src="https://img.shields.io/badge/Twitter-Follow-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white" alt="Twitter"></a>
1673
+ </p>
1674
+
1675
+ ───────────────────────────────────────────────────────────────────────────────
1676
+
1677
+ ## 🎮 Try It Now
1678
+
1679
+ <p align="center">
1680
+ <a href="[replit-link]"><img src="https://img.shields.io/badge/Open%20in%20Replit-Try%20ThoughtFlow-667881?style=for-the-badge&logo=replit&logoColor=white" alt="Replit"></a>
1681
+ &nbsp;
1682
+ <a href="[colab-link]"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
1683
+ </p>
1684
+
1685
+ ═══════════════════════════════════════════════════════════════════════════════
1686
+ -->