thoughtflow 0.0.2__py3-none-any.whl → 0.0.4__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
thoughtflow/thought.py ADDED
@@ -0,0 +1,1102 @@
1
+ """
2
+ THOUGHT class for ThoughtFlow.
3
+
4
+ The THOUGHT class represents a single, modular reasoning or action step within an agentic
5
+ workflow. It is the atomic unit of cognition in the Thoughtflow framework.
6
+ """
7
+
8
+ from __future__ import annotations
9
+
10
+ import json
11
+ import copy
12
+
13
+ from thoughtflow._util import (
14
+ event_stamp,
15
+ construct_prompt,
16
+ construct_msgs,
17
+ valid_extract,
18
+ ValidExtractError,
19
+ )
20
+
21
+
22
+ class THOUGHT:
23
+ """
24
+ The THOUGHT class represents a single, modular reasoning or action step within an agentic
25
+ workflow. It is designed to operate on MEMORY objects, orchestrating LLM calls, memory queries,
26
+ and variable manipulations in a composable and traceable manner.
27
+ THOUGHTs are the atomic units of reasoning, planning, and execution in the Thoughtflow framework,
28
+ and can be chained or composed to build complex agent behaviors.
29
+
30
+ CONCEPT:
31
+ A thought is a self-contained, modular process of (1) creating a structured prompt for an LLM,
33
+ (2) executing the LLM request, (3) cleaning and validating the LLM response, and (4) retrying
34
+ execution if necessary. It is the discrete unit of cognition: the execution of a single cognitive task.
35
+ In doing so, it provides the fundamental building block for architecting multi-step cognitive systems.
35
+
36
+ The Simple Equation of a Thought:
37
+ Thoughts = Prompt + Context + LLM + Parsing + Validation
38
+
39
+
40
+ COMPONENTS:
41
+
42
+ 1. PROMPT
43
+ The Prompt() object is the structured template, which may contain parameters to fill in.
44
+ This defines the structure and the rules for executing the LLM request.
45
+
46
+ 2. CONTEXT
47
+ This is the relevant context, which comes from a Memory() object. It is passed to the prompt object as
48
+ a dictionary containing the required and optional variables. Any context that is given but does not
49
+ correspond to a variable in the prompt will be excluded.
50
+
51
+ 3. LLM REQUEST
52
+ This is the simple transaction of submitting a structured Messages object to an LLM in order to receive
53
+ a response. The messages object may include a system prompt and a series of historical user / assistant
54
+ interactions. Parameters such as temperature are also passed with this request.
55
+
56
+ 4. PARSING
57
+ LLMs often include extra text even when told not to. For this reason, it is important
58
+ to parse the response such that we are only handling the content that was requested, and nothing more.
59
+ So if we are asking for a Python List, the parsed response should begin with "[" and end with "]".
60
+
61
+ 5. VALIDATION
62
+ Even a successfully parsed response may be invalid, given the constraints of the Thought. For this
63
+ reason, it is helpful to have a validation routine that stamps the response as valid according to a
64
+ fixed list of rules. The "max_retries" parameter tells the Thought how many times it can retry the
65
+ prompt before returning an error.
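The parsing and validation steps described above can be sketched independently of the framework. Below is a minimal illustration (not the package's actual implementation) of pulling a Python list out of a noisy LLM reply and then validating its length:

```python
import ast
import re

def parse_list(response):
    """Extract the first Python list literal from an LLM reply."""
    match = re.search(r"(\[.*\])", response, re.DOTALL)
    if not match:
        raise ValueError("No list found in response.")
    return ast.literal_eval(match.group(1))

def validate_min_len(parsed, min_len=1):
    """Return (valid, reason), the shape used throughout this class."""
    if isinstance(parsed, list) and len(parsed) >= min_len:
        return True, ""
    return False, "List too short (min {})".format(min_len)

noisy = "Sure! Here is your list:\n['alpha', 'beta', 'gamma']\nHope that helps."
items = parse_list(noisy)
ok, why = validate_min_len(items, min_len=2)
```

If validation fails, the Thought retries up to "max_retries" times, feeding the failure reason back into the prompt.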
66
+
67
+
68
+ Supported Operations:
69
+ - llm_call: Execute an LLM request with prompt and context (default)
70
+ - memory_query: Query memory state and return variables/data without LLM
71
+ - variable_set: Set or compute memory variables from context
72
+ - conditional: Execute logic based on memory conditions
73
+
74
+ Key Features:
75
+ - Callable interface: mem = thought(mem) or mem = thought(mem, vars)
76
+ - Automatic retry with configurable attempts and repair prompts
77
+ - Schema-based response parsing via valid_extract or custom parsers
78
+ - Multiple validators: has_keys, list_min_len, custom callables
79
+ - Pre/post hooks for custom processing
80
+ - Full execution tracing and history
81
+ - Serialization support via to_dict()/from_dict()
82
+ - Channel support for message tracking
83
+
84
+ Parameters:
85
+ name (str): Unique identifier for this thought
86
+ llm (LLM): LLM instance for execution (required for llm_call operation)
87
+ prompt (str|dict): Prompt template with {variable} placeholders
88
+ operation (str): Type of operation ('llm_call', 'memory_query', 'variable_set', 'conditional')
89
+ system_prompt (str): Optional system prompt for LLM context (via config)
90
+ parser (str|callable): Response parser ('text', 'json', 'list', or callable)
91
+ parsing_rules (dict): Schema for valid_extract parsing (e.g., {'kind': 'python', 'format': []})
92
+ validator (str|callable): Response validator ('any', 'has_keys:k1,k2', 'list_min_len:N', or callable)
93
+ max_retries (int): Maximum retry attempts (default: 1)
94
+ retry_delay (float): Delay between retries in seconds (default: 0)
95
+ required_vars (list): Variables required from memory
96
+ optional_vars (list): Optional variables from memory
97
+ output_var (str): Variable name for storing result (default: '{name}_result')
98
+ pre_hook (callable): Function called before execution: fn(thought, memory, vars, **kwargs)
99
+ post_hook (callable): Function called after execution: fn(thought, memory, result, error)
100
+ channel (str): Channel for message tracking (default: 'system')
101
+ add_reflection (bool): Whether to add reflection on success (default: True)
102
+
103
+ Example usage:
104
+ # Basic LLM call with result storage
105
+ mem = MEMORY()
106
+ llm = LLM(model="openai:gpt-4o-mini", api_key="...")
107
+ thought = THOUGHT(
108
+ name="summarize",
109
+ llm=llm,
110
+ prompt="Summarize the last user message: {last_user_msg}",
111
+ operation="llm_call"
112
+ )
113
+ mem = thought(mem) # Executes the thought, updates memory with result
114
+ result = mem.get_var("summarize_result")
115
+
116
+ # Schema-based parsing example
117
+ thought = THOUGHT(
118
+ name="extract_info",
119
+ llm=llm,
120
+ prompt="Extract name and age from: {text}",
121
+ parsing_rules={"kind": "python", "format": {"name": "", "age": 0}}
122
+ )
123
+
124
+ # Memory query example (no LLM)
125
+ thought = THOUGHT(
126
+ name="get_context",
127
+ operation="memory_query",
128
+ required_vars=["user_name", "session_id"]
129
+ )
130
+
131
+ # Variable set example
132
+ thought = THOUGHT(
133
+ name="init_session",
134
+ operation="variable_set",
135
+ prompt={"session_active": True, "start_time": None} # dict of values to set
136
+ )
137
+
138
+
139
+ !!! IMPORTANT !!!
140
+ The resulting functionality from this class must enable the following pattern:
141
+ mem = thought(mem) # where mem is a MEMORY object
142
+ or
143
+ mem = thought(mem, vars)  # where vars (optional) is a dictionary of variables to pass to the thought
144
+
145
+ THOUGHT OPERATIONS MUST BE CALLABLE.
146
+
147
+ """
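The required calling convention (mem = thought(mem) or mem = thought(mem, vars)) can be illustrated with a stripped-down stand-in. The MemoryStub and EchoThought classes here are hypothetical placeholders, not the package's MEMORY or THOUGHT:

```python
class MemoryStub:
    """Hypothetical stand-in for MEMORY: just a dict of variables."""
    def __init__(self):
        self.vars = {}
    def set_var(self, key, value):
        self.vars[key] = value
    def get_var(self, key):
        return self.vars.get(key)

class EchoThought:
    """Minimal callable 'thought' honoring mem = thought(mem, vars)."""
    def __init__(self, name):
        self.name = name
        self.output_var = "{}_result".format(name)
    def __call__(self, memory, vars=None):
        vars = vars or {}
        memory.set_var(self.output_var, vars.get("text", "").upper())
        return memory  # always hand the memory back to the caller

mem = MemoryStub()
thought = EchoThought("shout")
mem = thought(mem, {"text": "hello"})
```

Because every thought returns the memory it was given, thoughts compose by simple chaining: mem = t2(t1(mem)).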
148
+
149
+ # Valid operation types
150
+ VALID_OPERATIONS = {'llm_call', 'memory_query', 'variable_set', 'conditional'}
151
+
152
+ def __init__(self, name=None, llm=None, prompt=None, operation=None, **kwargs):
153
+ """
154
+ Initialize a THOUGHT instance.
155
+
156
+ Args:
157
+ name (str): Name of the thought.
158
+ llm: LLM interface or callable.
159
+ prompt: Prompt template (str or dict).
160
+ operation (str): Operation type (e.g., 'llm_call', 'memory_query', etc).
161
+ **kwargs: Additional configuration parameters.
162
+ """
163
+ self.name = name
164
+ self.id = event_stamp()
165
+ self.llm = llm
166
+ self.prompt = prompt
167
+ self.operation = operation
168
+
169
+ # Store any additional configuration parameters
170
+ self.config = kwargs.copy()
171
+
172
+ # Optionally, store a description or docstring if provided
173
+ self.description = kwargs.get("description", None)
174
+
175
+ # Optionally, store validation rules, parsing functions, etc.
176
+ self.validation = kwargs.get("validation", None)
177
+ self.parse_fn = kwargs.get("parse_fn", None)
178
+ self.max_retries = kwargs.get("max_retries", 1)
179
+ self.retry_delay = kwargs.get("retry_delay", 0)
180
+
181
+ # Optionally, store default context variables or requirements
182
+ self.required_vars = kwargs.get("required_vars", [])
183
+ self.optional_vars = kwargs.get("optional_vars", [])
184
+
185
+ # Optionally, store output variable name
186
+ self.output_var = kwargs.get("output_var", "{}_result".format(self.name) if self.name else None)
187
+
188
+ # Internal state for tracking last result, errors, etc.
189
+ self.last_result = None
190
+ self.last_error = None
191
+ self.last_prompt = None
192
+ self.last_msgs = None
193
+ self.last_response = None
194
+
195
+ # Allow for custom hooks (pre/post processing)
196
+ self.pre_hook = kwargs.get("pre_hook", None)
197
+ self.post_hook = kwargs.get("post_hook", None)
198
+
199
+ # Execution history tracking
200
+ self.execution_history = []
201
+
202
+
203
+ def __call__(self, memory, vars=None, **kwargs):
204
+ """
205
+ Execute the thought on the given MEMORY object.
206
+
207
+ Args:
208
+ memory: MEMORY object.
209
+ vars: Optional dictionary of variables to pass to the thought.
210
+ **kwargs: Additional parameters for execution.
211
+ Returns:
212
+ Updated MEMORY object with result stored (if applicable).
213
+ """
214
+ import time as time_module
215
+
216
+ start_time = time_module.time()
217
+
218
+ # Allow vars to be None
219
+ if vars is None:
220
+ vars = {}
221
+
222
+ # Pre-hook
223
+ if self.pre_hook and callable(self.pre_hook):
224
+ self.pre_hook(self, memory, vars, **kwargs)
225
+
226
+ # Determine operation type
227
+ operation = self.operation or 'llm_call'
228
+
229
+ # Dispatch to appropriate handler based on operation type
230
+ if operation == 'llm_call':
231
+ result, last_error, attempts_made = self._execute_llm_call(memory, vars, **kwargs)
232
+ elif operation == 'memory_query':
233
+ result, last_error, attempts_made = self._execute_memory_query(memory, vars, **kwargs)
234
+ elif operation == 'variable_set':
235
+ result, last_error, attempts_made = self._execute_variable_set(memory, vars, **kwargs)
236
+ elif operation == 'conditional':
237
+ result, last_error, attempts_made = self._execute_conditional(memory, vars, **kwargs)
238
+ else:
239
+ raise ValueError("Unknown operation: {}. Valid operations: {}".format(operation, self.VALID_OPERATIONS))
240
+
241
+ # Calculate execution duration
242
+ duration_ms = (time_module.time() - start_time) * 1000
243
+
244
+ # Build execution event for logging
245
+ execution_event = {
246
+ 'thought_name': self.name,
247
+ 'thought_id': self.id,
248
+ 'operation': operation,
249
+ 'attempts': attempts_made,
250
+ 'success': result is not None,
251
+ 'duration_ms': round(duration_ms, 2),
252
+ 'output_var': self.output_var
253
+ }
254
+
255
+ # If failed after all retries
256
+ if result is None and last_error is not None:
257
+ execution_event['error'] = last_error
258
+ if hasattr(memory, "add_log") and callable(getattr(memory, "add_log", None)):
259
+ memory.add_log("Thought execution failed: " + json.dumps(execution_event))
260
+ # Store None as result
261
+ self.update_memory(memory, None)
262
+ else:
263
+ if hasattr(memory, "add_log") and callable(getattr(memory, "add_log", None)):
264
+ memory.add_log("Thought execution complete: " + json.dumps(execution_event))
265
+ self.update_memory(memory, result)
266
+
267
+ # Track execution history on the THOUGHT instance
268
+ self.execution_history.append({
269
+ 'stamp': event_stamp(),
270
+ 'memory_id': getattr(memory, 'id', None),
271
+ 'operation': operation,
272
+ 'duration_ms': duration_ms,
273
+ 'success': result is not None or last_error is None,
274
+ 'attempts': attempts_made,
275
+ 'error': self.last_error
276
+ })
277
+
278
+ # Post-hook
279
+ if self.post_hook and callable(self.post_hook):
280
+ self.post_hook(self, memory, self.last_result, self.last_error)
281
+
282
+ return memory
283
+
284
+ def _execute_llm_call(self, memory, vars, **kwargs):
285
+ """
286
+ Execute an LLM call operation with retry logic.
287
+
288
+ Returns:
289
+ tuple: (result, last_error, attempts_made)
290
+ """
291
+ import copy as copy_module
292
+ import time as time_module
293
+
294
+ retries_left = self.max_retries
295
+ last_error = None
296
+ result = None
297
+ attempts_made = 0
298
+
299
+ # Store original prompt to avoid mutation - work with a copy
300
+ original_prompt = copy_module.deepcopy(self.prompt)
301
+ working_prompt = copy_module.deepcopy(self.prompt)
302
+
303
+ while retries_left > 0:
304
+ attempts_made += 1
305
+ try:
306
+ # Temporarily set working prompt for this iteration
307
+ self.prompt = working_prompt
308
+
309
+ # Build context and prompt/messages
310
+ ctx = self.get_context(memory)
311
+ ctx.update(vars)
312
+ msgs = self.build_msgs(memory, ctx)
313
+
314
+ # Run LLM
315
+ llm_kwargs = dict(self.config.get("llm_params", {}))  # copy so the stored config is not mutated
316
+ llm_kwargs.update(kwargs)
317
+ response = self.run_llm(msgs, **llm_kwargs)
318
+ self.last_response = response
319
+
320
+ # Get channel from config for message tracking
321
+ channel = self.config.get("channel", "system")
322
+
323
+ # Add assistant message to memory (if possible)
324
+ if hasattr(memory, "add_msg") and callable(getattr(memory, "add_msg", None)):
325
+ memory.add_msg("assistant", response, channel=channel)
326
+
327
+ # Parse
328
+ parsed = self.parse_response(response)
329
+ self.last_result = parsed
330
+
331
+ # Validate
332
+ valid, why = self.validate(parsed)
333
+ if valid:
334
+ result = parsed
335
+ self.last_error = None
336
+ # Logging
337
+ if hasattr(memory, "add_log") and callable(getattr(memory, "add_log", None)):
338
+ memory.add_log("Thought '{}' completed successfully".format(self.name))
339
+ # Add reflection for reasoning trace (if configured)
340
+ if self.config.get("add_reflection", True):
341
+ if hasattr(memory, "add_ref") and callable(getattr(memory, "add_ref", None)):
342
+ # Truncate response for reflection if too long
343
+ response_preview = str(response)[:300]
344
+ if len(str(response)) > 300:
345
+ response_preview += "..."
346
+ memory.add_ref("Thought '{}': {}".format(self.name, response_preview))
347
+ break
348
+ else:
349
+ last_error = why
350
+ self.last_error = why
351
+ if hasattr(memory, "add_log") and callable(getattr(memory, "add_log", None)):
352
+ memory.add_log("Thought '{}' validation failed: {}".format(self.name, why))
353
+ # Create repair suffix for next retry (modify working_prompt, not original)
354
+ repair_suffix = "\n(Please return only the requested format; your last answer failed: {}.)".format(why)
355
+ if isinstance(original_prompt, str):
356
+ working_prompt = original_prompt.rstrip() + repair_suffix
357
+ elif isinstance(original_prompt, dict):
358
+ working_prompt = copy_module.deepcopy(original_prompt)
359
+ last_key = list(working_prompt.keys())[-1]
360
+ working_prompt[last_key] = working_prompt[last_key].rstrip() + repair_suffix
361
+ except Exception as e:
362
+ last_error = str(e)
363
+ self.last_error = last_error
364
+ if hasattr(memory, "add_log") and callable(getattr(memory, "add_log", None)):
365
+ memory.add_log("Thought '{}' error: {}".format(self.name, last_error))
366
+ # Create repair suffix for next retry (modify working_prompt, not original)
367
+ repair_suffix = "\n(Please return only the requested format; your last answer failed: {}.)".format(last_error)
368
+ if isinstance(original_prompt, str):
369
+ working_prompt = original_prompt.rstrip() + repair_suffix
370
+ elif isinstance(original_prompt, dict):
371
+ working_prompt = copy_module.deepcopy(original_prompt)
372
+ last_key = list(working_prompt.keys())[-1]
373
+ working_prompt[last_key] = working_prompt[last_key].rstrip() + repair_suffix
374
+ retries_left -= 1
375
+ if self.retry_delay:
376
+ time_module.sleep(self.retry_delay)
377
+
378
+ # Restore original prompt after execution (prevents permanent mutation)
379
+ self.prompt = original_prompt
380
+
381
+ return result, last_error, attempts_made
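The retry loop above appends a repair suffix to a working copy of the prompt and restores the original afterwards. A self-contained sketch of that pattern, using a fake LLM that fails once before succeeding (all names here are illustrative, not the package API):

```python
def run_with_retries(prompt, llm, validate, max_retries=3):
    """Retry an LLM call, appending a repair hint after each failure."""
    original = prompt
    working = prompt
    last_error = None
    for attempt in range(1, max_retries + 1):
        response = llm(working)
        valid, why = validate(response)
        if valid:
            return response, None, attempt
        last_error = why
        # Repair a *copy*; the original prompt is never mutated.
        working = original.rstrip() + "\n(Your last answer failed: {}.)".format(why)
    return None, last_error, max_retries

calls = []
def flaky_llm(prompt):
    calls.append(prompt)
    return "oops" if len(calls) == 1 else "[1, 2]"

def must_be_list(resp):
    ok = resp.startswith("[")
    return ok, "" if ok else "not a list"

result, error, attempts = run_with_retries("Give me a list", flaky_llm, must_be_list)
```

The second call sees the repair hint in its prompt, which is exactly how the failed validation reason is fed back to the model.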
382
+
383
+ def _execute_memory_query(self, memory, vars, **kwargs):
384
+ """
385
+ Execute a memory query operation (no LLM involved).
386
+ Retrieves specified variables from memory and returns them as a dict.
387
+
388
+ Returns:
389
+ tuple: (result, last_error, attempts_made)
390
+ """
391
+ try:
392
+ result = {}
393
+
394
+ # Get required variables
395
+ for var in self.required_vars:
396
+ if hasattr(memory, "get_var") and callable(getattr(memory, "get_var", None)):
397
+ val = memory.get_var(var)
398
+ else:
399
+ val = getattr(memory, var, None)
400
+
401
+ if val is None:
402
+ return None, "Required variable '{}' not found in memory".format(var), 1
403
+ result[var] = val
404
+
405
+ # Get optional variables
406
+ for var in self.optional_vars:
407
+ if hasattr(memory, "get_var") and callable(getattr(memory, "get_var", None)):
408
+ val = memory.get_var(var)
409
+ else:
410
+ val = getattr(memory, var, None)
411
+
412
+ if val is not None:
413
+ result[var] = val
414
+
415
+ # Include any vars passed directly
416
+ result.update(vars)
417
+
418
+ self.last_result = result
419
+ self.last_error = None
420
+
421
+ if hasattr(memory, "add_log") and callable(getattr(memory, "add_log", None)):
422
+ memory.add_log("Thought '{}' memory query completed".format(self.name))
423
+
424
+ return result, None, 1
425
+
426
+ except Exception as e:
427
+ self.last_error = str(e)
428
+ return None, str(e), 1
429
+
430
+ def _execute_variable_set(self, memory, vars, **kwargs):
431
+ """
432
+ Execute a variable set operation.
433
+ Sets variables in memory from the prompt (as dict) or vars parameter.
434
+
435
+ Returns:
436
+ tuple: (result, last_error, attempts_made)
437
+ """
438
+ try:
439
+ values_to_set = {}
440
+
441
+ # If prompt is a dict, use it as the values to set
442
+ if isinstance(self.prompt, dict):
443
+ values_to_set.update(self.prompt)
444
+
445
+ # Override/add with vars parameter
446
+ values_to_set.update(vars)
447
+
448
+ # Set each variable in memory
449
+ for key, value in values_to_set.items():
450
+ if hasattr(memory, "set_var") and callable(getattr(memory, "set_var", None)):
451
+ desc = self.config.get("var_descriptions", {}).get(key, "Set by thought: {}".format(self.name))
452
+ memory.set_var(key, value, desc=desc)
453
+ elif hasattr(memory, "vars"):
454
+ if key not in memory.vars:
455
+ memory.vars[key] = []
456
+ stamp = event_stamp(value)
457
+ memory.vars[key].append([stamp, value])
458
+
459
+ self.last_result = values_to_set
460
+ self.last_error = None
461
+
462
+ if hasattr(memory, "add_log") and callable(getattr(memory, "add_log", None)):
463
+ memory.add_log("Thought '{}' set {} variables".format(self.name, len(values_to_set)))
464
+
465
+ return values_to_set, None, 1
466
+
467
+ except Exception as e:
468
+ self.last_error = str(e)
469
+ return None, str(e), 1
470
+
471
+ def _execute_conditional(self, memory, vars, **kwargs):
472
+ """
473
+ Execute a conditional operation.
474
+ Evaluates a condition from config and returns the appropriate result.
475
+
476
+ Config options:
477
+ condition (callable): Function that takes (memory, vars) and returns bool
478
+ if_true: Value/action if condition is true
479
+ if_false: Value/action if condition is false
480
+
481
+ Returns:
482
+ tuple: (result, last_error, attempts_made)
483
+ """
484
+ try:
485
+ condition_fn = self.config.get("condition")
486
+ if_true = self.config.get("if_true")
487
+ if_false = self.config.get("if_false")
488
+
489
+ if condition_fn is None:
490
+ return None, "No condition function provided for conditional operation", 1
491
+
492
+ if not callable(condition_fn):
493
+ return None, "Condition must be callable", 1
494
+
495
+ # Evaluate condition
496
+ ctx = self.get_context(memory)
497
+ ctx.update(vars)
498
+ condition_result = condition_fn(memory, ctx)
499
+
500
+ # Return appropriate value
501
+ if condition_result:
502
+ result = if_true
503
+ if callable(if_true):
504
+ result = if_true(memory, ctx)
505
+ else:
506
+ result = if_false
507
+ if callable(if_false):
508
+ result = if_false(memory, ctx)
509
+
510
+ self.last_result = result
511
+ self.last_error = None
512
+
513
+ if hasattr(memory, "add_log") and callable(getattr(memory, "add_log", None)):
514
+ memory.add_log("Thought '{}' conditional evaluated to {}".format(self.name, bool(condition_result)))
515
+
516
+ return result, None, 1
517
+
518
+ except Exception as e:
519
+ self.last_error = str(e)
520
+ return None, str(e), 1
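The conditional dispatch above, a condition plus if_true/if_false branches where either branch may itself be callable, can be sketched as follows (a simplified illustration, using a plain dict in place of MEMORY):

```python
def run_conditional(condition, if_true, if_false, memory, ctx):
    """Evaluate condition(memory, ctx) and resolve the matching branch."""
    branch = if_true if condition(memory, ctx) else if_false
    # Branches may be plain values or callables taking (memory, ctx).
    return branch(memory, ctx) if callable(branch) else branch

memory = {"count": 5}
result = run_conditional(
    condition=lambda mem, c: mem["count"] > 3,
    if_true=lambda mem, c: "many ({})".format(mem["count"]),
    if_false="few",
    memory=memory,
    ctx={},
)
```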
521
+
522
+ def build_prompt(self, memory, context_vars=None):
523
+ """
524
+ Build the prompt for the LLM using construct_prompt.
525
+
526
+ Args:
527
+ memory: MEMORY object providing context.
528
+ context_vars (dict): Optional context variables to fill the prompt.
529
+
530
+ Returns:
531
+ str: The constructed prompt string.
532
+ """
533
+ # Get context variables (merge get_context and context_vars)
534
+ ctx = self.get_context(memory)
535
+ if context_vars:
536
+ ctx.update(context_vars)
537
+ prompt_template = self.prompt
538
+ # If prompt is a dict, use construct_prompt, else format as string
539
+ if isinstance(prompt_template, dict):
540
+ prompt = construct_prompt(prompt_template)
541
+ elif isinstance(prompt_template, str):
542
+ try:
543
+ prompt = prompt_template.format(**ctx)
544
+ except Exception:
545
+ # fallback: just return as is
546
+ prompt = prompt_template
547
+ else:
548
+ prompt = str(prompt_template)
549
+ self.last_prompt = prompt
550
+ return prompt
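build_prompt falls back to the raw template when .format() fails (for example, when a placeholder variable is missing from the context). That fallback behavior can be sketched on its own:

```python
def fill_template(template, ctx):
    """Format a template, falling back to the raw string on missing keys."""
    try:
        return template.format(**ctx)
    except (KeyError, IndexError):
        # A missing placeholder leaves the template untouched rather than raising.
        return template

filled = fill_template("Hello {name}", {"name": "Ada"})
fallback = fill_template("Hello {name}", {})
```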
551
+
552
+ def build_msgs(self, memory, context_vars=None):
553
+ """
554
+ Build the messages list for the LLM using construct_msgs.
555
+
556
+ Args:
557
+ memory: MEMORY object providing context.
558
+ context_vars (dict): Optional context variables to fill the prompt.
559
+
560
+ Returns:
561
+ list: List of message dicts for LLM input.
562
+ """
563
+ ctx = self.get_context(memory)
564
+ if context_vars:
565
+ ctx.update(context_vars)
566
+ # Compose system and user prompts
567
+ sys_prompt = self.config.get("system_prompt", "")
568
+ usr_prompt = self.build_prompt(memory, ctx)
569
+ # Optionally, allow for prior messages from memory
570
+ msgs = []
571
+ if hasattr(memory, "get_msgs"):
572
+ # Optionally, get recent messages for context
573
+ msgs = memory.get_msgs(repr="list") if callable(getattr(memory, "get_msgs", None)) else []
574
+ # Build messages using construct_msgs
575
+ msgs_out = construct_msgs(
576
+ usr_prompt=usr_prompt,
577
+ vars=ctx,
578
+ sys_prompt=sys_prompt,
579
+ msgs=msgs
580
+ )
581
+ self.last_msgs = msgs_out
582
+ return msgs_out
583
+
584
+ def get_context(self, memory):
585
+ """
586
+ Extract relevant context from the MEMORY object for this thought.
587
+
588
+ Args:
589
+ memory: MEMORY object.
590
+
591
+ Returns:
592
+ dict: Context variables for prompt filling.
593
+ """
594
+ ctx = {}
595
+ # If required_vars is specified, try to get those from memory
596
+ if hasattr(self, "required_vars") and self.required_vars:
597
+ for var in self.required_vars:
598
+ # Try to get from memory.get_var if available
599
+ if hasattr(memory, "get_var") and callable(getattr(memory, "get_var", None)):
600
+ val = memory.get_var(var)
601
+ else:
602
+ val = getattr(memory, var, None)
603
+ if val is not None:
604
+ ctx[var] = val
605
+ # Optionally, add optional_vars if present in memory
606
+ if hasattr(self, "optional_vars") and self.optional_vars:
607
+ for var in self.optional_vars:
608
+ if hasattr(memory, "get_var") and callable(getattr(memory, "get_var", None)):
609
+ val = memory.get_var(var)
610
+ else:
611
+ val = getattr(memory, var, None)
612
+ if val is not None:
613
+ ctx[var] = val
614
+ # Add some common context keys if available
615
+ if hasattr(memory, "last_user_msg") and callable(getattr(memory, "last_user_msg", None)):
616
+ ctx["last_user_msg"] = memory.last_user_msg()
617
+ if hasattr(memory, "last_asst_msg") and callable(getattr(memory, "last_asst_msg", None)):
618
+ ctx["last_asst_msg"] = memory.last_asst_msg()
619
+ if hasattr(memory, "get_msgs") and callable(getattr(memory, "get_msgs", None)):
620
+ ctx["messages"] = memory.get_msgs(repr="list")
621
+ # Add all memory.vars if present
622
+ if hasattr(memory, "vars"):
623
+ ctx.update(getattr(memory, "vars", {}))
624
+ return ctx
625
+
626
+ def run_llm(self, msgs, **llm_kwargs):
627
+ """
628
+ Execute the LLM call with the given messages.
629
+ !!! USE THE EXISTING LLM CLASS !!!
630
+
631
+ Args:
632
+ msgs (list): List of message dicts.
633
+ **llm_kwargs: Additional LLM parameters.
634
+
635
+ Returns:
636
+ str: Raw LLM response.
637
+ """
638
+ if self.llm is None:
639
+ raise ValueError("No LLM instance provided to this THOUGHT.")
640
+ # The LLM class is expected to be callable: llm(msgs, **kwargs)
641
+ # If LLM is a class with .call, use that (standard interface)
642
+ if hasattr(self.llm, "call") and callable(getattr(self.llm, "call", None)):
643
+ response = self.llm.call(msgs, llm_kwargs)
644
+ elif hasattr(self.llm, "chat") and callable(getattr(self.llm, "chat", None)):
645
+ response = self.llm.chat(msgs, **llm_kwargs)
646
+ else:
647
+ response = self.llm(msgs, **llm_kwargs)
648
+
649
+ # Handle list response from LLM.call() - it returns a list of choices
650
+ if isinstance(response, list):
651
+ response = response[0] if response else ""
652
+
653
+ # If response is a dict with 'content', extract it
654
+ if isinstance(response, dict) and "content" in response:
655
+ return response["content"]
656
+
657
+ return response
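run_llm accepts several LLM shapes (a .call method, a .chat method, or a plain callable) and then normalizes list- or dict-shaped responses down to text. The normalization half can be sketched in isolation; the response shapes below are hypothetical examples of what a provider might return:

```python
def normalize_response(response):
    """Reduce list/dict LLM responses to plain text, as run_llm does."""
    if isinstance(response, list):           # list of choices -> first choice
        response = response[0] if response else ""
    if isinstance(response, dict) and "content" in response:
        return response["content"]           # chat-style message dict
    return response

from_choices = normalize_response([{"content": "hi"}])
from_string = normalize_response("plain")
from_empty = normalize_response([])
```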
658
+
659
+ def parse_response(self, response):
660
+ """
661
+ Parse the LLM response to extract the desired content.
662
+
663
+ Args:
664
+ response (str): Raw LLM response.
665
+
666
+ Returns:
667
+ object: Parsed result (e.g., string, list, dict).
668
+
669
+ Supports:
670
+ - Custom parse_fn callable
671
+ - Schema-based parsing via parsing_rules (uses valid_extract)
672
+ - Built-in parsers: 'text', 'json', 'list'
673
+ """
674
+ # Use custom parse_fn if provided
675
+ if self.parse_fn and callable(self.parse_fn):
676
+ return self.parse_fn(response)
677
+
678
+ # Check for schema-based parsing rules (using valid_extract)
679
+ parsing_rules = self.config.get("parsing_rules")
680
+ if parsing_rules:
681
+ try:
682
+ return valid_extract(response, parsing_rules)
683
+ except ValidExtractError as e:
684
+ raise ValueError("Schema-based parsing failed: {}".format(e))
685
+
686
+ # Use built-in parser based on config
687
+ parser = self.config.get("parser", None)
688
+ if parser is None:
689
+ # Default: return as string
690
+ return response
691
+ if parser == "text":
692
+ return response
693
+ elif parser == "json":
694
+ import re
695
+ # Remove code fences if present
696
+ text = response.strip()
697
+ text = re.sub(r"^```(?:json)?|```$", "", text, flags=re.MULTILINE).strip()
698
+ # Find first JSON object or array
699
+ match = re.search(r"(\{.*\}|\[.*\])", text, re.DOTALL)
700
+ if match:
701
+ json_str = match.group(1)
702
+ return json.loads(json_str)
703
+ else:
704
+ raise ValueError("No JSON object or array found in response.")
705
+ elif parser == "list":
706
+ import ast, re
707
+ # Find first list literal
708
+ match = re.search(r"(\[.*\])", response, re.DOTALL)
709
+ if match:
710
+ list_str = match.group(1)
711
+ return ast.literal_eval(list_str)
712
+ else:
713
+ raise ValueError("No list found in response.")
714
+ elif callable(parser):
715
+ return parser(response)
716
+ else:
717
+ # Unknown parser, return as is
718
+ return response
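The 'json' parser branch strips Markdown code fences and then pulls the first JSON object or array out of the reply. The same technique as a standalone function:

```python
import json
import re

def extract_json(response):
    """Strip code fences, then parse the first JSON object or array."""
    text = response.strip()
    text = re.sub(r"^```(?:json)?|```$", "", text, flags=re.MULTILINE).strip()
    match = re.search(r"(\{.*\}|\[.*\])", text, re.DOTALL)
    if not match:
        raise ValueError("No JSON object or array found in response.")
    return json.loads(match.group(1))

reply = "```json\n{\"name\": \"Ada\", \"age\": 36}\n```"
data = extract_json(reply)
```

Note the greedy regex grabs from the first brace to the last, which tolerates leading and trailing chatter but assumes a single JSON value per reply.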
719
+
720
+ def validate(self, parsed_result):
721
+ """
722
+ Validate the parsed result according to the thought's rules.
723
+
724
+ Args:
725
+ parsed_result: The parsed output from the LLM.
726
+
727
+ Returns:
728
+ (bool, why): True if valid, False otherwise, and reason string.
729
+ """
730
+ # Use custom validation if provided
731
+ if self.validation and callable(self.validation):
732
+ try:
733
+ valid, why = self.validation(parsed_result)
734
+ return bool(valid), why
735
+ except Exception as e:
736
+ return False, "Validation exception: {}".format(e)
737
+ # Use built-in validator based on config
738
+ validator = self.config.get("validator", None)
739
+ if validator is None or validator == "any":
740
+ return True, ""
741
+ elif isinstance(validator, str):
742
+ if validator.startswith("has_keys:"):
743
+ keys = [k.strip() for k in validator.split(":", 1)[1].split(",")]
744
+ if isinstance(parsed_result, dict):
745
+ missing = [k for k in keys if k not in parsed_result]
746
+ if not missing:
747
+ return True, ""
748
+ else:
749
+ return False, "Missing keys: {}".format(missing)
750
+ else:
751
+ return False, "Result is not a dict"
752
+ elif validator.startswith("list_min_len:"):
753
+ try:
754
+ min_len = int(validator.split(":", 1)[1])
755
+ except Exception:
756
+ min_len = 1
757
+ if isinstance(parsed_result, list) and len(parsed_result) >= min_len:
758
+ return True, ""
759
+ else:
760
+ return False, "List too short (min {})".format(min_len)
761
+ elif validator == "summary_v1":
762
+ # Example: summary must be a string of at least 10 chars
763
+ if isinstance(parsed_result, str) and len(parsed_result.strip()) >= 10:
764
+ return True, ""
765
+ else:
766
+ return False, "Summary too short"
767
+ else:
768
+ return True, ""
769
+ elif callable(validator):
770
+ try:
771
+ valid, why = validator(parsed_result)
772
+ return bool(valid), why
773
+ except Exception as e:
774
+ return False, "Validation exception: {}".format(e)
775
+ else:
776
+ return True, ""
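The string validators ('has_keys:...', 'list_min_len:N') form a tiny rule language. A sketch of how such rules can be resolved, mirroring (but not reproducing) the method above:

```python
def check(validator, parsed):
    """Return (valid, reason) for a 'has_keys:' or 'list_min_len:' rule."""
    if validator.startswith("has_keys:"):
        keys = [k.strip() for k in validator.split(":", 1)[1].split(",")]
        if not isinstance(parsed, dict):
            return False, "Result is not a dict"
        missing = [k for k in keys if k not in parsed]
        return (not missing), ("Missing keys: {}".format(missing) if missing else "")
    if validator.startswith("list_min_len:"):
        min_len = int(validator.split(":", 1)[1])
        ok = isinstance(parsed, list) and len(parsed) >= min_len
        return ok, "" if ok else "List too short (min {})".format(min_len)
    return True, ""  # unknown rules pass, matching the permissive default

ok1, _ = check("has_keys:name,age", {"name": "Ada", "age": 36})
ok2, why2 = check("has_keys:name,age", {"name": "Ada"})
ok3, _ = check("list_min_len:2", [1, 2, 3])
```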
777
+
778
+ def update_memory(self, memory, result):
779
+ """
780
+ Update the MEMORY object with the result of this thought.
781
+
782
+ Args:
783
+ memory: MEMORY object.
784
+ result: The result to store.
785
+
786
+ Returns:
787
+ MEMORY: Updated memory object.
788
+ """
789
+ # Store result in vars or via set_var if available
790
+ varname = self.output_var or ("{}_result".format(self.name) if self.name else "thought_result")
791
+ if hasattr(memory, "set_var") and callable(getattr(memory, "set_var", None)):
792
+ memory.set_var(varname, result, desc="Result of thought: {}".format(self.name))
793
+ elif hasattr(memory, "vars"):
794
+ # Fallback: directly access vars dict if set_var not available
795
+ if varname not in memory.vars:
796
+ memory.vars[varname] = []
797
+ stamp = event_stamp(result) if 'event_stamp' in globals() else 'no_stamp'
798
+ memory.vars[varname].append({'object': result, 'stamp': stamp})
799
+ else:
800
+ setattr(memory, varname, result)
801
+ return memory
802
+
803
+ def to_dict(self):
804
+ """
805
+ Return a serializable dictionary representation of this THOUGHT.
806
+
807
+ Note: The LLM instance, parse_fn, validation, and hooks cannot be serialized,
808
+ so they are represented by type/name only. When deserializing, these must be
809
+ provided separately.
810
+
811
+ Returns:
812
+ dict: Serializable representation of this thought.
813
+ """
814
+ return {
815
+ "name": self.name,
816
+ "id": self.id,
817
+ "prompt": self.prompt,
818
+ "operation": self.operation,
819
+ "config": self.config,
820
+ "description": self.description,
821
+ "max_retries": self.max_retries,
822
+ "retry_delay": self.retry_delay,
823
+ "output_var": self.output_var,
824
+ "required_vars": self.required_vars,
825
+ "optional_vars": self.optional_vars,
826
+ "execution_history": self.execution_history,
827
+ # Store metadata about non-serializable items
828
+ "llm_type": type(self.llm).__name__ if self.llm else None,
829
+ "has_parse_fn": self.parse_fn is not None,
830
+ "has_validation": self.validation is not None,
831
+ "has_pre_hook": self.pre_hook is not None,
832
+ "has_post_hook": self.post_hook is not None,
833
+ }
834
+
835
+ @classmethod
836
+ def from_dict(cls, data, llm=None, parse_fn=None, validation=None, pre_hook=None, post_hook=None):
837
+ """
838
+ Reconstruct a THOUGHT from a dictionary representation.
839
+
840
+ Args:
841
+ data (dict): Dictionary representation of a THOUGHT.
842
+ llm: LLM instance to use (required for execution).
843
+ parse_fn: Optional custom parse function.
844
+ validation: Optional custom validation function.
845
+ pre_hook: Optional pre-execution hook.
846
+ post_hook: Optional post-execution hook.
847
+
848
+ Returns:
849
+ THOUGHT: Reconstructed THOUGHT object.
850
+ """
851
+ # Extract config and merge with explicit kwargs
852
+ config = data.get("config", {}).copy()
853
+
854
+ thought = cls(
855
+ name=data.get("name"),
856
+ llm=llm,
857
+ prompt=data.get("prompt"),
858
+ operation=data.get("operation"),
859
+ description=data.get("description"),
860
+ max_retries=data.get("max_retries", 1),
861
+ retry_delay=data.get("retry_delay", 0),
862
+ output_var=data.get("output_var"),
863
+ required_vars=data.get("required_vars", []),
864
+ optional_vars=data.get("optional_vars", []),
865
+ parse_fn=parse_fn,
866
+ validation=validation,
867
+ pre_hook=pre_hook,
868
+ post_hook=post_hook,
869
+ **config
870
+ )
871
+
872
+ # Restore ID if provided
873
+ if data.get("id"):
874
+ thought.id = data["id"]
875
+
876
+ # Restore execution history
877
+ thought.execution_history = data.get("execution_history", [])
878
+
879
+ return thought
880
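The `to_dict`/`from_dict` pair defines a round-trip contract: plain fields travel through JSON, while callables (llm, parse functions, hooks) are recorded only as presence flags and must be re-supplied at load time. A miniature sketch of that contract (the `MiniThought` class is hypothetical, not the real THOUGHT):

```python
import json

# Hypothetical miniature of the to_dict/from_dict contract: plain fields are
# JSON-serializable, while callables are recorded as presence flags only and
# must be re-supplied when deserializing.
class MiniThought:
    def __init__(self, name, max_retries=1, parse_fn=None):
        self.name = name
        self.max_retries = max_retries
        self.parse_fn = parse_fn

    def to_dict(self):
        return {
            "name": self.name,
            "max_retries": self.max_retries,
            "has_parse_fn": self.parse_fn is not None,  # metadata only
        }

    @classmethod
    def from_dict(cls, data, parse_fn=None):
        # The callable is injected by the caller, not read from the dict
        return cls(data["name"], data.get("max_retries", 1), parse_fn=parse_fn)

t1 = MiniThought("demo", max_retries=3, parse_fn=str.strip)
payload = json.dumps(t1.to_dict())  # safe: dict contains only plain values
t2 = MiniThought.from_dict(json.loads(payload), parse_fn=str.strip)
print(t2.name, t2.max_retries)      # -> demo 3
```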
+
881
+ def copy(self):
882
+ """
883
+ Return a deep copy of this THOUGHT.
884
+
885
+ Note: The LLM instance is shallow-copied (same reference), as LLM
886
+ instances typically should be shared. All other attributes are deep-copied.
887
+
888
+ Returns:
889
+ THOUGHT: A new THOUGHT instance with copied attributes.
890
+ """
891
+ # ``copy`` is already imported at module level; no local import needed
893
+
894
+ new_thought = THOUGHT(
895
+ name=self.name,
896
+ llm=self.llm, # Shallow copy - same LLM instance
897
+ prompt=copy.deepcopy(self.prompt),
898
+ operation=self.operation,
899
+ description=self.description,
900
+ max_retries=self.max_retries,
901
+ retry_delay=self.retry_delay,
902
+ output_var=self.output_var,
903
+ required_vars=copy.deepcopy(self.required_vars),
904
+ optional_vars=copy.deepcopy(self.optional_vars),
905
+ parse_fn=self.parse_fn,
906
+ validation=self.validation,
907
+ pre_hook=self.pre_hook,
908
+ post_hook=self.post_hook,
909
+ **copy.deepcopy(self.config)
910
+ )
911
+
912
+ # Copy internal state
913
+ new_thought.id = event_stamp() # Generate new ID for the copy
914
+ new_thought.execution_history = copy.deepcopy(self.execution_history)
915
+ new_thought.last_result = copy.deepcopy(self.last_result)
916
+ new_thought.last_error = self.last_error
917
+ new_thought.last_prompt = self.last_prompt
918
+ new_thought.last_msgs = copy.deepcopy(self.last_msgs)
918
+ new_thought.last_response = self.last_response
919
+
920
+ return new_thought
921
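The copy semantics above (shared LLM reference, deep-copied mutable state) can be sketched with a self-contained toy (the `ThoughtLite` and `LLMStub` classes are hypothetical, for illustration only):

```python
import copy

# Hypothetical illustration of copy semantics: the llm reference is shared,
# while mutable state such as history is deep-copied and independent.
class LLMStub:
    pass

class ThoughtLite:
    def __init__(self, llm, history):
        self.llm = llm
        self.history = history

    def copy(self):
        # Shallow-copy the llm (same reference), deep-copy the history
        return ThoughtLite(self.llm, copy.deepcopy(self.history))

llm = LLMStub()
a = ThoughtLite(llm, [{"ok": True}])
b = a.copy()
b.history.append({"ok": False})
print(a.llm is b.llm)   # -> True (shared instance)
print(len(a.history))   # -> 1 (original history unaffected)
```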
+
922
+ def __repr__(self):
923
+ """
924
+ Return a detailed string representation of this THOUGHT.
925
+
926
+ Returns:
927
+ str: Detailed representation including key attributes.
928
+ """
929
+ return ("THOUGHT(name='{}', operation='{}', "
930
+ "max_retries={}, output_var='{}')".format(
931
+ self.name, self.operation, self.max_retries, self.output_var))
932
+
933
+ def __str__(self):
934
+ """
935
+ Return a human-readable string representation of this THOUGHT.
936
+
937
+ Returns:
938
+ str: Simple description of the thought.
939
+ """
940
+ return "Thought: {}".format(self.name or 'unnamed')
941
+
942
+
943
+ ThoughtClassTests = """
944
+ # --- THOUGHT Class Tests ---
945
+
946
+ # Test 1: Basic THOUGHT instantiation and attributes
947
+ >>> from thoughtflow import THOUGHT, MEMORY, event_stamp
948
+ >>> t = THOUGHT(name="test_thought", prompt="Hello {name}", max_retries=3)
949
+ >>> t.name
950
+ 'test_thought'
951
+ >>> t.max_retries
952
+ 3
953
+ >>> t.output_var
954
+ 'test_thought_result'
955
+ >>> t.operation is None # Defaults to None, which means 'llm_call'
956
+ True
957
+ >>> len(t.execution_history)
958
+ 0
959
+
960
+ # Test 2: Serialization round-trip with to_dict/from_dict
961
+ >>> t1 = THOUGHT(name="serialize_test", prompt="test prompt", max_retries=3, output_var="my_output")
962
+ >>> data = t1.to_dict()
963
+ >>> data['name']
964
+ 'serialize_test'
965
+ >>> data['max_retries']
966
+ 3
967
+ >>> data['output_var']
968
+ 'my_output'
969
+ >>> t2 = THOUGHT.from_dict(data)
970
+ >>> t2.name == t1.name
971
+ True
972
+ >>> t2.max_retries == t1.max_retries
973
+ True
974
+ >>> t2.output_var == t1.output_var
975
+ True
976
+
977
+ # Test 3: Copy creates independent instance
978
+ >>> t1 = THOUGHT(name="copy_test", prompt="original prompt")
979
+ >>> t2 = t1.copy()
980
+ >>> t2.name = "modified"
981
+ >>> t1.name
982
+ 'copy_test'
983
+ >>> t2.name
984
+ 'modified'
985
+ >>> t1.id != t2.id # Copy gets new ID
986
+ True
987
+
988
+ # Test 4: __repr__ and __str__
989
+ >>> t = THOUGHT(name="repr_test", operation="llm_call", max_retries=2, output_var="result")
990
+ >>> "repr_test" in repr(t)
991
+ True
992
+ >>> "llm_call" in repr(t)
993
+ True
994
+ >>> str(t)
995
+ 'Thought: repr_test'
996
+ >>> t2 = THOUGHT() # unnamed
997
+ >>> str(t2)
998
+ 'Thought: unnamed'
999
+
1000
+ # Test 5: Memory query operation (no LLM)
1001
+ >>> mem = MEMORY()
1002
+ >>> mem.set_var("user_name", "Alice", desc="Test user")
1003
+ >>> mem.set_var("session_id", "sess123", desc="Test session")
1004
+ >>> t = THOUGHT(
1005
+ ... name="query_test",
1006
+ ... operation="memory_query",
1007
+ ... required_vars=["user_name", "session_id"]
1008
+ ... )
1009
+ >>> mem2 = t(mem)
1010
+ >>> result = mem2.get_var("query_test_result")
1011
+ >>> result['user_name']
1012
+ 'Alice'
1013
+ >>> result['session_id']
1014
+ 'sess123'
1015
+
1016
+ # Test 6: Variable set operation
1017
+ >>> mem = MEMORY()
1018
+ >>> t = THOUGHT(
1019
+ ... name="setvar_test",
1020
+ ... operation="variable_set",
1021
+ ... prompt={"status": "active", "count": 42}
1022
+ ... )
1023
+ >>> mem2 = t(mem)
1024
+ >>> mem2.get_var("status")
1025
+ 'active'
1026
+ >>> mem2.get_var("count")
1027
+ 42
1028
+
1029
+ # Test 7: Execution history tracking
1030
+ >>> mem = MEMORY()
1031
+ >>> t = THOUGHT(name="history_test", operation="memory_query", required_vars=[])
1032
+ >>> len(t.execution_history)
1033
+ 0
1034
+ >>> mem = t(mem)
1035
+ >>> len(t.execution_history)
1036
+ 1
1037
+ >>> t.execution_history[0]['success']
1038
+ True
1039
+ >>> 'duration_ms' in t.execution_history[0]
1040
+ True
1041
+ >>> 'stamp' in t.execution_history[0]
1042
+ True
1043
+
1044
+ # Test 8: Conditional operation
1045
+ >>> mem = MEMORY()
1046
+ >>> mem.set_var("threshold", 50)
1047
+ >>> t = THOUGHT(
1048
+ ... name="cond_test",
1049
+ ... operation="conditional",
1050
+ ... condition=lambda m, ctx: ctx.get('value', 0) > ctx.get('threshold', 0),
1051
+ ... if_true="above",
1052
+ ... if_false="below"
1053
+ ... )
1054
+ >>> mem2 = t(mem, vars={'value': 75})
1055
+ >>> mem2.get_var("cond_test_result")
1056
+ 'above'
1057
+ >>> mem3 = t(mem, vars={'value': 25})
1058
+ >>> mem3.get_var("cond_test_result")
1059
+ 'below'
1060
+
1061
+ # Test 9: VALID_OPERATIONS class attribute
1062
+ >>> 'llm_call' in THOUGHT.VALID_OPERATIONS
1063
+ True
1064
+ >>> 'memory_query' in THOUGHT.VALID_OPERATIONS
1065
+ True
1066
+ >>> 'variable_set' in THOUGHT.VALID_OPERATIONS
1067
+ True
1068
+ >>> 'conditional' in THOUGHT.VALID_OPERATIONS
1069
+ True
1070
+
1071
+ # Test 10: Parse response with parsing_rules (valid_extract integration)
1072
+ >>> t = THOUGHT(name="parse_test", parsing_rules={"kind": "python", "format": []})
1073
+ >>> t.parse_response("Here is the list: [1, 2, 3]")
1074
+ [1, 2, 3]
1075
+ >>> t2 = THOUGHT(name="parse_dict", parsing_rules={"kind": "python", "format": {"name": "", "count": 0}})
1076
+ >>> t2.parse_response("Result: {'name': 'test', 'count': 5}")
1077
+ {'name': 'test', 'count': 5}
1078
+
1079
+ # Test 11: Built-in parsers
1080
+ >>> t = THOUGHT(name="json_test", parser="json")
1081
+ >>> t.parse_response('Here is JSON: {"key": "value"}')
1082
+ {'key': 'value'}
1083
+ >>> t2 = THOUGHT(name="list_test", parser="list")
1084
+ >>> t2.parse_response("Numbers: [1, 2, 3, 4, 5]")
1085
+ [1, 2, 3, 4, 5]
1086
+ >>> t3 = THOUGHT(name="text_test", parser="text")
1087
+ >>> t3.parse_response("plain text")
1088
+ 'plain text'
1089
+
1090
+ # Test 12: Built-in validators
1091
+ >>> t = THOUGHT(name="val_test", validator="has_keys:name,age")
1092
+ >>> t.validate({"name": "Alice", "age": 30})
1093
+ (True, '')
1094
+ >>> t.validate({"name": "Bob"})
1095
+ (False, 'Missing keys: [\\'age\\']')
1096
+ >>> t2 = THOUGHT(name="list_val", validator="list_min_len:3")
1097
+ >>> t2.validate([1, 2, 3])
1098
+ (True, '')
1099
+ >>> t2.validate([1, 2])
1100
+ (False, 'List too short (min 3)')
1101
+
1102
+ """