lollms-client 0.27.3__tar.gz → 0.29.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release: this version of lollms-client might be problematic.

Files changed (104)
  1. {lollms_client-0.27.3 → lollms_client-0.29.0}/PKG-INFO +268 -110
  2. {lollms_client-0.27.3 → lollms_client-0.29.0}/README.md +268 -110
  3. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/__init__.py +1 -1
  4. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/lollms_chat/__init__.py +31 -7
  5. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/openai/__init__.py +31 -8
  6. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/lollms_core.py +7 -7
  7. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/lollms_discussion.py +97 -39
  8. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/lollms_personality.py +8 -0
  9. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client.egg-info/PKG-INFO +268 -110
  10. {lollms_client-0.27.3 → lollms_client-0.29.0}/LICENSE +0 -0
  11. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/article_summary/article_summary.py +0 -0
  12. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/console_discussion/console_app.py +0 -0
  13. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/console_discussion.py +0 -0
  14. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/deep_analyze/deep_analyse.py +0 -0
  15. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/deep_analyze/deep_analyze_multiple_files.py +0 -0
  16. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/function_calling_with_local_custom_mcp.py +0 -0
  17. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/generate_a_benchmark_for_safe_store.py +0 -0
  18. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/generate_and_speak/generate_and_speak.py +0 -0
  19. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/generate_game_sfx/generate_game_fx.py +0 -0
  20. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/generate_text_with_multihop_rag_example.py +0 -0
  21. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/gradio_chat_app.py +0 -0
  22. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/gradio_lollms_chat.py +0 -0
  23. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/internet_search_with_rag.py +0 -0
  24. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/lollms_chat/calculator.py +0 -0
  25. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/lollms_chat/derivative.py +0 -0
  26. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/lollms_chat/test_openai_compatible_with_lollms_chat.py +0 -0
  27. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/lollms_discussions_test.py +0 -0
  28. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/mcp_examples/external_mcp.py +0 -0
  29. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/mcp_examples/local_mcp.py +0 -0
  30. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/mcp_examples/openai_mcp.py +0 -0
  31. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/mcp_examples/run_remote_mcp_example_v2.py +0 -0
  32. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/mcp_examples/run_standard_mcp_example.py +0 -0
  33. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/simple_text_gen_test.py +0 -0
  34. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/simple_text_gen_with_image_test.py +0 -0
  35. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/test_local_models/local_chat.py +0 -0
  36. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/text_2_audio.py +0 -0
  37. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/text_2_image.py +0 -0
  38. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/text_2_image_diffusers.py +0 -0
  39. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/text_and_image_2_audio.py +0 -0
  40. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/text_gen.py +0 -0
  41. {lollms_client-0.27.3 → lollms_client-0.29.0}/examples/text_gen_system_prompt.py +0 -0
  42. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/__init__.py +0 -0
  43. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/azure_openai/__init__.py +0 -0
  44. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/claude/__init__.py +0 -0
  45. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/gemini/__init__.py +0 -0
  46. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/grok/__init__.py +0 -0
  47. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/groq/__init__.py +0 -0
  48. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/hugging_face_inference_api/__init__.py +0 -0
  49. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/litellm/__init__.py +0 -0
  50. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/llamacpp/__init__.py +0 -0
  51. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/lollms/__init__.py +0 -0
  52. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/mistral/__init__.py +0 -0
  53. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/ollama/__init__.py +0 -0
  54. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/open_router/__init__.py +0 -0
  55. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/openllm/__init__.py +0 -0
  56. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/pythonllamacpp/__init__.py +0 -0
  57. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/tensor_rt/__init__.py +0 -0
  58. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/transformers/__init__.py +0 -0
  59. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/llm_bindings/vllm/__init__.py +0 -0
  60. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/lollms_config.py +0 -0
  61. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/lollms_js_analyzer.py +0 -0
  62. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/lollms_llm_binding.py +0 -0
  63. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/lollms_mcp_binding.py +0 -0
  64. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/lollms_python_analyzer.py +0 -0
  65. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/lollms_stt_binding.py +0 -0
  66. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/lollms_tti_binding.py +0 -0
  67. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/lollms_ttm_binding.py +0 -0
  68. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/lollms_tts_binding.py +0 -0
  69. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/lollms_ttv_binding.py +0 -0
  70. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/lollms_types.py +0 -0
  71. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/lollms_utilities.py +0 -0
  72. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/mcp_bindings/local_mcp/__init__.py +0 -0
  73. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/mcp_bindings/local_mcp/default_tools/file_writer/file_writer.py +0 -0
  74. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/mcp_bindings/local_mcp/default_tools/generate_image_from_prompt/generate_image_from_prompt.py +0 -0
  75. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/mcp_bindings/local_mcp/default_tools/internet_search/internet_search.py +0 -0
  76. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/mcp_bindings/local_mcp/default_tools/python_interpreter/python_interpreter.py +0 -0
  77. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/mcp_bindings/remote_mcp/__init__.py +0 -0
  78. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/mcp_bindings/standard_mcp/__init__.py +0 -0
  79. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/stt_bindings/__init__.py +0 -0
  80. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/stt_bindings/lollms/__init__.py +0 -0
  81. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/stt_bindings/whisper/__init__.py +0 -0
  82. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/stt_bindings/whispercpp/__init__.py +0 -0
  83. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/tti_bindings/__init__.py +0 -0
  84. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/tti_bindings/dalle/__init__.py +0 -0
  85. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/tti_bindings/diffusers/__init__.py +0 -0
  86. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/tti_bindings/gemini/__init__.py +0 -0
  87. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/tti_bindings/lollms/__init__.py +0 -0
  88. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/ttm_bindings/__init__.py +0 -0
  89. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/ttm_bindings/audiocraft/__init__.py +0 -0
  90. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/ttm_bindings/bark/__init__.py +0 -0
  91. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/ttm_bindings/lollms/__init__.py +0 -0
  92. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/tts_bindings/__init__.py +0 -0
  93. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/tts_bindings/bark/__init__.py +0 -0
  94. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/tts_bindings/lollms/__init__.py +0 -0
  95. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/tts_bindings/piper_tts/__init__.py +0 -0
  96. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/tts_bindings/xtts/__init__.py +0 -0
  97. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/ttv_bindings/__init__.py +0 -0
  98. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client/ttv_bindings/lollms/__init__.py +0 -0
  99. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client.egg-info/SOURCES.txt +0 -0
  100. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client.egg-info/dependency_links.txt +0 -0
  101. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client.egg-info/requires.txt +0 -0
  102. {lollms_client-0.27.3 → lollms_client-0.29.0}/lollms_client.egg-info/top_level.txt +0 -0
  103. {lollms_client-0.27.3 → lollms_client-0.29.0}/pyproject.toml +0 -0
  104. {lollms_client-0.27.3 → lollms_client-0.29.0}/setup.cfg +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: lollms_client
- Version: 0.27.3
+ Version: 0.29.0
  Summary: A client library for LoLLMs generate endpoint
  Author-email: ParisNeo <parisneoai@gmail.com>
  License: Apache Software License
@@ -49,8 +49,10 @@ Whether you're connecting to a remote LoLLMs server, an Ollama instance, the Ope
  * 🔌 **Versatile Binding System:** Seamlessly switch between different LLM backends (LoLLMs, Ollama, OpenAI, Llama.cpp, Transformers, vLLM, OpenLLM) without major code changes.
  * 🗣️ **Multimodal Support:** Interact with models capable of processing images, and generate various outputs like speech (TTS) and images (TTI).
  * 🤖 **Function Calling with MCP:** Empowers LLMs to use external tools and functions through the Model Context Protocol (MCP), with built-in support for local Python tool execution via the `local_mcp` binding and its default tools (file I/O, internet search, Python interpreter, image generation).
+ * 🎭 **Personalities as Agents:** Personalities can now define their own set of required tools (MCPs) and have access to static or dynamic knowledge bases (`data_source`), turning them into self-contained, ready-to-use agents.
  * 🚀 **Streaming & Callbacks:** Efficiently handle real-time text generation with customizable callback functions, including during MCP interactions.
- * 💬 **Discussion Management:** Utilities to easily manage and format conversation histories for chat applications.
+ * 📝 **Advanced Structured Content Generation:** Reliably generate structured JSON output from natural language prompts using the `generate_structured_content` helper method.
+ * 💬 **Discussion Management:** Utilities to easily manage and format conversation histories, including a persistent `data_zone` for context that is always present in the system prompt.
  * ⚙️ **Configuration Management:** Flexible ways to configure bindings and generation parameters.
  * 🧩 **Extensible:** Designed to easily incorporate new LLM backends and modality services, including custom MCP toolsets.
  * 📝 **High-Level Operations:** Includes convenience methods for complex tasks like sequential summarization and deep text analysis directly within `LollmsClient`.
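To illustrate the binding switch described in the 🔌 **Versatile Binding System** bullet above, here is a minimal sketch (the model names are illustrative, and the OpenAI binding assumes an API key is configured in your environment):

```python
from lollms_client import LollmsClient

# Same client API, different backends: only the binding/model parameters change.
lc_ollama = LollmsClient(binding_name="ollama", model_name="llama3")
lc_openai = LollmsClient(binding_name="openai", model_name="gpt-4o")  # assumes OPENAI_API_KEY is set
```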
@@ -120,156 +122,174 @@ except Exception as e:

  ```

- ### Function Calling with MCP
+ ### Advanced Structured Content Generation

- `lollms-client` supports robust function calling via the Model Context Protocol (MCP), allowing LLMs to interact with your custom Python tools or pre-defined utilities.
+ The `generate_structured_content` method is a powerful utility for forcing an LLM's output into a specific JSON format. It's ideal for extracting information, getting consistent tool parameters, or any task requiring reliable, machine-readable output.

  ```python
- from lollms_client import LollmsClient, MSG_TYPE
- from ascii_colors import ASCIIColors
- import json # For pretty printing results
-
- # Example callback for MCP streaming
- def mcp_stream_callback(chunk: str, msg_type: MSG_TYPE, metadata: dict = None, turn_history: list = None) -> bool:
-     if msg_type == MSG_TYPE.MSG_TYPE_CHUNK: ASCIIColors.success(chunk, end="", flush=True)  # LLM's final answer or thought process
-     elif msg_type == MSG_TYPE.MSG_TYPE_STEP_START: ASCIIColors.info(f"\n>> MCP Step Start: {metadata.get('tool_name', chunk)}", flush=True)
-     elif msg_type == MSG_TYPE.MSG_TYPE_STEP_END: ASCIIColors.success(f"\n<< MCP Step End: {metadata.get('tool_name', chunk)} -> Result: {json.dumps(metadata.get('result', ''))}", flush=True)
-     elif msg_type == MSG_TYPE.MSG_TYPE_INFO and metadata and metadata.get("type") == "tool_call_request": ASCIIColors.info(f"\nAI requests: {metadata.get('name')}({metadata.get('params')})", flush=True)
-     return True
+ from lollms_client import LollmsClient
+ import json

- try:
-     # Initialize LollmsClient with an LLM binding and the local_mcp binding
-     lc = LollmsClient(
-         binding_name="ollama", model_name="mistral",  # Example LLM
-         mcp_binding_name="local_mcp"  # Enables default tools (file_writer, internet_search, etc.)
-                                       # or custom tools if mcp_binding_config.tools_folder_path is set.
-     )
+ lc = LollmsClient(binding_name="ollama", model_name="llama3")

-     user_query = "What were the main AI headlines last week and write a summary to 'ai_news.txt'?"
-     ASCIIColors.blue(f"User Query: {user_query}")
-     ASCIIColors.yellow("AI Processing with MCP (streaming):")
+ text_block = "John Doe is a 34-year-old software engineer from New York. He loves hiking and Python programming."

-     mcp_result = lc.generate_with_mcp(
-         prompt=user_query,
-         streaming_callback=mcp_stream_callback
-     )
-     print("\n--- End of MCP Interaction ---")
+ # Define the exact JSON structure you want
+ output_template = {
+     "full_name": "string",
+     "age": "integer",
+     "profession": "string",
+     "city": "string",
+     "hobbies": ["list", "of", "strings"]
+ }

-     if mcp_result.get("error"):
-         ASCIIColors.error(f"MCP Error: {mcp_result['error']}")
-     else:
-         ASCIIColors.cyan(f"\nFinal Answer from AI: {mcp_result.get('final_answer', 'N/A')}")
-         ASCIIColors.magenta("\nTool Calls Made:")
-         for tc in mcp_result.get("tool_calls", []):
-             print(f" - Tool: {tc.get('name')}, Params: {tc.get('params')}, Result (first 50 chars): {str(tc.get('result'))[:50]}...")
+ # Generate the structured data
+ extracted_data = lc.generate_structured_content(
+     prompt=f"Extract the relevant information from the following text:\n\n{text_block}",
+     output_format=output_template
+ )

- except Exception as e:
-     ASCIIColors.error(f"An error occurred in MCP example: {e}")
-     trace_exception(e)  # Assuming you have a trace_exception utility
+ if extracted_data:
+     print(json.dumps(extracted_data, indent=2))
+     # Expected output:
+     # {
+     #   "full_name": "John Doe",
+     #   "age": 34,
+     #   "profession": "software engineer",
+     #   "city": "New York",
+     #   "hobbies": ["hiking", "Python programming"]
+     # }
  ```
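Because the JSON is produced by the model rather than guaranteed by the library, a quick sanity check before using the result downstream can be worthwhile. A minimal sketch, reusing `extracted_data` and `output_template` from the example above (the helper is illustrative, not part of `lollms-client`):

```python
def missing_fields(data: dict, template: dict) -> list:
    """Return the template keys absent from the extracted data."""
    return [key for key in template if key not in data]

gaps = missing_fields(extracted_data or {}, output_template)
if gaps:
    print(f"Warning: the model omitted these fields: {gaps}")
```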
- For a comprehensive guide on function calling and setting up tools, please refer to the [Usage Guide (DOC_USE.md)](DOC_USE.md).

- ### 🤖 Advanced Agentic Generation with RAG: `generate_with_mcp_rag`
+ ### Putting It All Together: An Advanced Agentic Example

- For more complex tasks, `generate_with_mcp_rag` provides a powerful, built-in agent that uses a ReAct-style (Reason, Act) loop. This agent can reason about a user's request, use tools (MCP), retrieve information from knowledge bases (RAG), and adapt its plan based on the results of its actions.
+ Let's create a **Python Coder Agent**. This agent will use a set of coding rules from a local file as its knowledge base and will be equipped with a tool to execute the code it writes. This demonstrates the synergy between `LollmsPersonality` (with `data_source` and `active_mcps`), `LollmsDiscussion`, and the MCP system.

- **Key Agent Capabilities:**
+ #### Step 1: Create the Knowledge Base (`coding_rules.txt`)

- * **Observe-Think-Act Loop:** The agent iteratively reviews its progress, thinks about the next logical step, and takes an action (like calling a tool).
- * **Tool Integration (MCP):** Can use any available MCP tools, such as searching the web or executing code.
- * **Retrieval-Augmented Generation (RAG):** You can provide one or more "data stores" (knowledge bases). The agent gains a `research::{store_name}` tool to query these stores for relevant information.
- * **In-Memory Code Generation:** The agent has a special `generate_code` tool. This allows it to first write a piece of code (e.g., a complex Python script) and then pass that code to another tool (e.g., `python_code_interpreter`) in a subsequent step.
- * **Stateful Progress Tracking:** Designed for rich UI experiences, it emits `step_start` and `step_end` events with unique IDs via the streaming callback. This allows an application to track the agent's individual thoughts and long-running tool calls in real time.
- * **Self-Correction:** Includes a `refactor_scratchpad` tool for the agent to clean up its own thought process if it becomes cluttered.
+ Create a simple text file with the rules our agent must follow.

- Here is an example of using the agent to answer a question by first performing RAG on a custom knowledge base and then using the retrieved information to generate and execute code.
+ ```text
+ # File: coding_rules.txt

- ```python
- import json
- from lollms_client import LollmsClient, MSG_TYPE
- from ascii_colors import ASCIIColors
+ 1. All Python functions must include a Google-style docstring.
+ 2. Use type hints for all function parameters and return values.
+ 3. The main execution block should be protected by `if __name__ == "__main__":`.
+ 4. After defining a function, add a simple example of its usage inside the main block.
+ 5. Print the output of the example usage to the console.
+ ```

- # 1. Define a mock RAG data store and retrieval function
- project_notes = {
-     "project_phoenix_details": "Project Phoenix has a current budget of $500,000 and an expected quarterly growth rate of 15%."
- }
+ #### Step 2: The Main Script (`agent_example.py`)

- def retrieve_from_notes(query: str, top_k: int = 1, min_similarity: float = 0.5):
-     """A simple keyword-based retriever for our mock data store."""
-     results = []
-     for key, text in project_notes.items():
-         if query.lower() in text.lower():
-             results.append({"source": key, "content": text})
-     return results[:top_k]
+ This script will define the personality, initialize the client, and run the agent.

- # 2. Define a detailed streaming callback to visualize the agent's process
- def agent_streaming_callback(chunk: str, msg_type: MSG_TYPE, params: dict = None, metadata: list = None) -> bool:
+ ```python
+ from pathlib import Path
+ from lollms_client import LollmsClient, LollmsPersonality, LollmsDiscussion, MSG_TYPE
+ from ascii_colors import ASCIIColors, trace_exception
+ import json
+
+ # A detailed callback to visualize the agent's process
+ def agent_callback(chunk: str, msg_type: MSG_TYPE, params: dict = None, **kwargs) -> bool:
      if not params: params = {}
-     msg_id = params.get("id", "")

-     if msg_type == MSG_TYPE.MSG_TYPE_STEP_START:
-         ASCIIColors.yellow(f"\n>> Agent Step Start [ID: {msg_id}]: {chunk}")
+     if msg_type == MSG_TYPE.MSG_TYPE_STEP:
+         ASCIIColors.yellow(f"\n>> Agent Step: {chunk}")
+     elif msg_type == MSG_TYPE.MSG_TYPE_STEP_START:
+         ASCIIColors.yellow(f"\n>> Agent Step Start: {chunk}")
      elif msg_type == MSG_TYPE.MSG_TYPE_STEP_END:
-         ASCIIColors.green(f"<< Agent Step End [ID: {msg_id}]: {chunk}")
-         if params.get('result'):
-             ASCIIColors.cyan(f" Result: {json.dumps(params['result'], indent=2)}")
+         result = params.get('result', '')
+         ASCIIColors.green(f"<< Agent Step End: {chunk} -> Result: {json.dumps(result)[:150]}...")
      elif msg_type == MSG_TYPE.MSG_TYPE_THOUGHT_CONTENT:
-         ASCIIColors.magenta(f"\n🤔 Agent Thought: {chunk}")
+         ASCIIColors.magenta(f"🤔 Agent Thought: {chunk}")
      elif msg_type == MSG_TYPE.MSG_TYPE_TOOL_CALL:
-         ASCIIColors.blue(f"\n🛠️ Agent Action: {chunk}")
+         ASCIIColors.blue(f"🛠️ Agent Action: {chunk}")
      elif msg_type == MSG_TYPE.MSG_TYPE_OBSERVATION:
-         ASCIIColors.cyan(f"\n👀 Agent Observation: {chunk}")
+         ASCIIColors.cyan(f"👀 Agent Observation: {chunk}")
      elif msg_type == MSG_TYPE.MSG_TYPE_CHUNK:
          print(chunk, end="", flush=True)  # Final answer stream
      return True

  try:
-     # 3. Initialize LollmsClient with an LLM and local tools enabled
-     lc = LollmsClient(
-         binding_name="ollama",  # Use Ollama
-         model_name="llama3",  # Or any capable model like mistral, gemma, etc.
-         mcp_binding_name="local_mcp"  # Enable local tools like python_code_interpreter
+     # --- 1. Load the knowledge base from the file ---
+     rules_path = Path("coding_rules.txt")
+     if not rules_path.exists():
+         raise FileNotFoundError("Please create the 'coding_rules.txt' file.")
+     coding_rules = rules_path.read_text()
+
+     # --- 2. Define the Coder Agent Personality ---
+     coder_personality = LollmsPersonality(
+         name="Python Coder Agent",
+         author="lollms-client",
+         category="Coding",
+         description="An agent that writes and executes Python code according to specific rules.",
+         system_prompt=(
+             "You are an expert Python programmer. Your task is to write clean, executable Python code based on the user's request. "
+             "You MUST strictly follow all rules provided in the 'Personality Static Data' section. "
+             "First, think about the plan. Then, use the `python_code_interpreter` tool to write and execute the code. "
+             "Finally, present the code and its output to the user."
+         ),
+         # A) Attach the static knowledge base
+         data_source=coding_rules,
+         # B) Equip the agent with a code execution tool
+         active_mcps=["python_code_interpreter"]
      )

-     # 4. Define the user prompt and the RAG data store
-     prompt = "Based on my notes about Project Phoenix, write and run a Python script to calculate its projected budget after two quarters."
+     # --- 3. Initialize the Client and Discussion ---
+     lc = LollmsClient(
+         binding_name="ollama",        # Or any capable model binding
+         model_name="codellama",       # A code-specialized model is recommended
+         mcp_binding_name="local_mcp"  # Enable the local tool execution engine
+     )
+     discussion = LollmsDiscussion.create_new(lollms_client=lc)

-     rag_data_store = {
-         "project_notes": {"callable": retrieve_from_notes}
-     }
+     # --- 4. The User's Request ---
+     user_prompt = "Write a Python function that takes two numbers and returns their sum."

-     ASCIIColors.yellow(f"User Prompt: {prompt}")
+     ASCIIColors.yellow(f"User Prompt: {user_prompt}")
      print("\n" + "="*50 + "\nAgent is now running...\n" + "="*50)

-     # 5. Run the agent
-     agent_output = lc.generate_with_mcp_rag(
-         prompt=prompt,
-         use_data_store=rag_data_store,
-         use_mcps=["python_code_interpreter"],  # Make specific tools available
-         streaming_callback=agent_streaming_callback,
-         max_reasoning_steps=5
+     # --- 5. Run the Agentic Chat Turn ---
+     response = discussion.chat(
+         user_message=user_prompt,
+         personality=coder_personality,
+         streaming_callback=agent_callback
      )

-     print("\n" + "="*50 + "\nAgent finished.\n" + "="*50)
-
-     # 6. Print the final results
-     if agent_output.get("error"):
-         ASCIIColors.error(f"\nAgent Error: {agent_output['error']}")
-     else:
-         ASCIIColors.green("\n--- Final Answer ---")
-         print(agent_output.get("final_answer"))
-
-         ASCIIColors.magenta("\n--- Tool Calls ---")
-         print(json.dumps(agent_output.get("tool_calls", []), indent=2))
-
-         ASCIIColors.cyan("\n--- RAG Sources ---")
-         print(json.dumps(agent_output.get("sources", []), indent=2))
+     print("\n\n" + "="*50 + "\nAgent finished.\n" + "="*50)
+
+     # --- 6. Inspect the results ---
+     ai_message = response['ai_message']
+     ASCIIColors.green("\n--- Final Answer from Agent ---")
+     print(ai_message.content)
+
+     ASCIIColors.magenta("\n--- Tool Calls Made ---")
+     print(json.dumps(ai_message.metadata.get("tool_calls", []), indent=2))

  except Exception as e:
-     ASCIIColors.red(f"\nAn unexpected error occurred: {e}")
+     trace_exception(e)

  ```
+ #### Step 3: What Happens Under the Hood
+
+ When you run `agent_example.py`, a sophisticated process unfolds:
+
+ 1. **Initialization:** The `LollmsDiscussion.chat()` method is called with the `coder_personality`.
+ 2. **Knowledge Injection:** The `chat` method sees that `personality.data_source` is a string, so it automatically takes the content of `coding_rules.txt` and injects it into the `discussion.data_zone`.
+ 3. **Tool Activation:** The method also sees `personality.active_mcps` and enables the `python_code_interpreter` tool for this turn.
+ 4. **Context Assembly:** The `LollmsClient` assembles a rich prompt for the LLM that includes:
+     * The personality's `system_prompt`.
+     * The content of `coding_rules.txt` (from the `data_zone`).
+     * The list of available tools (including `python_code_interpreter`).
+     * The user's request ("Write a function...").
+ 5. **Reason and Act:** The LLM, now fully briefed, reasons that it needs to use the `python_code_interpreter` tool and formulates the Python code *according to the rules it was given*.
+ 6. **Tool Execution:** The `local_mcp` binding receives the code and executes it in a secure local environment, capturing any output (`stdout`, `stderr`) and results.
+ 7. **Observation:** The execution results are sent back to the LLM as an "observation."
+ 8. **Final Synthesis:** The LLM now has the user's request, the rules, the code it wrote, and the code's output. It synthesizes all of this into a final, comprehensive answer for the user.
+
+ This example showcases how `lollms-client` allows you to build powerful, knowledgeable, and capable agents by simply composing personalities with data and tools. The snippet below shows one way to verify the injected context.
+
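To check the knowledge injection described in step 2, you can inspect the discussion object after the turn completes. A minimal sketch, assuming the `discussion` instance from the script above and that the injected rules remain readable via the `data_zone` property:

```python
# After discussion.chat(...) returns, the personality's static data_source
# should have been merged into the discussion's persistent data zone.
print("--- data_zone after the agentic turn ---")
print(discussion.data_zone)  # expected to contain the coding_rules.txt content
```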
  ## Documentation

  For more in-depth information, please refer to:
@@ -602,3 +622,141 @@ This project is licensed under the **Apache 2.0 License**. See the [LICENSE](LIC
  ## Changelog

  For a list of changes and updates, please refer to the [CHANGELOG.md](CHANGELOG.md) file.
+ ```
+
+ ---
+ ### Phase 2: Update `docs/md/lollms_discussion.md`
+
+ `[UPDATE] docs/md/lollms_discussion.md`
+ ```markdown
+ # LollmsDiscussion Class
+
+ The `LollmsDiscussion` class is a cornerstone of the `lollms-client` library, designed to represent and manage a single conversation. It provides a robust interface for handling message history, conversation branching, context formatting, and persistence.
+
+ ## Overview
+
+ A `LollmsDiscussion` can be either **in-memory** or **database-backed**, offering flexibility for different use cases.
+
+ - **In-Memory:** Ideal for temporary or transient conversations. The discussion exists only for the duration of the application's runtime.
+ - **Database-Backed:** Provides persistence by saving the entire conversation, including all branches and metadata, to a database file (e.g., SQLite). This is perfect for applications that need to retain user chat history.
+
+ ## Key Features
+
+ - **Message Management:** Add user and AI messages, which are automatically linked to form a conversation tree.
+ - **Branching:** The conversation is a tree, not a simple list. This allows for exploring different conversational paths from any point. You can regenerate an AI response, and it will create a new branch.
+ - **Context Exporting:** The `export()` method formats the conversation history for various LLM backends (`openai_chat`, `ollama_chat`, `lollms_text`, `markdown`), ensuring compatibility.
+ - **Automatic Pruning:** To prevent exceeding the model's context window, it can automatically summarize older parts of the conversation without losing the original data.
+ - **Persistent Data Zone:** A special field to hold context that is always included in the system prompt, separate from the main conversation flow.
+
+ ## Creating a Discussion
+
+ The recommended way to create a discussion is using the `LollmsDiscussion.create_new()` class method.
+
+ ```python
+ from lollms_client import LollmsClient, LollmsDataManager, LollmsDiscussion
+
+ # For an in-memory discussion (lost when the app closes)
+ lc = LollmsClient(binding_name="ollama", model_name="llama3")
+ discussion = LollmsDiscussion.create_new(lollms_client=lc, id="my-temp-discussion")
+
+ # For a persistent, database-backed discussion
+ # This will create a 'discussions.db' file if it doesn't exist
+ db_manager = LollmsDataManager('sqlite:///discussions.db')
+ discussion_db = LollmsDiscussion.create_new(
+     lollms_client=lc,
+     db_manager=db_manager,
+     discussion_metadata={"title": "My First DB Chat"}
+ )
+ ```
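If you need to populate a discussion without triggering a generation (for example, when importing an existing transcript), the `add_message` method listed under "Other Methods" below can be used directly. A minimal sketch, assuming the documented `add_message(sender, content, ...)` signature; the sender labels are illustrative:

```python
# Replay an imported exchange into the freshly created discussion
discussion_db.add_message(sender="user", content="Hello!")
discussion_db.add_message(sender="assistant", content="Hi there! How can I help?")
discussion_db.commit()  # persist, since this discussion is DB-backed
```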
+
+ ## Core Properties
+
+ ### `data_zone`
+
+ The `data_zone` is a string property where you can store persistent information that should always be visible to the AI as part of its system instructions. This is incredibly useful for providing context that doesn't change, such as user profiles, complex instructions, or data that the AI should always reference.
+
+ The content of `data_zone` is automatically appended to the system prompt during context export. This is also where data from a personality's `data_source` is loaded before generation.
+
+ #### Example: Using the Data Zone
+
+ Imagine you are building a Python coding assistant. You can use the `data_zone` to hold the current state of a script the user is working on.
+
+ ```python
+ from lollms_client import LollmsClient, LollmsDiscussion
+
+ lc = LollmsClient(binding_name="ollama", model_name="codellama")
+ discussion = LollmsDiscussion.create_new(lollms_client=lc)
+
+ # Set the system prompt and initial data_zone
+ discussion.system_prompt = "You are a Python expert. Help the user with their code."
+ discussion.data_zone = "# Current script content:\n\nimport os\n\ndef list_files(path):\n    pass"
+
+ # The user asks for help
+ user_prompt = "Flesh out the list_files function to print all files in the given path."
+
+ # When you generate a response, the AI will see the system prompt AND the data_zone.
+ # The effective system prompt becomes:
+ # """
+ # You are a Python expert. Help the user with their code.
+ #
+ # --- data ---
+ # # Current script content:
+ #
+ # import os
+ #
+ # def list_files(path):
+ #     pass
+ # """
+ response = discussion.chat(user_prompt)
+ print(response['ai_message'].content)
+
+ # The calling application can then parse the AI's response and update the data_zone
+ # for the next turn.
+ updated_code = "# ... updated code from AI ...\nimport os\n\ndef list_files(path):\n    for f in os.listdir(path):\n        print(f)"
+ discussion.data_zone = updated_code
+ discussion.commit()  # If DB-backed
+ ```
+
+ ### Other Important Properties
+
+ - `id`: The unique identifier for the discussion.
+ - `system_prompt`: The main system prompt defining the AI's persona and core instructions.
+ - `metadata`: A dictionary for storing any custom metadata, like a title.
+ - `active_branch_id`: The ID of the message at the "tip" of the current conversation branch.
+ - `messages`: A list of all `LollmsMessage` objects in the discussion.
+
+ ## Main Methods
+
+ ### `chat()`
+
+ The `chat()` method is the primary way to interact with the discussion. It handles a full user-to-AI turn, including invoking the advanced agentic capabilities of the `LollmsClient`.
+
+ #### Personalities, Tools, and Data Sources
+
+ The `chat` method intelligently handles tool activation and data loading when a `LollmsPersonality` is provided. This allows personalities to be configured as self-contained agents with their own default tools and knowledge bases.
+
+ **Tool Activation (`use_mcps`):**
+
+ 1. **Personality has tools, `use_mcps` is not set:** The agent will use the tools defined in `personality.active_mcps`.
+ 2. **Personality has tools, `use_mcps` is also set:** The agent will use a *combination* of tools from both the personality and the `use_mcps` parameter for that specific turn. Duplicates are automatically handled, so you can augment a personality's default tools on the fly (see the sketch after this list).
+ 3. **Personality has no tools, `use_mcps` is set:** The agent will use only the tools specified in the `use_mcps` parameter.
+ 4. **Neither is set:** The agentic turn is not triggered (unless a data store is used), and a simple chat generation occurs.
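A minimal sketch of case 2, augmenting a personality's default tools for a single turn. This assumes `use_mcps` is accepted by `chat()` as a keyword argument, and reuses the `coder_personality` from the README example; `internet_search` is one of the `local_mcp` default tools:

```python
# The personality already declares active_mcps=["python_code_interpreter"].
# For this turn only, the agent also gets web search; it sees the union of both sets.
response = discussion.chat(
    user_message="Look up the latest stable Python version and print it from a script.",
    personality=coder_personality,
    use_mcps=["internet_search"],  # merged with the personality's tools, duplicates removed
)
```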
+
+ **Knowledge Loading (`data_source`):**
+
+ Before generation, the `chat` method checks for `personality.data_source`:
+
+ - **If it's a `str` (static data):** The string is appended to the `discussion.data_zone`, making it part of the system context for the current turn.
+ - **If it's a `Callable` (dynamic data):**
+     1. The AI first generates a query based on the current conversation.
+     2. The `chat` method calls your function with this query.
+     3. The returned string is appended to the `discussion.data_zone`.
+     4. The final response generation proceeds with this newly added context.
+
+ This makes it easy to create powerful, reusable agents; a sketch of the dynamic case follows below. For a complete, runnable example of building a **Python Coder Agent** that uses both `active_mcps` and a static `data_source`, **please see the "Putting It All Together" section in the main `README.md` file.**
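A minimal sketch of the dynamic case, assuming the callable receives the AI-generated query as a single string and returns a string, as described above (the retriever logic and personality fields are illustrative):

```python
from lollms_client import LollmsPersonality

def fetch_project_docs(query: str) -> str:
    """Illustrative retriever: match the query against a tiny local knowledge base."""
    knowledge = {
        "deployment": "Deployments run through CI on every tagged release.",
        "testing": "All modules are tested with pytest; coverage must stay above 90%.",
    }
    hits = [text for topic, text in knowledge.items() if topic in query.lower()]
    return "\n".join(hits) if hits else "No matching project documentation found."

researcher = LollmsPersonality(
    name="Project Researcher",
    author="docs",
    category="Research",
    description="Answers questions using retrieved project documentation.",
    system_prompt="Answer strictly from the documentation provided in your data zone.",
    data_source=fetch_project_docs,  # called with an AI-generated query before each response
)
```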
+
+ ### Other Methods
+
+ - `add_message(sender, content, ...)`: Adds a new message.
+ - `export(format_type, ...)`: Exports the discussion to a specific format (see the sketch below).
+ - `commit()`: Saves changes to the database (if DB-backed).
+ - `summarize_and_prune()`: Automatically handles context window limits.
+ - `count_discussion_tokens()`: Counts the tokens for a given format.
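For example, a minimal sketch of `export()`, using the format names listed under "Key Features" above (the exact return type depends on the chosen format):

```python
# Format the active branch for an OpenAI-compatible backend
openai_messages = discussion.export("openai_chat")

# Or produce a human-readable transcript
transcript = discussion.export("markdown")
print(transcript)
```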