lollms-client 0.29.0__py3-none-any.whl → 0.29.1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.

This version of lollms-client might be problematic.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: lollms_client
- Version: 0.29.0
+ Version: 0.29.1
  Summary: A client library for LoLLMs generate endpoint
  Author-email: ParisNeo <parisneoai@gmail.com>
  License: Apache Software License
@@ -51,8 +51,12 @@ Whether you're connecting to a remote LoLLMs server, an Ollama instance, the Ope
  * 🤖 **Function Calling with MCP:** Empowers LLMs to use external tools and functions through the Model Context Protocol (MCP), with built-in support for local Python tool execution via `local_mcp` binding and its default tools (file I/O, internet search, Python interpreter, image generation).
  * 🎭 **Personalities as Agents:** Personalities can now define their own set of required tools (MCPs) and have access to static or dynamic knowledge bases (`data_source`), turning them into self-contained, ready-to-use agents.
  * 🚀 **Streaming & Callbacks:** Efficiently handle real-time text generation with customizable callback functions, including during MCP interactions.
+ * 📑 **Sequential Summarization:** A `summarize` method to process and summarize texts that exceed the model's context window.
  * 📝 **Advanced Structured Content Generation:** Reliably generate structured JSON output from natural language prompts using the `generate_structured_content` helper method (see the sketch after this list).
- * 💬 **Discussion Management:** Utilities to easily manage and format conversation histories, including a persistent `data_zone` for context that is always present in the system prompt.
+ * 💬 **Advanced Discussion Management:** Robustly manage conversation histories with `LollmsDiscussion`, featuring branching, context exporting, and automatic pruning.
+ * 🧠 **Persistent Memory & Data Zones:** `LollmsDiscussion` now supports multiple, distinct data zones (`user_data_zone`, `discussion_data_zone`, `personality_data_zone`) and a long-term `memory` field. This allows for sophisticated context layering and state management.
+ * ✍️ **Automatic Memorization:** A new `memorize()` method allows the AI to analyze a conversation and extract key facts, appending them to the long-term `memory` for recall in future sessions.
+ * 📊 **Detailed Context Analysis:** The `get_context_status()` method now provides a rich, detailed breakdown of the prompt context, showing the content and token count for each individual component (system prompt, data zones, message history).
  * ⚙️ **Configuration Management:** Flexible ways to configure bindings and generation parameters.
  * 🧩 **Extensible:** Designed to easily incorporate new LLM backends and modality services, including custom MCP toolsets.
  * 📝 **High-Level Operations:** Includes convenience methods for complex tasks like sequential summarization and deep text analysis directly within `LollmsClient`.
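To make the structured-generation bullet concrete, here is a minimal sketch. The method name `generate_structured_content` comes from the feature list above, but the exact keyword arguments (`prompt`, `template`) and the parsed-dict return value are assumptions for illustration only:

```python
from lollms_client import LollmsClient

lc = LollmsClient(binding_name="ollama", model_name="llama3")

# Ask for JSON that follows a given shape; `template` is an assumed
# parameter name, so check the helper's docstring for the real one.
recipe = lc.generate_structured_content(
    prompt="Give me a simple pancake recipe.",
    template={
        "title": "string",
        "ingredients": ["string"],
        "steps": ["string"],
    },
)
print(recipe["title"])  # assumed to return the parsed JSON as a dict
```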
@@ -185,7 +189,7 @@ This script will define the personality, initialize the client, and run the agen

  ```python
  from pathlib import Path
- from lollms_client import LollmsClient, LollmsPersonality, LollmsDiscussion, MSG_TYPE
+ from lollms_client import LollmsClient, LollmsPersonality, LollmsDiscussion, MSG_TYPE, trace_exception
  from ascii_colors import ASCIIColors
  import json

@@ -276,11 +280,11 @@ except Exception as e:
  When you run `agent_example.py`, a sophisticated process unfolds:

  1. **Initialization:** The `LollmsDiscussion.chat()` method is called with the `coder_personality`.
- 2. **Knowledge Injection:** The `chat` method sees that `personality.data_source` is a string. It automatically takes the content of `coding_rules.txt` and injects it into the `discussion.data_zone`.
+ 2. **Knowledge Injection:** The `chat` method sees that `personality.data_source` is a string. It automatically takes the content of `coding_rules.txt` and injects it into the discussion's data zones (see the sketch after this list).
  3. **Tool Activation:** The method also sees `personality.active_mcps`. It enables the `python_code_interpreter` tool for this turn.
  4. **Context Assembly:** The `LollmsClient` assembles a rich prompt for the LLM that includes:
     * The personality's `system_prompt`.
-    * The content of `coding_rules.txt` (from the `data_zone`).
+    * The content of `coding_rules.txt` (from the data zones).
     * The list of available tools (including `python_code_interpreter`).
     * The user's request ("Write a function...").
  5. **Reason and Act:** The LLM, now fully briefed, reasons that it needs to use the `python_code_interpreter` tool. It formulates the Python code *according to the rules it was given*.
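For orientation, the composition described in steps 2 and 3 boils down to something like the following sketch. The field names (`active_mcps`, `data_source`, `system_prompt`) are taken from the walkthrough; the exact `LollmsPersonality` constructor keywords are illustrative and should be checked against the full `agent_example.py`:

```python
from pathlib import Path
from lollms_client import LollmsPersonality

# Sketch only: a personality that bundles rules (static data) with a tool.
coder_personality = LollmsPersonality(
    name="Python Coder",  # illustrative keyword
    system_prompt="You are a senior Python developer. Follow the provided coding rules.",
    active_mcps=["python_code_interpreter"],           # step 3: tool activation
    data_source=Path("coding_rules.txt").read_text(),  # step 2: knowledge injection
)
```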
@@ -290,6 +294,90 @@ When you run `agent_example.py`, a sophisticated process unfolds:

  This example showcases how `lollms-client` allows you to build powerful, knowledgeable, and capable agents by simply composing personalities with data and tools.

+ ### Building Stateful Agents with Memory and Data Zones
+
+ The latest version of `LollmsDiscussion` introduces powerful features for creating agents that can remember information across conversations. This is achieved through structured data zones and a new `memorize()` method.
+
+ Let's build a "Personal Assistant" agent that learns about the user over time.
+
+ ```python
+ from lollms_client import LollmsClient, LollmsDataManager, LollmsDiscussion, MSG_TYPE
+ from ascii_colors import ASCIIColors
+ import json
+
+ # --- 1. Setup a persistent database for our discussion ---
+ db_manager = LollmsDataManager('sqlite:///my_assistant.db')
+ lc = LollmsClient(binding_name="ollama", model_name="llama3")
+
+ # Try to load an existing discussion or create a new one
+ discussion_id = "user_assistant_chat_1"
+ discussion = db_manager.get_discussion(lc, discussion_id)
+ if not discussion:
+     ASCIIColors.yellow("Creating a new discussion...")
+     discussion = LollmsDiscussion.create_new(
+         lollms_client=lc,
+         db_manager=db_manager,
+         id=discussion_id,
+         autosave=True  # Important for persistence
+     )
+     # Let's preset some user data
+     discussion.user_data_zone = "User's Name: Alex\nUser's Goal: Learn about AI development."
+     discussion.commit()
+ else:
+     ASCIIColors.green("Loaded existing discussion.")
+
+
+ def run_chat_turn(prompt: str):
+     """Helper function to run a single chat turn and print details."""
+     ASCIIColors.cyan(f"\n> User: {prompt}")
+
+     # --- A. Check context status BEFORE the turn ---
+     ASCIIColors.magenta("\n--- Context Status (Before Generation) ---")
+     status = discussion.get_context_status()
+     print(f"Max Tokens: {status.get('max_tokens')}, Current Approx. Tokens: {status.get('current_tokens')}")
+     for zone, data in status.get('zones', {}).items():
+         print(f"  - Zone: {zone}, Tokens: {data['tokens']}")
+         # print(f"    Content: {data['content'][:80]}...")  # Uncomment for more detail
+     print("------------------------------------------")
+
+     # --- B. Run the chat ---
+     ASCIIColors.green("\n< Assistant:")
+     response = discussion.chat(
+         user_message=prompt,
+         streaming_callback=lambda chunk, type, **k: print(chunk, end="", flush=True) if type == MSG_TYPE.MSG_TYPE_CHUNK else None
+     )
+     print()  # Newline after stream
+
+     # --- C. Trigger memorization ---
+     ASCIIColors.yellow("\nTriggering memorization process...")
+     discussion.memorize()
+     discussion.commit()  # Save the new memory to the DB
+     ASCIIColors.yellow("Memorization complete.")
+
+ # --- Run a few turns ---
+ run_chat_turn("Hi there! Can you recommend a good Python library for building web APIs?")
+ run_chat_turn("That sounds great. By the way, my favorite programming language is Rust, I find its safety features amazing.")
+ run_chat_turn("What was my favorite programming language again?")
+
+ # --- Final Inspection ---
+ ASCIIColors.magenta("\n--- Final Context Status ---")
+ status = discussion.get_context_status()
+ print(f"Max Tokens: {status.get('max_tokens')}, Current Approx. Tokens: {status.get('current_tokens')}")
+ for zone, data in status.get('zones', {}).items():
+     print(f"  - Zone: {zone}, Tokens: {data['tokens']}")
+     print(f"    Content: {data['content'][:150].replace(chr(10), ' ')}...")
+ print("------------------------------------------")
+
+ ```
+
+ #### How it Works:
+
+ 1. **Persistence:** The `LollmsDataManager` and `autosave=True` ensure that all changes to the discussion, including the data zones and memory, are saved to the `my_assistant.db` file. When you re-run the script, it loads the previous state (see the sketch after this list).
+ 2. **`user_data_zone`:** We pre-filled this zone with basic user info. This context is provided to the AI in every turn.
+ 3. **`get_context_status()`:** Before each generation, we call this method to get a detailed breakdown of the prompt. This is excellent for debugging and understanding how the context window is being used.
+ 4. **`memorize()`:** After the user mentions their favorite language, `memorize()` is called. The LLM analyzes the last turn, identifies this new, important fact ("user's favorite language is Rust"), and appends it to the `discussion.memory` field.
+ 5. **Recall:** In the final turn, when asked to recall the favorite language, the AI has access to the `memory` zone and can correctly answer "Rust", even if that information had scrolled out of the recent conversation history. This demonstrates true long-term memory.
+
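A quick way to verify point 1 is to rehydrate the discussion in a fresh process; a minimal sketch reusing the same names and database file as the example above:

```python
# Fresh process: state is rebuilt from my_assistant.db, nothing is in RAM yet.
from lollms_client import LollmsClient, LollmsDataManager

lc = LollmsClient(binding_name="ollama", model_name="llama3")
db_manager = LollmsDataManager('sqlite:///my_assistant.db')

discussion = db_manager.get_discussion(lc, "user_assistant_chat_1")
print(discussion.user_data_zone)  # the preset "User's Name: Alex..." info
print(discussion.memory)          # facts appended by memorize(), e.g. the Rust preference
```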
  ## Documentation

  For more in-depth information, please refer to:
@@ -332,8 +420,8 @@ graph LR
  * **LLM Bindings**: These are plugins that allow `LollmsClient` to communicate with different LLM backends. You choose a binding (e.g., `"ollama"`, `"lollms"`, `"pythonllamacpp"`) when you initialize `LollmsClient` (see the sketch after this list).
  * **🔧 MCP Bindings**: Enable tool use and function calling. `lollms-client` includes `local_mcp` for executing Python tools. It discovers tools from a specified folder (or uses its default set), each defined by a `.py` script and a `.mcp.json` metadata file.
  * **Modality Bindings**: Similar to LLM bindings, but for services like Text-to-Speech (`tts`), Text-to-Image (`tti`), etc.
- * **High-Level Operations**: Methods directly on `LollmsClient` (e.g., `sequential_summarize`, `deep_analyze`, `generate_code`, `yes_no`) for performing complex, multi-step AI tasks.
- * **`LollmsDiscussion`**: Helps manage and format conversation histories for chat applications.
+ * **High-Level Operations**: Methods directly on `LollmsClient` (e.g., `sequential_summarize`, `summarize`, `deep_analyze`, `generate_code`, `yes_no`) for performing complex, multi-step AI tasks.
+ * **`LollmsDiscussion`**: Helps manage and format conversation histories. Now includes sophisticated context layering through multiple data zones (`user_data_zone`, `discussion_data_zone`, `personality_data_zone`) and a long-term `memory` field for stateful, multi-session interactions.
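A minimal sketch of the binding choice described in the first bullet; the binding names come from that bullet, and the model names are placeholders:

```python
from lollms_client import LollmsClient

# Same client API, different backends; the binding is chosen at init time.
lc = LollmsClient(binding_name="ollama", model_name="llama3")
# lc = LollmsClient(binding_name="lollms", model_name="default")             # remote LoLLMs server (model name illustrative)
# lc = LollmsClient(binding_name="pythonllamacpp", model_name="model.gguf")  # local GGUF file (path illustrative)

print(lc.generate_text("Say hello in one short sentence."))
```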

  ## Examples

@@ -611,6 +699,58 @@ response = lc.generate_text("Write a short story about a robot who discovers mus
  print(response)
  ```

+ ### Sequential Summarization for Long Texts
+
+ When dealing with a document, article, or transcript that is too large to fit into a model's context window, the `summarize` method is the solution. It intelligently chunks the text, summarizes each piece, and then synthesizes those summaries into a final, coherent output.
+
+ ```python
+ from lollms_client import LollmsClient, MSG_TYPE, LollmsPersonality
+ from ascii_colors import ASCIIColors
+
+ # --- A very long text (imagine this is 10,000+ tokens) ---
+ long_text = """
+ The history of computing is a fascinating journey from mechanical contraptions to the powerful devices we use today.
+ It began with devices like the abacus, used for arithmetic tasks. In the 19th century, Charles Babbage conceived
+ the Analytical Engine, a mechanical computer that was never fully built but laid the groundwork for modern computing.
+ ...
+ (many, many paragraphs later)
+ ...
+ Today, quantum computing promises to revolutionize the field once again, tackling problems currently intractable
+ for even the most powerful supercomputers. Researchers are exploring qubits and quantum entanglement to create
+ machines that will redefine what is computationally possible, impacting fields from medicine to materials science.
+ """ * 50  # Simulate a very long text
+
+ # --- Callback to see the process in action ---
+ def summary_callback(chunk: str, msg_type: MSG_TYPE, params: dict = None, **kwargs):
+     if msg_type in [MSG_TYPE.MSG_TYPE_STEP_START, MSG_TYPE.MSG_TYPE_STEP_END]:
+         ASCIIColors.yellow(f">> {chunk}")
+     elif msg_type == MSG_TYPE.MSG_TYPE_STEP:
+         ASCIIColors.cyan(f"   {chunk}")
+     return True
+
+ try:
+     lc = LollmsClient(binding_name="ollama", model_name="llama3")
+
+     # The contextual prompt guides the focus of the summary
+     context_prompt = "Summarize the text, focusing on the key technological milestones and their inventors."
+
+     ASCIIColors.blue("--- Starting Sequential Summarization ---")
+
+     final_summary = lc.summarize(
+         text_to_summarize=long_text,
+         contextual_prompt=context_prompt,
+         chunk_size_tokens=1000,  # Adjust based on your model's context size
+         overlap_tokens=200,
+         streaming_callback=summary_callback,
+         temperature=0.1  # Good for factual summarization
+     )
+
+     ASCIIColors.blue("\n--- Final Comprehensive Summary ---")
+     ASCIIColors.green(final_summary)
+
+ except Exception as e:
+     print(f"An error occurred: {e}")
+ ```
  ## Contributing

  Contributions are welcome! Whether it's bug reports, feature suggestions, documentation improvements, or new bindings, please feel free to open an issue or submit a pull request on our [GitHub repository](https://github.com/ParisNeo/lollms_client).
@@ -627,7 +767,6 @@ For a list of changes and updates, please refer to the [CHANGELOG.md](CHANGELOG.
  ---
  ### Phase 2: Update `docs/md/lollms_discussion.md`

- `[UPDATE] docs/md/lollms_discussion.md`
  ```markdown
  # LollmsDiscussion Class

@@ -646,7 +785,7 @@ A `LollmsDiscussion` can be either **in-memory** or **database-backed**, offerin
  - **Branching:** The conversation is a tree, not a simple list. This allows for exploring different conversational paths from any point. You can regenerate an AI response, and it will create a new branch.
  - **Context Exporting:** The `export()` method formats the conversation history for various LLM backends (`openai_chat`, `ollama_chat`, `lollms_text`, `markdown`), ensuring compatibility (see the sketch after this list).
  - **Automatic Pruning:** To prevent exceeding the model's context window, it can automatically summarize older parts of the conversation without losing the original data.
- - **Persistent Data Zone:** A special field to hold context that is always included in the system prompt, separate from the main conversation flow.
+ - **Sophisticated Context Layering:** Manage conversation state with multiple, distinct data zones (`user_data_zone`, `discussion_data_zone`, `personality_data_zone`) and a long-term `memory` field, allowing for rich and persistent context.
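To illustrate the exporting bullet above, a minimal sketch; `export()` and the format names come from that bullet, while the return shapes shown in the comments are assumptions:

```python
from lollms_client import LollmsClient, LollmsDiscussion

lc = LollmsClient(binding_name="ollama", model_name="llama3")
discussion = LollmsDiscussion.create_new(lollms_client=lc)
discussion.chat("Hello!")  # add a little history to export

# Format names from the feature list; return shapes are assumptions.
openai_history = discussion.export("openai_chat")  # e.g. [{"role": ..., "content": ...}, ...]
transcript = discussion.export("markdown")         # e.g. a printable transcript string
print(transcript)
```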

  ## Creating a Discussion

@@ -671,56 +810,61 @@ discussion_db = LollmsDiscussion.create_new(

  ## Core Properties

- ### `data_zone`
+ ### Data and Memory Zones

- The `data_zone` is a string property where you can store persistent information that should always be visible to the AI as part of its system instructions. This is incredibly useful for providing context that doesn't change, such as user profiles, complex instructions, or data that the AI should always reference.
+ `LollmsDiscussion` moves beyond a single `data_zone` to a more structured system of context layers. These string properties allow you to inject specific, persistent information into the AI's system prompt, separate from the main conversational flow. The content of all non-empty zones is automatically formatted and included in the prompt.

- The content of `data_zone` is automatically appended to the system prompt during context export. This is also where data from a personality's `data_source` is loaded before generation.
+ #### `system_prompt`
+ The main instruction set for the AI's persona and core task. It's the foundation of the prompt.
+ - **Purpose:** Defines who the AI is and what its primary goal is.
+ - **Example:** `"You are a helpful and friendly assistant."`

- #### Example: Using the Data Zone
+ #### `memory`
+ A special zone for storing long-term, cross-discussion information about the user or topics. It is designed to be built up over time.
+ - **Purpose:** To give the AI a persistent memory that survives across different chat sessions.
+ - **Example:** `"User's name is Alex.\nUser's favorite programming language is Rust."`

- Imagine you are building a Python coding assistant. You can use the `data_zone` to hold the current state of a script the user is working on.
+ #### `user_data_zone`
+ Holds information specific to the current user that might be relevant for the session.
+ - **Purpose:** Storing user preferences, profile details, or session-specific goals.
+ - **Example:** `"Current project: API development.\nUser is a beginner in Python."`

- ```python
- from lollms_client import LollmsClient, LollmsDiscussion
+ #### `discussion_data_zone`
+ Contains context relevant only to the current discussion.
+ - **Purpose:** Holding summaries, state information, or data relevant to the current conversation topic that needs to be kept in front of the AI.
+ - **Example:** `"The user has already tried libraries A and B and found them too complex."`

- lc = LollmsClient(binding_name="ollama", model_name="codellama")
- discussion = LollmsDiscussion.create_new(lollms_client=lc)
+ #### `personality_data_zone`
+ This is where static or dynamic knowledge from a `LollmsPersonality`'s `data_source` is loaded.
+ - **Purpose:** To provide personalities with their own built-in knowledge bases or rulesets.
+ - **Example:** `"Rule 1: All code must be documented.\nRule 2: Use type hints."`

- # Set the system prompt and initial data_zone
- discussion.system_prompt = "You are a Python expert. Help the user with their code."
- discussion.data_zone = "# Current script content:\n\nimport os\n\ndef list_files(path):\n    pass"
+ #### Example: How Zones are Combined

- # The user asks for help
- user_prompt = "Flesh out the list_files function to print all files in the given path."
+ The `export()` method intelligently combines these zones. If all zones were filled, the effective system prompt would look something like this:

- # When you generate a response, the AI will see the system prompt AND the data_zone
- # The effective system prompt becomes:
- # """
- # You are a Python expert. Help the user with their code.
- #
- # --- data ---
- # # Current script content:
- #
- # import os
- #
- # def list_files(path):
- #     pass
- # """
- response = discussion.chat(user_prompt)
- print(response['ai_message'].content)
-
- # The calling application can then parse the AI's response and update the data_zone
- # for the next turn.
- updated_code = "# ... updated code from AI ...\nimport os\n\ndef list_files(path):\n    for f in os.listdir(path):\n        print(f)"
- discussion.data_zone = updated_code
- discussion.commit()  # If DB-backed
  ```
+ !@>system:
+ You are a helpful and friendly assistant.
+
+ -- Memory --
+ User's name is Alex.
+ User's favorite programming language is Rust.

+ -- User Data Zone --
+ Current project: API development.
+ User is a beginner in Python.
+
+ -- Discussion Data Zone --
+ The user has already tried libraries A and B and found them too complex.
+
+ -- Personality Data Zone --
+ Rule 1: All code must be documented.
+ Rule 2: Use type hints.
+ ```
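The combined prompt above can be produced by populating each property documented in this section; a minimal sketch (the zone properties are documented here, `create_new` and `chat` appear elsewhere in these docs, and direct assignment to `memory` is shown for illustration):

```python
from lollms_client import LollmsClient, LollmsDiscussion

lc = LollmsClient(binding_name="ollama", model_name="llama3")
discussion = LollmsDiscussion.create_new(lollms_client=lc)

# Fill each layer documented above.
discussion.system_prompt = "You are a helpful and friendly assistant."
discussion.memory = "User's name is Alex.\nUser's favorite programming language is Rust."
discussion.user_data_zone = "Current project: API development.\nUser is a beginner in Python."
discussion.discussion_data_zone = "The user has already tried libraries A and B and found them too complex."
discussion.personality_data_zone = "Rule 1: All code must be documented.\nRule 2: Use type hints."

# On the next turn, every non-empty zone is folded into the system prompt.
discussion.chat("Which web framework should I start with?")
```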
  ### Other Important Properties

  - `id`: The unique identifier for the discussion.
- - `system_prompt`: The main system prompt defining the AI's persona and core instructions.
  - `metadata`: A dictionary for storing any custom metadata, like a title.
  - `active_branch_id`: The ID of the message at the "tip" of the current conversation branch.
  - `messages`: A list of all `LollmsMessage` objects in the discussion.
@@ -745,15 +889,72 @@ The `chat` method intelligently handles tool activation and data loading when a

  Before generation, the `chat` method checks for `personality.data_source`:

- - **If it's a `str` (static data):** The string is appended to the `discussion.data_zone`, making it part of the system context for the current turn.
+ - **If it's a `str` (static data):** The string is loaded into the `discussion.personality_data_zone`, making it part of the system context for the current turn.
  - **If it's a `Callable` (dynamic data):**
    1. The AI first generates a query based on the current conversation.
    2. The `chat` method calls your function with this query.
-   3. The returned string is appended to the `discussion.data_zone`.
+   3. The returned string is loaded into the `discussion.personality_data_zone`.
    4. The final response generation proceeds with this newly added context.

  This makes it easy to create powerful, reusable agents. For a complete, runnable example of building a **Python Coder Agent** that uses both `active_mcps` and a static `data_source`, **please see the "Putting It All Together" section in the main `README.md` file.**
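The `Callable` flow lends itself to retrieval-style hooks. Below is a minimal sketch assuming only what the four steps describe: a function that receives the AI-generated query string and returns a context string. The `LollmsPersonality` keywords and the way the personality is passed to `chat()` are illustrative, not confirmed signatures:

```python
from lollms_client import LollmsClient, LollmsDiscussion, LollmsPersonality

def lookup_docs(query: str) -> str:
    # Step 2: called with the AI-generated query. Swap in a real vector-store
    # or keyword search; a canned snippet keeps the sketch self-contained.
    return f"Documentation note for '{query}': LollmsDiscussion.chat() runs one turn."

helper = LollmsPersonality(
    name="Docs Helper",  # illustrative keyword arguments
    system_prompt="Answer questions using the retrieved documentation.",
    data_source=lookup_docs,  # a Callable triggers the dynamic flow above
)

lc = LollmsClient(binding_name="ollama", model_name="llama3")
discussion = LollmsDiscussion.create_new(lollms_client=lc)
# Steps 1-4 happen inside chat(): query generation, the lookup_docs() call,
# injection into personality_data_zone, then the final answer.
discussion.chat("How do I run a chat turn?", personality=helper)  # kwarg name assumed
```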

+ ### New Methods for State and Context Management
+
+ #### `memorize()`
+ This method empowers the AI to build its own long-term memory. It analyzes the current conversation, extracts key facts or preferences, and appends them to the `memory` data zone.
+
+ - **How it works:** It uses the LLM itself to summarize the most important, long-term takeaways from the recent conversation.
+ - **Use Case:** Perfect for creating assistants that learn about the user over time, remembering their name, preferences, or past projects without the user needing to repeat themselves.
+
+ ```python
+ # User has just said: "My company is called 'Innovatech'."
+ discussion.chat("My company is called 'Innovatech'.")
+
+ # Now, trigger memorization
+ discussion.memorize()
+ discussion.commit()  # Save the updated memory to the database
+
+ # The discussion.memory field might now contain:
+ # "... previous memory ...
+ #
+ # --- Memory entry from 2024-06-27 10:30:00 UTC ---
+ # - User's company is named 'Innovatech'."
+ ```
+
+ #### `get_context_status()`
+ Provides a detailed, real-time breakdown of the current prompt context, showing exactly what will be sent to the model and how many tokens each part occupies.
+
+ - **Return Value:** A dictionary containing the `max_tokens`, `current_tokens`, and a `zones` dictionary with the content and token count for each component.
+ - **Use Case:** Essential for debugging context issues, understanding token usage, and visualizing how different data zones contribute to the final prompt.
+
+ ```python
+ import json
+
+ status = discussion.get_context_status()
+ print(json.dumps(status, indent=2))
+
+ # Expected Output Structure:
+ # {
+ #   "max_tokens": 8192,
+ #   "current_tokens": 521,
+ #   "zones": {
+ #     "system_prompt": {
+ #       "content": "You are a helpful assistant.",
+ #       "tokens": 12
+ #     },
+ #     "memory": {
+ #       "content": "User's favorite color is blue.",
+ #       "tokens": 15
+ #     },
+ #     "message_history": {
+ #       "content": "!@>user:\nHi there!\n!@>assistant:\nHello! How can I help?\n",
+ #       "tokens": 494,
+ #       "message_count": 2
+ #     }
+ #   }
+ # }
+ ```
+
  ### Other Methods
  - `add_message(sender, content, ...)`: Adds a new message.
  - `export(format_type, ...)`: Exports the discussion to a specific format.
@@ -12,7 +12,7 @@ examples/text_2_audio.py,sha256=MfL4AH_NNwl6m0I0ywl4BXRZJ0b9Y_9fRqDIe6O-Sbw,3523
  examples/text_2_image.py,sha256=CwRdB9K-38Ghsezg3B7_daPtFtsvDcV2hovaMRUjueg,6495
  examples/text_2_image_diffusers.py,sha256=oIVS--ovy3FQ8qwynY3bnoDRkQPDzVXwyozbZnrQ4fc,14398
  examples/text_and_image_2_audio.py,sha256=QLvSsLff8VZZa7k7K1EFGlPpQWZy07zM4Fnli5btAl0,2074
- examples/text_gen.py,sha256=pqQz0y_jZZCdxE5u_8d21EYPciX-UZ35zrlDxLGDP5E,1021
+ examples/text_gen.py,sha256=_feyK65OQKG5j3htEXXFvC1bYTamZSOhkrz4yOn9PK4,1053
  examples/text_gen_system_prompt.py,sha256=jRQeGe1IVu_zRHX09CFiDYi7WrK9Zd5FlMqC_gnVH-g,1018
  examples/article_summary/article_summary.py,sha256=CR8mCBNcZEVCR-q34uOmrJyMlG-xk4HkMbsV-TOZEnk,1978
  examples/console_discussion/console_app.py,sha256=56ar4EKczbVTXMhTLUet5mj5WCY77pm_egZnKGQeXjs,11509
@@ -29,10 +29,10 @@ examples/mcp_examples/openai_mcp.py,sha256=7IEnPGPXZgYZyiES_VaUbQ6viQjenpcUxGiHE
  examples/mcp_examples/run_remote_mcp_example_v2.py,sha256=bbNn93NO_lKcFzfIsdvJJijGx2ePFTYfknofqZxMuRM,14626
  examples/mcp_examples/run_standard_mcp_example.py,sha256=GSZpaACPf3mDPsjA8esBQVUsIi7owI39ca5avsmvCxA,9419
  examples/test_local_models/local_chat.py,sha256=slakja2zaHOEAUsn2tn_VmI4kLx6luLBrPqAeaNsix8,456
- lollms_client/__init__.py,sha256=DiIvlC4e7dCOcFGBxtAmgUNIpbaIMRTiPDhEuadweGg,1147
+ lollms_client/__init__.py,sha256=K9y45gf5gbAd4WDNX6Lxmax_hJCXmRvZLMSgaL-XVys,1147
  lollms_client/lollms_config.py,sha256=goEseDwDxYJf3WkYJ4IrLXwg3Tfw73CXV2Avg45M_hE,21876
- lollms_client/lollms_core.py,sha256=U-o16h7BZT7H1tu-aZNM-14H-OuObPG6qpLsikU1Jw8,169080
- lollms_client/lollms_discussion.py,sha256=RVGeFyPKeLpTJEUjx2IdWFYg-d8zjPhWLQGnFFiKNvQ,56138
+ lollms_client/lollms_core.py,sha256=KfeOs-xt3QZCQUsHCsPi2Zh_ZpfmEmMhJLR2sMiP27Y,170420
+ lollms_client/lollms_discussion.py,sha256=UzDYZmBqV2cZISumg3r6HMj3Pmc2uOU4ZQ42eB4Uovs,67363
  lollms_client/lollms_js_analyzer.py,sha256=01zUvuO2F_lnUe_0NLxe1MF5aHE1hO8RZi48mNPv-aw,8361
  lollms_client/lollms_llm_binding.py,sha256=cU0cmxZfIrp-ofutbRLx7W_59dxzPXpU-vO98MqVnQA,14788
  lollms_client/lollms_mcp_binding.py,sha256=0rK9HQCBEGryNc8ApBmtOlhKE1Yfn7X7xIQssXxS2Zc,8933
@@ -53,9 +53,9 @@ lollms_client/llm_bindings/grok/__init__.py,sha256=5tIf3348RgAEaSp6FdG-LM9N8R7aR
  lollms_client/llm_bindings/groq/__init__.py,sha256=zyWKM78qHwSt5g0Bb8Njj7Jy8CYuLMyplx2maOKFFpg,12218
  lollms_client/llm_bindings/hugging_face_inference_api/__init__.py,sha256=PxgeRqT8dpa9GZoXwtSncy9AUgAN2cDKrvp_nbaWq0E,14027
  lollms_client/llm_bindings/litellm/__init__.py,sha256=pNkwyRPeENvTM4CDh6Pj3kQfxHfhX2pvXhGJDjKjp30,12340
- lollms_client/llm_bindings/llamacpp/__init__.py,sha256=Qj5RvsgPeHGNfb5AEwZSzFwAp4BOWjyxmm9qBNtstrc,63716
- lollms_client/llm_bindings/lollms/__init__.py,sha256=7GNv-YyX4YyaR1EP2banQEHnX47QpUyNZU6toAiX1ak,17854
- lollms_client/llm_bindings/lollms_chat/__init__.py,sha256=zCCrwAW-lOjAi7nUbvHii7E0kqJiMC0v4ENdbxTUsOs,24114
+ lollms_client/llm_bindings/llamacpp/__init__.py,sha256=XcccmAASfNwcOwwjKjUJAaN5NDHthU9CEroDPqlG6uM,63778
+ lollms_client/llm_bindings/lollms/__init__.py,sha256=scGHEKzlGX5fw2XwefVicsf28GrwgN3wU5nl4EPJ_Sk,24424
+ lollms_client/llm_bindings/lollms_webui/__init__.py,sha256=Thoq3PJR2e03Y2Kd_FBb-DULJK0zT5-2ID1YIJLcPlw,17864
  lollms_client/llm_bindings/mistral/__init__.py,sha256=624Gr462yBh52ttHFOapKgJOn8zZ1vZcTEcC3i4FYt8,12750
  lollms_client/llm_bindings/ollama/__init__.py,sha256=1yxw_JXye_8l1YaEznK5QhOZmLV_opY-FkYnwy530eo,36109
  lollms_client/llm_bindings/open_router/__init__.py,sha256=v91BpNcuQCbbA6r82gbgMP8UYhSrJUMOf4UtOzEo18Q,13235
@@ -92,8 +92,8 @@ lollms_client/tts_bindings/piper_tts/__init__.py,sha256=0IEWG4zH3_sOkSb9WbZzkeV5
  lollms_client/tts_bindings/xtts/__init__.py,sha256=FgcdUH06X6ZR806WQe5ixaYx0QoxtAcOgYo87a2qxYc,18266
  lollms_client/ttv_bindings/__init__.py,sha256=UZ8o2izQOJLQgtZ1D1cXoNST7rzqW22rL2Vufc7ddRc,3141
  lollms_client/ttv_bindings/lollms/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
- lollms_client-0.29.0.dist-info/licenses/LICENSE,sha256=HrhfyXIkWY2tGFK11kg7vPCqhgh5DcxleloqdhrpyMY,11558
- lollms_client-0.29.0.dist-info/METADATA,sha256=pykrb85WMir4magsF-_qpPvAJ0H9uKAR_iPr6lX7lXw,33456
- lollms_client-0.29.0.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
- lollms_client-0.29.0.dist-info/top_level.txt,sha256=NI_W8S4OYZvJjb0QWMZMSIpOrYzpqwPGYaklhyWKH2w,23
- lollms_client-0.29.0.dist-info/RECORD,,
+ lollms_client-0.29.1.dist-info/licenses/LICENSE,sha256=HrhfyXIkWY2tGFK11kg7vPCqhgh5DcxleloqdhrpyMY,11558
+ lollms_client-0.29.1.dist-info/METADATA,sha256=1-FaAIFYCrxPe780f6-SfZcmN3GTTjYggjW5qrZy3xU,44176
+ lollms_client-0.29.1.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ lollms_client-0.29.1.dist-info/top_level.txt,sha256=NI_W8S4OYZvJjb0QWMZMSIpOrYzpqwPGYaklhyWKH2w,23
+ lollms_client-0.29.1.dist-info/RECORD,,