lollms-client 0.25.6__py3-none-any.whl → 0.26.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.


@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: lollms_client
- Version: 0.25.6
+ Version: 0.26.0
  Summary: A client library for LoLLMs generate endpoint
  Author-email: ParisNeo <parisneoai@gmail.com>
  License: Apache Software License
@@ -327,6 +327,270 @@ The `examples/` directory in this repository contains a rich set of scripts demo

  Explore these examples to see `lollms-client` in action!

+ ## Using LoLLMs Client with Different Bindings
+
+ `lollms-client` supports a wide range of LLM backends through its binding system. This section provides practical examples of how to initialize `LollmsClient` for each of the major supported bindings.
+
+ ### A Note on Configuration
+
+ The recommended way to provide credentials and other binding-specific settings is through the `llm_binding_config` dictionary during `LollmsClient` initialization. While many bindings can fall back to reading environment variables (e.g., `OPENAI_API_KEY`), passing them explicitly in the config is clearer and less error-prone.
+
+ ```python
+ # General configuration pattern
+ lc = LollmsClient(
+     binding_name="your_binding_name",
+     model_name="a_model_name",
+     llm_binding_config={
+         "specific_api_key_param": "your_api_key_here",
+         "another_specific_param": "some_value"
+     }
+ )
+ ```
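To make the explicit-key-first, environment-second behavior described above concrete, here is a minimal sketch. The helper and the parameter name `service_key` are assumptions for illustration, not part of `lollms-client`; check your binding's documentation for its actual key parameter.

```python
import os

# Hypothetical helper, not part of lollms-client: resolve an API key,
# preferring an explicit value and falling back to an environment variable.
def build_binding_config(key_param: str, explicit_key=None, env_var: str = "OPENAI_API_KEY"):
    key = explicit_key or os.environ.get(env_var)
    if key is None:
        raise ValueError(f"No API key provided and {env_var} is not set")
    return {key_param: key}

# An explicitly passed key always wins over the environment.
config = build_binding_config("service_key", explicit_key="sk-demo")
print(config)  # {'service_key': 'sk-demo'}
```

The returned dictionary can then be passed as `llm_binding_config`.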
+
+ ---
+
+ ### 1. Local Bindings
+
+ These bindings run models directly on your local machine, giving you full control and privacy.
+
+ #### **Ollama**
+
+ The `ollama` binding connects to a running Ollama server instance on your machine or network.
+
+ **Prerequisites:**
+ * [Ollama installed and running](https://ollama.com/).
+ * Models pulled, e.g., `ollama pull llama3`.
+
+ **Usage:**
+
+ ```python
+ from lollms_client import LollmsClient
+
+ # Configuration for a local Ollama server
+ lc = LollmsClient(
+     binding_name="ollama",
+     model_name="llama3",                   # Or any other model you have pulled
+     host_address="http://localhost:11434"  # Default Ollama address
+ )
+
+ # Now you can use lc.generate_text(), lc.chat(), etc.
+ response = lc.generate_text("Why is the sky blue?")
+ print(response)
+ ```
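If client creation or generation fails, the usual culprit is that the Ollama server is not running. A quick probe of Ollama's `/api/tags` endpoint, which lists locally pulled models, can confirm the server is reachable and show valid `model_name` values. This helper is a stdlib-only sketch, not part of `lollms-client`:

```python
import json
import urllib.error
import urllib.request

def list_ollama_models(host: str = "http://localhost:11434"):
    # Ask the Ollama server for its locally pulled models via /api/tags.
    # Returns an empty list if the server is unreachable.
    try:
        with urllib.request.urlopen(f"{host}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return []
```

An empty result means either no models are pulled yet or the server is down; `ollama pull llama3` and checking that the Ollama service is running are the first things to try.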
+
+ #### **PythonLlamaCpp (Local GGUF Models)**
+
+ The `pythonllamacpp` binding loads and runs GGUF model files directly using the powerful `llama-cpp-python` library. This is ideal for high-performance, local inference on CPU or GPU.
+
+ **Prerequisites:**
+ * A GGUF model file downloaded to your machine.
+ * `llama-cpp-python` installed. For GPU support, it must be compiled with the correct flags (e.g., `CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python`).
+
+ **Usage:**
+
+ ```python
+ from lollms_client import LollmsClient
+
+ # --- Configuration for Llama.cpp ---
+ # Path to your GGUF model file
+ MODEL_PATH = "/path/to/your/model.gguf"
+
+ # Binding-specific configuration
+ LLAMACPP_CONFIG = {
+     "n_gpu_layers": -1,      # -1 offloads all layers to GPU, 0 runs on CPU
+     "n_ctx": 4096,           # Context size
+     "seed": -1,              # -1 for a random seed
+     "chat_format": "chatml"  # Or another format like 'llama-2'
+ }
+
+ try:
+     lc = LollmsClient(
+         binding_name="pythonllamacpp",
+         model_name=MODEL_PATH,  # For this binding, model_name is the file path
+         llm_binding_config=LLAMACPP_CONFIG
+     )
+
+     response = lc.generate_text("Write a recipe for a great day.")
+     print(response)
+
+ except Exception as e:
+     print(f"Error initializing Llama.cpp binding: {e}")
+     print("Please ensure llama-cpp-python is installed and the model path is correct.")
+ ```
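A wrong `MODEL_PATH` is the most common cause of the exception handled above, and the error only surfaces after a slow load attempt. A cheap pre-flight check (a hypothetical helper, independent of the binding) catches bad paths up front:

```python
from pathlib import Path

def validate_gguf_path(path_str: str) -> Path:
    # Pre-flight check before handing the path to the binding:
    # the file must carry the .gguf extension and actually exist.
    path = Path(path_str)
    if path.suffix.lower() != ".gguf":
        raise ValueError(f"Not a GGUF file: {path}")
    if not path.is_file():
        raise FileNotFoundError(f"Model file not found: {path}")
    return path
```

Call it on `MODEL_PATH` before constructing `LollmsClient` to fail fast with a clear message.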
+
+ ---
+
+ ### 2. Cloud Service Bindings
+
+ These bindings connect to hosted LLM APIs from major providers.
+
+ #### **OpenAI**
+
+ Connects to the official OpenAI API to use models like GPT-4o, GPT-4, and GPT-3.5.
+
+ **Prerequisites:**
+ * An OpenAI API key.
+
+ **Usage:**
+
+ ```python
+ from lollms_client import LollmsClient
+
+ OPENAI_CONFIG = {
+     "service_key": "your_openai_api_key_here"  # sk-...
+ }
+
+ lc = LollmsClient(
+     binding_name="openai",
+     model_name="gpt-4o",
+     llm_binding_config=OPENAI_CONFIG
+ )
+
+ response = lc.generate_text("What is the difference between AI and machine learning?")
+ print(response)
+ ```
+
+ #### **Google Gemini**
+
+ Connects to Google's Gemini family of models via the Google AI Studio API.
+
+ **Prerequisites:**
+ * A Google AI Studio API key.
+
+ **Usage:**
+
+ ```python
+ from lollms_client import LollmsClient
+
+ GEMINI_CONFIG = {
+     "service_key": "your_google_api_key_here"
+ }
+
+ lc = LollmsClient(
+     binding_name="gemini",
+     model_name="gemini-1.5-pro-latest",
+     llm_binding_config=GEMINI_CONFIG
+ )
+
+ response = lc.generate_text("Summarize the plot of 'Dune' in three sentences.")
+ print(response)
+ ```
+
+ #### **Anthropic Claude**
+
+ Connects to Anthropic's API to use the Claude family of models, including Claude 3.5 Sonnet, Opus, and Haiku.
+
+ **Prerequisites:**
+ * An Anthropic API key.
+
+ **Usage:**
+
+ ```python
+ from lollms_client import LollmsClient
+
+ CLAUDE_CONFIG = {
+     "service_key": "your_anthropic_api_key_here"
+ }
+
+ lc = LollmsClient(
+     binding_name="claude",
+     model_name="claude-3-5-sonnet-20240620",
+     llm_binding_config=CLAUDE_CONFIG
+ )
+
+ response = lc.generate_text("What are the core principles of constitutional AI?")
+ print(response)
+ ```
+
+ ---
+
+ ### 3. API Aggregator Bindings
+
+ These bindings connect to services that provide access to many different models through a single API.
+
+ #### **OpenRouter**
+
+ OpenRouter provides a unified, OpenAI-compatible interface to access models from dozens of providers (Google, Anthropic, Mistral, Groq, etc.) with one API key.
+
+ **Prerequisites:**
+ * An OpenRouter API key (starts with `sk-or-...`).
+
+ **Usage:**
+ Model names must be specified in the format `provider/model-name`.
+
+ ```python
+ from lollms_client import LollmsClient
+
+ OPENROUTER_CONFIG = {
+     "open_router_api_key": "your_openrouter_api_key_here"
+ }
+
+ # Example using a Claude model through OpenRouter
+ lc = LollmsClient(
+     binding_name="open_router",
+     model_name="anthropic/claude-3-haiku-20240307",
+     llm_binding_config=OPENROUTER_CONFIG
+ )
+
+ response = lc.generate_text("Explain what an API aggregator is, as if to a beginner.")
+ print(response)
+ ```
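Because a model name without the `provider/model-name` shape will be rejected server-side, a quick local sanity check (a hypothetical helper, not part of `lollms-client`) can catch typos before any request is sent:

```python
def split_openrouter_model_id(model_name: str):
    # OpenRouter model ids look like "provider/model-name",
    # e.g. "anthropic/claude-3-haiku-20240307".
    provider, sep, model = model_name.partition("/")
    if not sep or not provider or not model:
        raise ValueError(f"Expected 'provider/model-name', got {model_name!r}")
    return provider, model

print(split_openrouter_model_id("anthropic/claude-3-haiku-20240307"))
# ('anthropic', 'claude-3-haiku-20240307')
```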
+
+ #### **Groq**
+
+ Groq is a direct provider rather than an aggregator, but it earns its place in this section through sheer speed: it runs popular open-source models on custom LPU hardware for exceptionally fast inference.
+
+ **Prerequisites:**
+ * A Groq API key.
+
+ **Usage:**
+
+ ```python
+ from lollms_client import LollmsClient
+
+ GROQ_CONFIG = {
+     "groq_api_key": "your_groq_api_key_here"
+ }
+
+ lc = LollmsClient(
+     binding_name="groq",
+     model_name="llama3-8b-8192",
+     llm_binding_config=GROQ_CONFIG
+ )
+
+ response = lc.generate_text("Write a 3-line poem about incredible speed.")
+ print(response)
+ ```
+
+ #### **Hugging Face Inference API**
+
+ This connects to the serverless Hugging Face Inference API, allowing experimentation with thousands of open-source models without local hardware.
+
+ **Note:** This API can have "cold starts," so the first request might be slow.
+
+ **Prerequisites:**
+ * A Hugging Face User Access Token (starts with `hf_...`).
+
+ **Usage:**
+
+ ```python
+ from lollms_client import LollmsClient
+
+ HF_CONFIG = {
+     "hf_api_key": "your_hugging_face_token_here"
+ }
+
+ lc = LollmsClient(
+     binding_name="hugging_face_inference_api",
+     model_name="google/gemma-1.1-7b-it",
+     llm_binding_config=HF_CONFIG
+ )
+
+ response = lc.generate_text("Write a short story about a robot who discovers music.")
+ print(response)
+ ```
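One way to soften the cold starts mentioned above is a small retry-with-backoff wrapper around the generation call. This is a generic sketch, not a `lollms-client` feature:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 2.0):
    # Retry a callable with exponential backoff (base_delay, 2x, 4x, ...).
    # Useful when a serverless endpoint's first call fails or stalls
    # while the model is being loaded.
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** i))

# Usage sketch (assumes an initialized client `lc` as above):
# response = with_retries(lambda: lc.generate_text("Hello!"))
```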
+
  ## Contributing

  Contributions are welcome! Whether it's bug reports, feature suggestions, documentation improvements, or new bindings, please feel free to open an issue or submit a pull request on our [GitHub repository](https://github.com/ParisNeo/lollms_client).
@@ -26,10 +26,10 @@ examples/mcp_examples/openai_mcp.py,sha256=7IEnPGPXZgYZyiES_VaUbQ6viQjenpcUxGiHE
  examples/mcp_examples/run_remote_mcp_example_v2.py,sha256=bbNn93NO_lKcFzfIsdvJJijGx2ePFTYfknofqZxMuRM,14626
  examples/mcp_examples/run_standard_mcp_example.py,sha256=GSZpaACPf3mDPsjA8esBQVUsIi7owI39ca5avsmvCxA,9419
  examples/test_local_models/local_chat.py,sha256=slakja2zaHOEAUsn2tn_VmI4kLx6luLBrPqAeaNsix8,456
- lollms_client/__init__.py,sha256=pXsP6DSu8Afm4PZN5PmsBipV-ZOKCS81s7bngvYCcgU,1047
+ lollms_client/__init__.py,sha256=1AekKYnWV53viRAkn1ZzdHEk8msiWHoiMZUENt0IAdI,1047
  lollms_client/lollms_config.py,sha256=goEseDwDxYJf3WkYJ4IrLXwg3Tfw73CXV2Avg45M_hE,21876
  lollms_client/lollms_core.py,sha256=TujAapwba9gDe6EEY4olVSP-lZrLftY4LOSex-D-IPs,159610
- lollms_client/lollms_discussion.py,sha256=By_dN3GJ7AtInkOUdcrXuVhKliBirKd3ZxFkaRmt1yM,48843
+ lollms_client/lollms_discussion.py,sha256=P_ecqhWhiFIEnvyZqX_gMCnhi2BzhP66aUxckN9JP40,48660
  lollms_client/lollms_js_analyzer.py,sha256=01zUvuO2F_lnUe_0NLxe1MF5aHE1hO8RZi48mNPv-aw,8361
  lollms_client/lollms_llm_binding.py,sha256=Kpzhs5Jx8eAlaaUacYnKV7qIq2wbME5lOEtKSfJKbpg,12161
  lollms_client/lollms_mcp_binding.py,sha256=0rK9HQCBEGryNc8ApBmtOlhKE1Yfn7X7xIQssXxS2Zc,8933
@@ -43,11 +43,17 @@ lollms_client/lollms_ttv_binding.py,sha256=KkTaHLBhEEdt4sSVBlbwr5i_g_TlhcrwrT-7D
  lollms_client/lollms_types.py,sha256=0iSH1QHRRD-ddBqoL9EEKJ8wWCuwDUlN_FrfbCdg7Lw,3522
  lollms_client/lollms_utilities.py,sha256=zx1X4lAXQ2eCUM4jDpu_1QV5oMGdFkpaSEdTASmaiqE,13545
  lollms_client/llm_bindings/__init__.py,sha256=9sWGpmWSSj6KQ8H4lKGCjpLYwhnVdL_2N7gXCphPqh4,14
+ lollms_client/llm_bindings/azure_openai/__init__.py,sha256=8C-gXoVa-OI9FmFM3PaMgrTfzqCLbs4f7CHJHxKuAR8,16675
+ lollms_client/llm_bindings/claude/__init__.py,sha256=0kXWJgbClV71PacOwPw3Hnn6ur0Ka8mFVsVzpcfsVwI,24868
  lollms_client/llm_bindings/gemini/__init__.py,sha256=ZflZVwAkAa-GfctuehOWIav977oTCdXUisQy253PFsk,21611
+ lollms_client/llm_bindings/groq/__init__.py,sha256=zyWKM78qHwSt5g0Bb8Njj7Jy8CYuLMyplx2maOKFFpg,12218
+ lollms_client/llm_bindings/hugging_face_inference_api/__init__.py,sha256=PxgeRqT8dpa9GZoXwtSncy9AUgAN2cDKrvp_nbaWq0E,14027
  lollms_client/llm_bindings/litellm/__init__.py,sha256=xlTaKosxK1tKz1YJ6witK6wAJHIENTV6O7ZbfpUOdB4,11289
  lollms_client/llm_bindings/llamacpp/__init__.py,sha256=Qj5RvsgPeHGNfb5AEwZSzFwAp4BOWjyxmm9qBNtstrc,63716
- lollms_client/llm_bindings/lollms/__init__.py,sha256=jfiCGJqMensJ7RymeGDDJOsdokEdlORpw9ND_Q30GYc,17831
+ lollms_client/llm_bindings/lollms/__init__.py,sha256=7GNv-YyX4YyaR1EP2banQEHnX47QpUyNZU6toAiX1ak,17854
+ lollms_client/llm_bindings/mistral/__init__.py,sha256=624Gr462yBh52ttHFOapKgJOn8zZ1vZcTEcC3i4FYt8,12750
  lollms_client/llm_bindings/ollama/__init__.py,sha256=QufsYqak2VlA2XGbzks8u55yNJFeDH2V35NGeZABkm8,32554
+ lollms_client/llm_bindings/open_router/__init__.py,sha256=10Dyb73zTdJATOWawJVqPgDmjN6mAw7LwQIYFIN6oJk,13274
  lollms_client/llm_bindings/openai/__init__.py,sha256=4Mk8eBdc9VScI0Sdh4g4p_0eU2afJeCEUEJnCQO-QkM,20014
  lollms_client/llm_bindings/openllm/__init__.py,sha256=xv2XDhJNCYe6NPnWBboDs24AQ1VJBOzsTuMcmuQ6xYY,29864
  lollms_client/llm_bindings/pythonllamacpp/__init__.py,sha256=7dM42TCGKh0eV0njNL1tc9cInhyvBRIXzN3dcy12Gl0,33551
@@ -81,8 +87,8 @@ lollms_client/tts_bindings/piper_tts/__init__.py,sha256=0IEWG4zH3_sOkSb9WbZzkeV5
  lollms_client/tts_bindings/xtts/__init__.py,sha256=FgcdUH06X6ZR806WQe5ixaYx0QoxtAcOgYo87a2qxYc,18266
  lollms_client/ttv_bindings/__init__.py,sha256=UZ8o2izQOJLQgtZ1D1cXoNST7rzqW22rL2Vufc7ddRc,3141
  lollms_client/ttv_bindings/lollms/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
- lollms_client-0.25.6.dist-info/licenses/LICENSE,sha256=HrhfyXIkWY2tGFK11kg7vPCqhgh5DcxleloqdhrpyMY,11558
- lollms_client-0.25.6.dist-info/METADATA,sha256=dqV9ITu1ABd8rtnvPb4N7K3qUTCD6stQJhys08xoUJs,18659
- lollms_client-0.25.6.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
- lollms_client-0.25.6.dist-info/top_level.txt,sha256=NI_W8S4OYZvJjb0QWMZMSIpOrYzpqwPGYaklhyWKH2w,23
- lollms_client-0.25.6.dist-info/RECORD,,
+ lollms_client-0.26.0.dist-info/licenses/LICENSE,sha256=HrhfyXIkWY2tGFK11kg7vPCqhgh5DcxleloqdhrpyMY,11558
+ lollms_client-0.26.0.dist-info/METADATA,sha256=9sfLeCWj9T_80AmnURpKudbIHYnAx5ENd_ewmZ2_5mM,25778
+ lollms_client-0.26.0.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ lollms_client-0.26.0.dist-info/top_level.txt,sha256=NI_W8S4OYZvJjb0QWMZMSIpOrYzpqwPGYaklhyWKH2w,23
+ lollms_client-0.26.0.dist-info/RECORD,,