abstractcore 2.3.3__py3-none-any.whl → 2.3.5__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,11 +1,11 @@
 Metadata-Version: 2.4
 Name: abstractcore
-Version: 2.3.3
+Version: 2.3.5
 Summary: Unified interface to all LLM providers with essential infrastructure for tool calling, streaming, and model management
 Author: Laurent-Philippe Albou
 Maintainer: Laurent-Philippe Albou
 License: MIT
-Project-URL: Homepage, https://github.com/lpalbou/AbstractCore
+Project-URL: Homepage, https://lpalbou.github.io/AbstractCore
 Project-URL: Documentation, https://github.com/lpalbou/AbstractCore#readme
 Project-URL: Repository, https://github.com/lpalbou/AbstractCore
 Project-URL: Bug Tracker, https://github.com/lpalbou/AbstractCore/issues
@@ -89,7 +89,7 @@ Dynamic: license-file
 
 # AbstractCore
 
-A unified, powerful Python library for seamless interaction with multiple Large Language Model (LLM) providers.
+A unified Python library for interaction with multiple Large Language Model (LLM) providers.
 
 **Write once, run everywhere.**
 
@@ -167,12 +167,12 @@ loaded_session = BasicSession.load('conversation.json', provider=llm)
 
 | Provider | Status | Setup |
 |----------|--------|-------|
-| **OpenAI** | Full | [Get API key](docs/prerequisites.md#openai-setup) |
-| **Anthropic** | Full | [Get API key](docs/prerequisites.md#anthropic-setup) |
-| **Ollama** | Full | [Install guide](docs/prerequisites.md#ollama-setup) |
-| **LMStudio** | Full | [Install guide](docs/prerequisites.md#lmstudio-setup) |
-| **MLX** | Full | [Setup guide](docs/prerequisites.md#mlx-setup) |
-| **HuggingFace** | Full | [Setup guide](docs/prerequisites.md#huggingface-setup) |
+| **OpenAI** | Full | [Get API key](docs/prerequisites.md#openai-setup) |
+| **Anthropic** | Full | [Get API key](docs/prerequisites.md#anthropic-setup) |
+| **Ollama** | Full | [Install guide](docs/prerequisites.md#ollama-setup) |
+| **LMStudio** | Full | [Install guide](docs/prerequisites.md#lmstudio-setup) |
+| **MLX** | Full | [Setup guide](docs/prerequisites.md#mlx-setup) |
+| **HuggingFace** | Full | [Setup guide](docs/prerequisites.md#huggingface-setup) |
 
 ## Server Mode (Optional HTTP REST API)
 
@@ -275,7 +275,7 @@ python -m abstractllm.utils.cli --provider anthropic --model claude-3-5-haiku-la
 
 ### Architecture & Advanced
 - **[Architecture](docs/architecture.md)** - System design and architecture overview
-- **[Tool Syntax Rewriting](docs/tool-syntax-rewriting.md)** - Format conversion for agentic CLIs
+- **[Tool Calling](docs/tool-calling.md)** - Universal tool system and format conversion
 
 ## Use Cases
 
@@ -355,15 +355,15 @@ curl -X POST http://localhost:8000/v1/chat/completions \
 
 ## Why AbstractCore?
 
-- **Unified Interface**: One API for all LLM providers
-- **Production Ready**: Robust error handling, retries, timeouts
-- **Type Safe**: Full Pydantic integration for structured outputs
-- **Local & Cloud**: Run models locally or use cloud APIs
-- **Tool Calling**: Consistent function calling across providers
-- **Streaming**: Real-time responses for better UX
-- **Embeddings**: Built-in vector embeddings for RAG
-- **Server Mode**: Optional OpenAI-compatible API server
-- **Well Documented**: Comprehensive guides and examples
+- **Unified Interface**: One API for all LLM providers
+- **Production Ready**: Robust error handling, retries, timeouts
+- **Type Safe**: Full Pydantic integration for structured outputs
+- **Local & Cloud**: Run models locally or use cloud APIs
+- **Tool Calling**: Consistent function calling across providers
+- **Streaming**: Real-time responses for interactive applications
+- **Embeddings**: Built-in vector embeddings for RAG
+- **Server Mode**: Optional OpenAI-compatible API server
+- **Well Documented**: Comprehensive guides and examples
 
 ## Installation Options
 
@@ -399,7 +399,7 @@ All tests passing as of October 12th, 2025.
 ## Quick Links
 
 - **[📚 Documentation Index](docs/)** - Complete documentation navigation guide
-- **[🚀 Getting Started](docs/getting-started.md)** - 5-minute quick start
+- **[Getting Started](docs/getting-started.md)** - 5-minute quick start
 - **[⚙️ Prerequisites](docs/prerequisites.md)** - Provider setup (OpenAI, Anthropic, Ollama, etc.)
 - **[📖 Python API](docs/api-reference.md)** - Complete Python API reference
 - **[🌐 Server Guide](docs/server.md)** - HTTP API server setup
@@ -418,4 +418,4 @@ MIT License - see [LICENSE](LICENSE) file for details.
 
 ---
 
-**AbstractCore** - One interface, all LLM providers. Focus on building, not managing API differences. 🚀
+**AbstractCore** - One interface, all LLM providers. Focus on building, not managing API differences.
@@ -1,5 +1,5 @@
-abstractcore-2.3.3.dist-info/licenses/LICENSE,sha256=PI2v_4HMvd6050uDD_4AY_8PzBnu2asa3RKbdDjowTA,1078
-abstractllm/__init__.py,sha256=JeAt7bspfu9EG_6PpC5qEVn_S400AARivx1c0j1RXzU,1839
+abstractcore-2.3.5.dist-info/licenses/LICENSE,sha256=PI2v_4HMvd6050uDD_4AY_8PzBnu2asa3RKbdDjowTA,1078
+abstractllm/__init__.py,sha256=qoh8JJY02pFEGDkTHGiQlWjG5VBMmYn5j8pNBFxTxdE,1839
 abstractllm/apps/__init__.py,sha256=H6fOR28gyBW8bDCEAft2RUZhNmRYA_7fF91szKBjhpA,30
 abstractllm/apps/__main__.py,sha256=qul2xxC59G_ors-OeuODwfq6XY12ahyEe9GGLmrlHjk,1241
 abstractllm/apps/extractor.py,sha256=MsRgEShvsDYt-k30LdIxzSdcQh_TJGBLK7MFKOKb_4c,22682
@@ -28,7 +28,7 @@ abstractllm/processing/basic_judge.py,sha256=5SHpHD8GhgR7BPO8dm8nfnp8bDwGwN_DmfH
 abstractllm/processing/basic_summarizer.py,sha256=rl1IOwOxdgUkHXGN_S_hcXEUYsHQogRj1-wi94jM_28,22631
 abstractllm/providers/__init__.py,sha256=UTpR2Bf_ICFG7M--1kxUmNXs4gl026Tp-KI9zJlvMKU,574
 abstractllm/providers/anthropic_provider.py,sha256=BM8Vu89c974yicvFwlsJ5C3N0wR9Kkt1pOszViWCwAQ,19694
-abstractllm/providers/base.py,sha256=tSKFMBpGzADSwu00-ykRHvKhqaI72L1C6eM4HUkWOQ4,36113
+abstractllm/providers/base.py,sha256=rDxd5XglfZuZNj94iwjWc3ItPSddMQ92Y2G624mb60M,42780
 abstractllm/providers/huggingface_provider.py,sha256=4u3-Z4txonlKXE3DazvgUbxtzVfmdk-ZHsKPSaIfwD4,40785
 abstractllm/providers/lmstudio_provider.py,sha256=HcMltGyiXrLb2acA_XE4wDi1d-X2VZiT3eV1IjMF150,15701
 abstractllm/providers/mlx_provider.py,sha256=_tYg7AhPSsUfCUMqh7FJ--F3ZWr2BNNCtnvH4QtxttU,13745
@@ -53,8 +53,8 @@ abstractllm/utils/__init__.py,sha256=245wfpreb4UHXsCQBHKe9DbtjKNo-KIi-k14apBRj_Q
 abstractllm/utils/cli.py,sha256=5Dgyk6d17SKHX52n0VWa9dkfd5qsGh4jO44DZjYhsDM,55782
 abstractllm/utils/self_fixes.py,sha256=QEDwNTW80iQM4ftfEY3Ghz69F018oKwLM9yeRCYZOvw,5886
 abstractllm/utils/structured_logging.py,sha256=2_bbMRjOvf0gHsRejncel_-PrhYUsOUySX_eaPcQopc,15827
-abstractcore-2.3.3.dist-info/METADATA,sha256=KhW3cZrR28ytkKGMQwzimAmSLjTmeOhz0ng7kp4aT1I,15053
-abstractcore-2.3.3.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
-abstractcore-2.3.3.dist-info/entry_points.txt,sha256=plz04HNXbCbQkmEQj_8xKWW8x7zBxus68HD_04-IARM,306
-abstractcore-2.3.3.dist-info/top_level.txt,sha256=Md-8odCjB7hTNnE5hucnifAoLrL9HvRPffZmCq2jpoI,12
-abstractcore-2.3.3.dist-info/RECORD,,
+abstractcore-2.3.5.dist-info/METADATA,sha256=3BmiLAm1e6EmpJ7QNvmqs0QFzVnAsbDx7PGqrG-Phkg,14987
+abstractcore-2.3.5.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+abstractcore-2.3.5.dist-info/entry_points.txt,sha256=plz04HNXbCbQkmEQj_8xKWW8x7zBxus68HD_04-IARM,306
+abstractcore-2.3.5.dist-info/top_level.txt,sha256=Md-8odCjB7hTNnE5hucnifAoLrL9HvRPffZmCq2jpoI,12
+abstractcore-2.3.5.dist-info/RECORD,,
abstractllm/__init__.py CHANGED
@@ -24,7 +24,7 @@ Quick Start:
     print(response.content)
 """
 
-__version__ = "2.3.3"
+__version__ = "2.3.5"
 
 from .core.factory import create_llm
 from .core.session import BasicSession
abstractllm/providers/base.py CHANGED
@@ -227,6 +227,22 @@ class BaseProvider(AbstractLLMInterface, ABC):
                 "Install with: pip install pydantic>=2.0.0"
             )
 
+        # Handle hybrid case: tools + structured output
+        if tools is not None:
+            return self._handle_tools_with_structured_output(
+                prompt=prompt,
+                messages=messages,
+                system_prompt=system_prompt,
+                tools=tools,
+                response_model=response_model,
+                retry_strategy=retry_strategy,
+                tool_call_tags=tool_call_tags,
+                execute_tools=execute_tools,
+                stream=stream,
+                **kwargs
+            )
+
+        # Standard structured output (no tools)
         from ..structured import StructuredOutputHandler
         handler = StructuredOutputHandler(retry_strategy=retry_strategy)
         return handler.generate_structured(
@@ -235,7 +251,7 @@ class BaseProvider(AbstractLLMInterface, ABC):
             response_model=response_model,
             messages=messages,
             system_prompt=system_prompt,
-            tools=tools,
+            tools=None,  # No tools in this path
             stream=stream,
             **kwargs
         )
@@ -824,4 +840,140 @@ class BaseProvider(AbstractLLMInterface, ABC):
                 self.logger.debug(f"Non-streaming tag rewriting failed: {e}")
 
         # Return original response if rewriting fails
-        return response
+        return response
+
+    def _handle_tools_with_structured_output(self,
+                                             prompt: str,
+                                             messages: Optional[List[Dict[str, str]]] = None,
+                                             system_prompt: Optional[str] = None,
+                                             tools: Optional[List] = None,
+                                             response_model: Optional[Type[BaseModel]] = None,
+                                             retry_strategy=None,
+                                             tool_call_tags: Optional[str] = None,
+                                             execute_tools: Optional[bool] = None,
+                                             stream: bool = False,
+                                             **kwargs) -> BaseModel:
+        """
+        Handle the hybrid case: tools + structured output.
+
+        Strategy: Sequential execution
+        1. First, generate response with tools (may include tool calls)
+        2. If tool calls are generated, execute them
+        3. Then generate structured output using tool results as context
+
+        Args:
+            prompt: Input prompt
+            messages: Optional message history
+            system_prompt: Optional system prompt
+            tools: List of available tools
+            response_model: Pydantic model for structured output
+            retry_strategy: Optional retry strategy for structured output
+            tool_call_tags: Optional tool call tag format
+            execute_tools: Whether to execute tools automatically
+            stream: Whether to use streaming (not supported for hybrid mode)
+            **kwargs: Additional parameters
+
+        Returns:
+            Validated instance of response_model
+
+        Raises:
+            ValueError: If streaming is requested (not supported for hybrid mode)
+        """
+        if stream:
+            raise ValueError(
+                "Streaming is not supported when combining tools with structured output. "
+                "Please use either stream=True OR response_model, but not both."
+            )
+
+        # Step 1: Generate response with tools (normal tool execution flow)
+        self.logger.info("Hybrid mode: Executing tools first, then structured output",
+                         model=self.model,
+                         response_model=response_model.__name__,
+                         num_tools=len(tools) if tools else 0)
+
+        # Force tool execution for hybrid mode
+        should_execute_tools = execute_tools if execute_tools is not None else True
+
+        # Generate response with tools using the normal flow (without response_model)
+        tool_response = self.generate_with_telemetry(
+            prompt=prompt,
+            messages=messages,
+            system_prompt=system_prompt,
+            tools=tools,
+            stream=False,  # Never stream in hybrid mode
+            response_model=None,  # No structured output in first pass
+            tool_call_tags=tool_call_tags,
+            execute_tools=should_execute_tools,
+            **kwargs
+        )
+
+        # Step 2: Generate structured output using tool results as context
+        # Create enhanced prompt with tool execution context
+        if hasattr(tool_response, 'content') and tool_response.content:
+            enhanced_prompt = f"""{prompt}
+
+Based on the following tool execution results:
+{tool_response.content}
+
+Please provide a structured response."""
+        else:
+            enhanced_prompt = prompt
+
+        self.logger.info("Hybrid mode: Generating structured output with tool context",
+                         model=self.model,
+                         response_model=response_model.__name__,
+                         has_tool_context=bool(hasattr(tool_response, 'content') and tool_response.content))
+
+        # Generate structured output using the enhanced prompt
+        from ..structured import StructuredOutputHandler
+        handler = StructuredOutputHandler(retry_strategy=retry_strategy)
+
+        structured_result = handler.generate_structured(
+            provider=self,
+            prompt=enhanced_prompt,
+            response_model=response_model,
+            messages=messages,
+            system_prompt=system_prompt,
+            tools=None,  # No tools in structured output pass
+            stream=False,
+            **kwargs
+        )
+
+        self.logger.info("Hybrid mode: Successfully completed tools + structured output",
+                         model=self.model,
+                         response_model=response_model.__name__,
+                         success=True)
+
+        return structured_result
+
+    def generate(self,
+                 prompt: str,
+                 messages: Optional[List[Dict[str, str]]] = None,
+                 system_prompt: Optional[str] = None,
+                 tools: Optional[List[Dict[str, Any]]] = None,
+                 stream: bool = False,
+                 **kwargs) -> Union[GenerateResponse, Iterator[GenerateResponse], BaseModel]:
+        """
+        Generate response from the LLM.
+
+        This method implements the AbstractLLMInterface and delegates to generate_with_telemetry.
+
+        Args:
+            prompt: The input prompt
+            messages: Optional conversation history
+            system_prompt: Optional system prompt
+            tools: Optional list of available tools
+            stream: Whether to stream the response
+            **kwargs: Additional provider-specific parameters (including response_model)
+
+        Returns:
+            GenerateResponse, iterator of GenerateResponse for streaming, or BaseModel for structured output
+        """
+        return self.generate_with_telemetry(
+            prompt=prompt,
+            messages=messages,
+            system_prompt=system_prompt,
+            tools=tools,
+            stream=stream,
+            **kwargs
+        )
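The main behavioral change in this release is the hybrid path in `base.py`: when both `tools` and `response_model` are passed, the provider runs a tool pass first, then a second, structured pass over a prompt enhanced with the tool output. A minimal self-contained sketch of that prompt assembly is below; `ToolResponse` and `build_enhanced_prompt` are illustrative stand-ins, not AbstractCore API, and the first pass is simulated rather than calling a real provider.

```python
from dataclasses import dataclass


@dataclass
class ToolResponse:
    """Stand-in for the response object returned by the tool pass (illustrative)."""
    content: str


def build_enhanced_prompt(prompt: str, tool_response: ToolResponse) -> str:
    # Mirrors the template used by _handle_tools_with_structured_output:
    # tool output is appended as context for the second, structured pass.
    if tool_response.content:
        return (
            f"{prompt}\n\n"
            "Based on the following tool execution results:\n"
            f"{tool_response.content}\n\n"
            "Please provide a structured response."
        )
    return prompt


# Simulated first pass; the real code calls
# generate_with_telemetry(..., tools=tools, response_model=None).
first_pass = ToolResponse(content="Paris: 21.5 C")
enhanced = build_enhanced_prompt("What is the weather in Paris?", first_pass)
```

The second pass then feeds `enhanced` to the structured-output handler with `tools=None`, which is why streaming is rejected in hybrid mode: the structured result only exists after both passes complete.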