abstractcore 2.9.0__py3-none-any.whl → 2.11.2__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- abstractcore/__init__.py +7 -27
- abstractcore/apps/extractor.py +33 -100
- abstractcore/apps/intent.py +19 -0
- abstractcore/apps/judge.py +20 -1
- abstractcore/apps/summarizer.py +20 -1
- abstractcore/architectures/detection.py +34 -1
- abstractcore/architectures/response_postprocessing.py +313 -0
- abstractcore/assets/architecture_formats.json +38 -8
- abstractcore/assets/model_capabilities.json +781 -160
- abstractcore/compression/__init__.py +1 -2
- abstractcore/compression/glyph_processor.py +6 -4
- abstractcore/config/main.py +31 -19
- abstractcore/config/manager.py +389 -11
- abstractcore/config/vision_config.py +5 -5
- abstractcore/core/interface.py +151 -3
- abstractcore/core/session.py +16 -10
- abstractcore/download.py +1 -1
- abstractcore/embeddings/manager.py +20 -6
- abstractcore/endpoint/__init__.py +2 -0
- abstractcore/endpoint/app.py +458 -0
- abstractcore/mcp/client.py +3 -1
- abstractcore/media/__init__.py +52 -17
- abstractcore/media/auto_handler.py +42 -22
- abstractcore/media/base.py +44 -1
- abstractcore/media/capabilities.py +12 -33
- abstractcore/media/enrichment.py +105 -0
- abstractcore/media/handlers/anthropic_handler.py +19 -28
- abstractcore/media/handlers/local_handler.py +124 -70
- abstractcore/media/handlers/openai_handler.py +19 -31
- abstractcore/media/processors/__init__.py +4 -2
- abstractcore/media/processors/audio_processor.py +57 -0
- abstractcore/media/processors/office_processor.py +8 -3
- abstractcore/media/processors/pdf_processor.py +46 -3
- abstractcore/media/processors/text_processor.py +22 -24
- abstractcore/media/processors/video_processor.py +58 -0
- abstractcore/media/types.py +97 -4
- abstractcore/media/utils/image_scaler.py +20 -2
- abstractcore/media/utils/video_frames.py +219 -0
- abstractcore/media/vision_fallback.py +136 -22
- abstractcore/processing/__init__.py +32 -3
- abstractcore/processing/basic_deepsearch.py +15 -10
- abstractcore/processing/basic_intent.py +3 -2
- abstractcore/processing/basic_judge.py +3 -2
- abstractcore/processing/basic_summarizer.py +1 -1
- abstractcore/providers/__init__.py +3 -1
- abstractcore/providers/anthropic_provider.py +95 -8
- abstractcore/providers/base.py +1516 -81
- abstractcore/providers/huggingface_provider.py +546 -69
- abstractcore/providers/lmstudio_provider.py +35 -923
- abstractcore/providers/mlx_provider.py +382 -35
- abstractcore/providers/model_capabilities.py +5 -1
- abstractcore/providers/ollama_provider.py +99 -15
- abstractcore/providers/openai_compatible_provider.py +406 -180
- abstractcore/providers/openai_provider.py +188 -44
- abstractcore/providers/openrouter_provider.py +76 -0
- abstractcore/providers/registry.py +61 -5
- abstractcore/providers/streaming.py +138 -33
- abstractcore/providers/vllm_provider.py +92 -817
- abstractcore/server/app.py +461 -13
- abstractcore/server/audio_endpoints.py +139 -0
- abstractcore/server/vision_endpoints.py +1319 -0
- abstractcore/structured/handler.py +316 -41
- abstractcore/tools/common_tools.py +5501 -2012
- abstractcore/tools/comms_tools.py +1641 -0
- abstractcore/tools/core.py +37 -7
- abstractcore/tools/handler.py +4 -9
- abstractcore/tools/parser.py +49 -2
- abstractcore/tools/tag_rewriter.py +2 -1
- abstractcore/tools/telegram_tdlib.py +407 -0
- abstractcore/tools/telegram_tools.py +261 -0
- abstractcore/utils/cli.py +1085 -72
- abstractcore/utils/token_utils.py +2 -0
- abstractcore/utils/truncation.py +29 -0
- abstractcore/utils/version.py +3 -4
- abstractcore/utils/vlm_token_calculator.py +12 -2
- abstractcore-2.11.2.dist-info/METADATA +562 -0
- abstractcore-2.11.2.dist-info/RECORD +133 -0
- {abstractcore-2.9.0.dist-info → abstractcore-2.11.2.dist-info}/WHEEL +1 -1
- {abstractcore-2.9.0.dist-info → abstractcore-2.11.2.dist-info}/entry_points.txt +1 -0
- abstractcore-2.9.0.dist-info/METADATA +0 -1189
- abstractcore-2.9.0.dist-info/RECORD +0 -119
- {abstractcore-2.9.0.dist-info → abstractcore-2.11.2.dist-info}/licenses/LICENSE +0 -0
- {abstractcore-2.9.0.dist-info → abstractcore-2.11.2.dist-info}/top_level.txt +0 -0
@@ -1,1189 +0,0 @@
Metadata-Version: 2.4
Name: abstractcore
Version: 2.9.0
Summary: Unified interface to all LLM providers with essential infrastructure for tool calling, streaming, and model management
Author-email: Laurent-Philippe Albou <contact@abstractcore.ai>
Maintainer-email: Laurent-Philippe Albou <contact@abstractcore.ai>
License: MIT
Project-URL: Homepage, https://lpalbou.github.io/AbstractCore
Project-URL: Documentation, https://github.com/lpalbou/AbstractCore#readme
Project-URL: Repository, https://github.com/lpalbou/AbstractCore
Project-URL: Bug Tracker, https://github.com/lpalbou/AbstractCore/issues
Project-URL: Changelog, https://github.com/lpalbou/AbstractCore/blob/main/CHANGELOG.md
Keywords: llm,openai,anthropic,ollama,lmstudio,huggingface,mlx,ai,machine-learning,natural-language-processing,tool-calling,streaming
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Internet :: WWW/HTTP :: HTTP Servers
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: pydantic<3.0.0,>=2.0.0
Requires-Dist: httpx<1.0.0,>=0.24.0
Requires-Dist: tiktoken<1.0.0,>=0.5.0
Requires-Dist: requests<3.0.0,>=2.25.0
Requires-Dist: Pillow<12.0.0,>=10.0.0
Provides-Extra: openai
Requires-Dist: openai<2.0.0,>=1.0.0; extra == "openai"
Provides-Extra: anthropic
Requires-Dist: anthropic<1.0.0,>=0.25.0; extra == "anthropic"
Provides-Extra: ollama
Provides-Extra: lmstudio
Provides-Extra: huggingface
Requires-Dist: transformers<5.0.0,>=4.57.1; extra == "huggingface"
Requires-Dist: torch<3.0.0,>=2.6.0; extra == "huggingface"
Requires-Dist: torchvision>=0.17.0; extra == "huggingface"
Requires-Dist: torchaudio>=2.1.0; extra == "huggingface"
Requires-Dist: llama-cpp-python<1.0.0,>=0.2.0; extra == "huggingface"
Requires-Dist: outlines>=0.1.0; extra == "huggingface"
Provides-Extra: mlx
Requires-Dist: mlx<1.0.0,>=0.15.0; extra == "mlx"
Requires-Dist: mlx-lm<1.0.0,>=0.15.0; extra == "mlx"
Requires-Dist: outlines>=0.1.0; extra == "mlx"
Provides-Extra: vllm
Requires-Dist: vllm<1.0.0,>=0.6.0; extra == "vllm"
Provides-Extra: embeddings
Requires-Dist: sentence-transformers<6.0.0,>=5.1.0; extra == "embeddings"
Requires-Dist: numpy<2.0.0,>=1.20.0; extra == "embeddings"
Provides-Extra: processing
Provides-Extra: tools
Requires-Dist: beautifulsoup4<5.0.0,>=4.12.0; extra == "tools"
Requires-Dist: lxml<6.0.0,>=4.9.0; extra == "tools"
Requires-Dist: ddgs<10.0.0,>=9.10.0; python_version >= "3.10" and extra == "tools"
Requires-Dist: duckduckgo-search<4.0.0,>=3.8.0; python_version < "3.10" and extra == "tools"
Requires-Dist: psutil<6.0.0,>=5.9.0; extra == "tools"
Provides-Extra: tool
Requires-Dist: beautifulsoup4<5.0.0,>=4.12.0; extra == "tool"
Requires-Dist: lxml<6.0.0,>=4.9.0; extra == "tool"
Requires-Dist: ddgs<10.0.0,>=9.10.0; python_version >= "3.10" and extra == "tool"
Requires-Dist: duckduckgo-search<4.0.0,>=3.8.0; python_version < "3.10" and extra == "tool"
Requires-Dist: psutil<6.0.0,>=5.9.0; extra == "tool"
Provides-Extra: media
Requires-Dist: Pillow<12.0.0,>=10.0.0; extra == "media"
Requires-Dist: pymupdf4llm<1.0.0,>=0.0.20; extra == "media"
Requires-Dist: unstructured[office]<1.0.0,>=0.10.0; extra == "media"
Requires-Dist: pandas<3.0.0,>=1.0.0; extra == "media"
Provides-Extra: compression
Requires-Dist: abstractcore[media]; extra == "compression"
Requires-Dist: pdf2image<2.0.0,>=1.16.0; extra == "compression"
Provides-Extra: api-providers
Requires-Dist: abstractcore[anthropic,openai]; extra == "api-providers"
Provides-Extra: local-providers
Requires-Dist: abstractcore[lmstudio,mlx,ollama]; extra == "local-providers"
Provides-Extra: local-providers-apple
Requires-Dist: abstractcore[lmstudio,mlx,ollama]; extra == "local-providers-apple"
Provides-Extra: local-providers-gpu
Requires-Dist: abstractcore[lmstudio,ollama,vllm]; extra == "local-providers-gpu"
Provides-Extra: gpu-providers
Requires-Dist: abstractcore[huggingface,vllm]; extra == "gpu-providers"
Provides-Extra: heavy-providers
Requires-Dist: abstractcore[huggingface]; extra == "heavy-providers"
Provides-Extra: all-providers
Requires-Dist: abstractcore[anthropic,embeddings,huggingface,lmstudio,mlx,ollama,openai,vllm]; extra == "all-providers"
Provides-Extra: all-providers-apple
Requires-Dist: abstractcore[anthropic,embeddings,huggingface,lmstudio,mlx,ollama,openai]; extra == "all-providers-apple"
Provides-Extra: all-providers-gpu
Requires-Dist: abstractcore[anthropic,embeddings,huggingface,lmstudio,ollama,openai,vllm]; extra == "all-providers-gpu"
Provides-Extra: all-providers-non-mlx
Requires-Dist: abstractcore[anthropic,embeddings,huggingface,lmstudio,ollama,openai]; extra == "all-providers-non-mlx"
Provides-Extra: local-providers-non-mlx
Requires-Dist: abstractcore[lmstudio,ollama]; extra == "local-providers-non-mlx"
Provides-Extra: all
Requires-Dist: abstractcore[anthropic,compression,dev,docs,embeddings,huggingface,lmstudio,media,mlx,ollama,openai,processing,server,test,tools,vllm]; extra == "all"
Provides-Extra: all-apple
Requires-Dist: abstractcore[anthropic,compression,dev,docs,embeddings,huggingface,lmstudio,media,mlx,ollama,openai,processing,server,test,tools]; extra == "all-apple"
Provides-Extra: all-gpu
Requires-Dist: abstractcore[anthropic,compression,dev,docs,embeddings,huggingface,lmstudio,media,ollama,openai,processing,server,test,tools,vllm]; extra == "all-gpu"
Provides-Extra: all-non-mlx
Requires-Dist: abstractcore[anthropic,compression,dev,docs,embeddings,huggingface,lmstudio,media,ollama,openai,processing,server,test,tools]; extra == "all-non-mlx"
Provides-Extra: lightweight
Requires-Dist: abstractcore[anthropic,compression,embeddings,lmstudio,media,ollama,openai,processing,server,tools]; extra == "lightweight"
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
Requires-Dist: pytest-mock>=3.10.0; extra == "dev"
Requires-Dist: black>=23.0.0; extra == "dev"
Requires-Dist: isort>=5.12.0; extra == "dev"
Requires-Dist: mypy>=1.5.0; extra == "dev"
Requires-Dist: ruff>=0.1.0; extra == "dev"
Requires-Dist: pre-commit>=3.0.0; extra == "dev"
Provides-Extra: server
Requires-Dist: fastapi<1.0.0,>=0.100.0; extra == "server"
Requires-Dist: uvicorn[standard]<1.0.0,>=0.23.0; extra == "server"
Requires-Dist: sse-starlette<2.0.0,>=1.6.0; extra == "server"
Provides-Extra: test
Requires-Dist: pytest>=7.0.0; extra == "test"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "test"
Requires-Dist: pytest-mock>=3.10.0; extra == "test"
Requires-Dist: pytest-cov>=4.0.0; extra == "test"
Requires-Dist: responses>=0.23.0; extra == "test"
Requires-Dist: httpx>=0.24.0; extra == "test"
Provides-Extra: docs
Requires-Dist: mkdocs>=1.5.0; extra == "docs"
Requires-Dist: mkdocs-material>=9.0.0; extra == "docs"
Requires-Dist: mkdocstrings[python]>=0.22.0; extra == "docs"
Requires-Dist: mkdocs-autorefs>=0.4.0; extra == "docs"
Provides-Extra: full-dev
Requires-Dist: abstractcore[all-providers,dev,docs,test,tools]; extra == "full-dev"
Dynamic: license-file

# AbstractCore

[PyPI](https://pypi.org/project/abstractcore/) · [Python versions](https://pypi.org/project/abstractcore/) · [MIT License](https://github.com/lpalbou/abstractcore/blob/main/LICENSE) · [GitHub stars](https://github.com/lpalbou/abstractcore/stargazers)

A unified Python library for interacting with multiple Large Language Model (LLM) providers.

**Write once, run everywhere.**

## Quick Start

### Installation

```bash
# macOS/Apple Silicon (includes MLX)
pip install abstractcore[all]

# Linux/Windows (excludes MLX)
pip install abstractcore[all-non-mlx]
```

### Basic Usage

```python
from abstractcore import create_llm

# Works with any provider - just change the provider name
llm = create_llm("anthropic", model="claude-3-5-haiku-latest")
response = llm.generate("What is the capital of France?")
print(response.content)
```

### Deterministic Generation

```python
from abstractcore import create_llm

# Deterministic outputs with seed + temperature=0
llm = create_llm("openai", model="gpt-3.5-turbo", seed=42, temperature=0.0)

# These will produce identical outputs
response1 = llm.generate("Write exactly 3 words about coding")
response2 = llm.generate("Write exactly 3 words about coding")
print(f"Response 1: {response1.content}")  # "Innovative, challenging, rewarding."
print(f"Response 2: {response2.content}")  # "Innovative, challenging, rewarding."
```

### Tool Calling

```python
from abstractcore import create_llm, tool

@tool
def get_current_weather(city: str):
    """Fetch current weather for a given city."""
    return f"Weather in {city}: 72°F, Sunny"

llm = create_llm("openai", model="gpt-4o-mini")
response = llm.generate(
    "What's the weather like in San Francisco?",
    tools=[get_current_weather]
)
print(response.content)
```

### Tool Execution Modes

AbstractCore supports two tool execution modes:

**Mode 1: Passthrough (Default)** - Returns raw tool call tags for downstream processing

```python
from abstractcore import create_llm
from abstractcore.tools import tool

@tool(name="get_weather", description="Get weather for a city")
def get_weather(city: str) -> str:
    return f"Weather in {city}: Sunny, 22°C"

llm = create_llm("ollama", model="qwen3:4b")  # execute_tools=False by default
response = llm.generate("What's the weather in Paris?", tools=[get_weather])
# response.content contains raw tool call tags: <|tool_call|>...
# Downstream runtime (AbstractRuntime, Codex, Claude Code) parses and executes
```

**Use case**: Agent loops, AbstractRuntime, Codex, Claude Code, custom orchestration
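
In passthrough mode, the execution loop lives in your code. A minimal sketch of such a loop, assuming the provider also surfaces parsed calls on `response.tool_calls` with `name` and `arguments` fields (the exact shapes may differ in your runtime):

```python
# Minimal downstream execution loop for passthrough mode (a sketch, not
# AbstractCore's own runtime). Attribute access on each parsed call is an
# assumption - adapt it to the real objects your runtime receives.
tools_by_name = {"get_weather": get_weather}

for call in (response.tool_calls or []):
    fn = tools_by_name.get(call.name)
    if fn is not None:
        result = fn(**call.arguments)
        print(f"{call.name} -> {result}")
```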

**Mode 2: Direct Execution** - AbstractCore executes tools and returns results

```python
from abstractcore import create_llm
from abstractcore.tools import tool
from abstractcore.tools.registry import register_tool

@tool(name="get_weather", description="Get weather for a city")
def get_weather(city: str) -> str:
    return f"Weather in {city}: Sunny, 22°C"

register_tool(get_weather)  # Required for direct execution

llm = create_llm("ollama", model="qwen3:4b", execute_tools=True)
response = llm.generate("What's the weather in Paris?", tools=[get_weather])
# response.content contains executed tool results
```

**Use case**: Simple scripts, single-turn tool use

> **Note**: The `@tool` decorator creates metadata but does NOT register globally. Tools are passed explicitly to `generate()`. Use `register_tool()` only when using direct execution mode.

### Response Object (GenerateResponse)

Every LLM generation returns a **GenerateResponse** object with a consistent structure across all providers:

```python
from abstractcore import create_llm

llm = create_llm("openai", model="gpt-4o-mini")
response = llm.generate("Explain quantum computing in simple terms")

# Core response data
print(f"Content: {response.content}")              # Generated text
print(f"Model: {response.model}")                  # Model used
print(f"Finish reason: {response.finish_reason}")  # Why generation stopped

# Consistent token access across ALL providers (NEW in v2.4.7)
print(f"Input tokens: {response.input_tokens}")    # Always available
print(f"Output tokens: {response.output_tokens}")  # Always available
print(f"Total tokens: {response.total_tokens}")    # Always available

# Generation time tracking (NEW in v2.4.7)
print(f"Generation time: {response.gen_time}ms")   # Always available (rounded to 1 decimal)

# Advanced access
print(f"Tool calls: {response.tool_calls}")  # Tools executed (if any)
print(f"Raw usage: {response.usage}")        # Provider-specific token data
print(f"Metadata: {response.metadata}")      # Additional context

# Comprehensive summary
print(f"Summary: {response.get_summary()}")  # "Model: gpt-4o-mini | Tokens: 117 | Time: 1234.5ms"
```

**Token Count Sources:**
- **Provider APIs**: OpenAI, Anthropic, LMStudio (native API token counts)
- **AbstractCore Calculation**: MLX, HuggingFace (using `token_utils.py`)
- **Mixed Sources**: Ollama (combination of provider and calculated tokens)

**Backward Compatibility**: Legacy `prompt_tokens` and `completion_tokens` keys remain available in the `response.usage` dictionary.
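
A quick sketch of reading those legacy keys (the exact contents of the raw usage dict vary by provider):

```python
# Legacy token keys survive in the raw usage dict (a sketch; extra fields
# and exact keys depend on the provider backend).
usage = response.usage or {}
print(usage.get("prompt_tokens"), usage.get("completion_tokens"))
```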

### Built-in Tools

AbstractCore includes a comprehensive set of ready-to-use tools for common tasks:

> Note: `abstractcore.tools.common_tools` requires `abstractcore[tools]` (BeautifulSoup, lxml, web search backends, etc.).

```python
from abstractcore import create_llm
from abstractcore.tools.common_tools import fetch_url, search_files, read_file

# Intelligent web content fetching with automatic parsing
result = fetch_url("https://api.github.com/repos/python/cpython")
# Automatically detects JSON, HTML, images, PDFs, etc. and provides structured analysis

# File system operations
files = search_files("def.*fetch", ".", file_pattern="*.py")  # Find function definitions
content = read_file("config.json")  # Read file contents

# Use with any LLM
llm = create_llm("anthropic", model="claude-3-5-haiku-latest")
response = llm.generate(
    "Analyze this API response and summarize the key information",
    tools=[fetch_url]
)
```

**Available Tools:**
- `fetch_url` - Intelligent web content fetching with automatic content type detection and parsing
- `search_files` - Search for text patterns inside files using regex
- `list_files` - Find and list files by names/paths using glob patterns
- `read_file` - Read file contents with optional line range selection
- `write_file` - Write content to files with directory creation
- `edit_file` - Edit files using pattern matching and replacement
- `web_search` - Search the web using DuckDuckGo
- `execute_command` - Execute shell commands safely with security controls
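
Because the built-in tools are plain Python callables, you can also invoke them directly outside of any LLM loop. A short sketch (the argument names for `web_search` and `list_files` are assumptions based on the descriptions above):

```python
# Calling built-in tools directly (a sketch; check each tool's signature
# in abstractcore.tools.common_tools before relying on these arguments).
from abstractcore.tools.common_tools import web_search, list_files

print(web_search("AbstractCore python library"))
print(list_files("*.py"))
```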

### Session Management

```python
from abstractcore import BasicSession, create_llm

# Create a persistent conversation session
llm = create_llm("openai", model="gpt-4o-mini")
session = BasicSession(llm, system_prompt="You are a helpful assistant.")

# Add messages with metadata
session.add_message('user', 'Hello!', name='alice', location='Paris')
response = session.generate('What is Python?', name='bob')

# Save complete conversation with optional analytics
session.save('conversation.json')  # Basic save
session.save('analyzed.json', summary=True, assessment=True, facts=True)  # With analytics

# Load and continue conversation
loaded_session = BasicSession.load('conversation.json', provider=llm)
```

[Learn more about Session](docs/session.md)

### Interaction Tracing (Observability)

Enable complete observability of LLM interactions for debugging, compliance, and transparency:

```python
from abstractcore import create_llm
from abstractcore.core.session import BasicSession
from abstractcore.utils import export_traces

# Enable tracing on provider
llm = create_llm('openai', model='gpt-4o-mini', enable_tracing=True, max_traces=100)

# Or on session for automatic correlation
session = BasicSession(provider=llm, enable_tracing=True)

# Generate with custom metadata
response = session.generate(
    "Write Python code",
    step_type='code_generation',
    attempt_number=1
)

# Access complete trace
trace_id = response.metadata['trace_id']
trace = llm.get_traces(trace_id=trace_id)

# Full interaction context
print(f"Prompt: {trace['prompt']}")
print(f"Response: {trace['response']['content']}")
print(f"Tokens: {trace['response']['usage']['total_tokens']}")
print(f"Time: {trace['response']['generation_time_ms']}ms")
print(f"Custom metadata: {trace['metadata']}")

# Get all session traces
traces = session.get_interaction_history()

# Export to JSONL, JSON, or Markdown
export_traces(traces, format='markdown', file_path='workflow_trace.md')
```

**What's captured:**
- All prompts, system prompts, and conversation history
- Complete responses with token usage and timing
- Generation parameters (temperature, tokens, seed, etc.)
- Custom metadata for workflow tracking
- Tool calls and results

[Learn more about Interaction Tracing](docs/interaction-tracing.md)

### Async/Await Support

Execute concurrent LLM requests for batch operations, multi-provider comparisons, or non-blocking web applications. **Production-ready with a validated 6-7.5x performance improvement** for concurrent requests.

```python
import asyncio
from abstractcore import create_llm

async def main():
    llm = create_llm("openai", model="gpt-4o-mini")

    # Execute 3 requests concurrently (6-7x faster!)
    tasks = [
        llm.agenerate(f"Summarize {topic}")
        for topic in ["Python", "JavaScript", "Rust"]
    ]
    responses = await asyncio.gather(*tasks)

    for response in responses:
        print(response.content)

asyncio.run(main())
```

**Performance (Validated with Real Testing):**
- **Ollama**: 7.5x faster for concurrent requests
- **LMStudio**: 6.5x faster for concurrent requests
- **OpenAI**: 6.0x faster for concurrent requests
- **Anthropic**: 7.4x faster for concurrent requests
- **Average**: ~7x speedup across all providers

**Native Async vs Fallback:**
- **Native async** (httpx.AsyncClient): Ollama, LMStudio, OpenAI, Anthropic
- **Fallback** (asyncio.to_thread): MLX, HuggingFace
- All providers work seamlessly - the fallback keeps the event loop responsive

**Use Cases:**
- Batch operations with 6-7x speedup via parallel execution
- Multi-provider comparisons (query OpenAI and Anthropic simultaneously)
- FastAPI/async web frameworks integration
- Async sessions for conversation management

**Works with:**
- All 6 providers (OpenAI, Anthropic, Ollama, LMStudio, MLX, HuggingFace)
- Streaming via `await llm.agenerate(..., stream=True)`, then `async for chunk in ...` (see the sketch below)
- Sessions via `await session.agenerate(...)`
- Zero breaking changes to sync API
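
A minimal async streaming sketch, following the "await first, then async for" pattern described in this section (it assumes streamed chunks expose `.content`, mirroring the sync API):

```python
import asyncio
from abstractcore import create_llm

async def main():
    llm = create_llm("ollama", model="qwen3:4b")
    # Await the coroutine first, then iterate the resulting async stream.
    stream = await llm.agenerate("Explain asyncio in one paragraph", stream=True)
    async for chunk in stream:
        print(chunk.content, end="", flush=True)

asyncio.run(main())
```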

**Learn async patterns:**

AbstractCore includes an educational [async CLI demo](examples/async_cli_demo.py) that demonstrates core async/await patterns, including:
- Event-driven progress with GlobalEventBus
- Parallel tool execution with asyncio.gather()
- Proper async streaming pattern (await first, then async for)
- Non-blocking animations and user input

```bash
# Try the educational async demo
python examples/async_cli_demo.py --provider ollama --model qwen3:4b --stream
```

[Learn more in CLI docs](docs/acore-cli.md#async-cli-demo-educational-reference)

### Media Handling

AbstractCore provides unified media handling across all providers with automatic resolution optimization. Upload images, PDFs, and documents using the same simple API regardless of your provider.

```python
from abstractcore import create_llm

# Vision analysis - works with any vision model
# Images automatically processed at maximum supported resolution
llm = create_llm("openai", model="gpt-4o")
response = llm.generate(
    "What's in this image?",
    media=["photo.jpg"]  # Auto-resized to model's maximum capability
)

# Document analysis - works with any model
llm = create_llm("anthropic", model="claude-3.5-sonnet")
response = llm.generate(
    "Summarize this research paper",
    media=["research_paper.pdf"]
)

# Multiple files - mix images, PDFs, spreadsheets
response = llm.generate(
    "Analyze these business documents",
    media=["report.pdf", "chart.png", "data.xlsx"]
)

# Same code works with local models
llm = create_llm("ollama", model="qwen3-vl:8b")
response = llm.generate(
    "Describe this screenshot",
    media=["screenshot.png"]  # Auto-optimized for qwen3-vl
)
```

**Key Features:**
- **Smart Resolution**: Automatically uses the maximum resolution supported by each model
- **Format Support**: PNG, JPEG, GIF, WEBP, BMP, TIFF images; PDF, TXT, MD, CSV, TSV, JSON documents
- **Office Documents**: DOCX, XLSX, PPT (with `pip install abstractcore[all]`)
- **Vision Optimization**: Model-specific image processing for better vision results

**Provider compatibility:**
- **High-resolution vision**: GPT-4o (up to 4096x4096), Claude 3.5 Sonnet (up to 1568x1568)
- **Local models**: qwen3-vl (up to 3584x3584), gemma3:4b, llama3.2-vision
- **All models**: Automatic text extraction for non-vision models

[Learn more about Media Handling](docs/media-handling-system.md)

### Glyph Visual-Text Compression (🧪 EXPERIMENTAL)

> ⚠️ **Vision Model Requirement**: This feature ONLY works with vision-capable models (e.g., gpt-4o, claude-3-5-sonnet, llama3.2-vision)

Achieve **3-4x token compression** and **faster inference** with Glyph's visual-text compression:

```python
from abstractcore import create_llm

# IMPORTANT: Requires a vision-capable model
llm = create_llm("ollama", model="llama3.2-vision:11b")  # ✓ Vision model

# Large documents are automatically compressed for efficiency
response = llm.generate(
    "Analyze the key findings in this research paper",
    media=["large_research_paper.pdf"]  # Automatically compressed if beneficial
)

# Force compression (raises error if model lacks vision)
response = llm.generate(
    "Summarize this document",
    media=["document.pdf"],
    glyph_compression="always"  # "auto" | "always" | "never"
)

# Non-vision models will raise UnsupportedFeatureError
# llm_no_vision = create_llm("openai", model="gpt-4")  # ✗ No vision
# response = llm_no_vision.generate("...", glyph_compression="always")  # Error!

# Check compression stats
if response.metadata and response.metadata.get('compression_used'):
    stats = response.metadata.get('compression_stats', {})
    print(f"Compression ratio: {stats.get('compression_ratio')}x")
    print("Processing speedup: 14% faster, 79% less memory")
```

**Validated Performance:**
- **14% faster processing** with real-world documents
- **79% lower memory usage** during processing
- **100% quality preservation** - no loss of analytical accuracy
- **Transparent operation** - works with existing code

[Learn more about Glyph Compression](docs/glyphs.md)

## Key Features

- **Offline-First Design**: Built primarily for open source LLMs with full offline capability. Download once, run forever without internet access
- **Provider Agnostic**: Seamlessly switch between OpenAI, Anthropic, Ollama, LMStudio, MLX, HuggingFace, vLLM, and any OpenAI-compatible endpoint
- **Async/Await Support** ⭐ NEW in v2.6.0: Native async support for concurrent requests with `asyncio.gather()` - works with all providers
- **Dynamic Endpoint Configuration** ⭐ NEW in v2.6.5: Pass `base_url` in POST requests to connect to custom OpenAI-compatible endpoints without environment variables
- **Interaction Tracing**: Complete LLM observability with programmatic access to prompts, responses, tokens, timing, and trace correlation for debugging, trust, and compliance
- **Glyph Visual-Text Compression**: Compression system that renders text as optimized images for 3-4x token compression and faster inference
- **Centralized Configuration**: Global defaults and app-specific preferences at `~/.abstractcore/config/abstractcore.json`
- **Intelligent Media Handling**: Upload images, PDFs, and documents with automatic maximum resolution optimization
- **Vision Model Support**: Smart image processing at each model's maximum capability
- **Document Processing**: PDF extraction (PyMuPDF4LLM), Office documents (DOCX/XLSX/PPT), CSV/TSV analysis
- **Unified Tools**: Consistent tool calling across all providers
- **Session Management**: Persistent conversations with metadata, analytics, and complete serialization
- **Native Structured Output**: Server-side schema enforcement for Ollama and LMStudio (OpenAI and Anthropic also supported)
- **Streaming Support**: Real-time token generation for interactive experiences (see the sketch below)
- **Consistent Token Terminology**: Unified `input_tokens`, `output_tokens`, `total_tokens` across all providers
- **Embeddings**: Built-in support for semantic search and RAG applications
- **Universal Server**: Optional OpenAI-compatible API server with `/v1/responses` endpoint
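
A minimal sync streaming sketch (with `stream=True`, `generate()` is assumed to return an iterator of chunks exposing `.content`, mirroring the async pattern shown earlier):

```python
from abstractcore import create_llm

# Sync streaming: print tokens as they arrive (a sketch; chunk objects are
# assumed to expose .content like full responses do).
llm = create_llm("ollama", model="qwen3:4b")
for chunk in llm.generate("Tell me a short story", stream=True):
    print(chunk.content, end="", flush=True)
```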

## Supported Providers

| Provider | Status | SEED Support | Hardware | Setup |
|----------|--------|--------------|----------|-------|
| **OpenAI** | Full | Native | Any | [Get API key](docs/prerequisites.md#openai-setup) |
| **Anthropic** | Full | Warning* | Any | [Get API key](docs/prerequisites.md#anthropic-setup) |
| **Ollama** | Full | Native | Any | [Install guide](docs/prerequisites.md#ollama-setup) |
| **LMStudio** | Full | Native | Any | [Install guide](docs/prerequisites.md#lmstudio-setup) |
| **MLX** | Full | Native | **Apple Silicon only** | [Setup guide](docs/prerequisites.md#mlx-setup) |
| **HuggingFace** | Full | Native | Any | [Setup guide](docs/prerequisites.md#huggingface-setup) |
| **vLLM** | Full | Native | **NVIDIA CUDA only** | [Setup guide](docs/prerequisites.md#vllm-setup) |
| **OpenAI-Compatible** ⭐ NEW | Full | Native | Any | Works with llama.cpp, text-generation-webui, LocalAI, etc. |

*Anthropic doesn't support the seed parameter; AbstractCore issues a warning when one is provided. Use `temperature=0.0` for more consistent outputs.

## Server Mode (Optional HTTP REST API)

AbstractCore is **primarily a Python library**. The server is an **optional component** that provides OpenAI-compatible HTTP endpoints:

```bash
# Install with server support
pip install abstractcore[server]

# Start the server
uvicorn abstractcore.server.app:app --host 0.0.0.0 --port 8000
```

Use with any OpenAI-compatible client:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
response = client.chat.completions.create(
    model="anthropic/claude-3-5-haiku-latest",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

**Server Features:**
- OpenAI-compatible REST endpoints (`/v1/chat/completions`, `/v1/embeddings`, `/v1/responses`)
- **NEW in v2.5.0**: OpenAI Responses API (`/v1/responses`) with native `input_file` support (see the sketch below)
- Multi-provider support through one HTTP API
- Comprehensive media processing (images, PDFs, Office documents, CSV/TSV)
- Agentic CLI integration (Codex, Crush, Gemini CLI)
- Opt-in streaming responses
- Tool call format conversion
- Enhanced debug logging with `--debug` flag
- Interactive API docs at `/docs` (Swagger UI)
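
A hedged sketch of calling `/v1/responses` directly. The payload shape mirrors the OpenAI Responses API convention; the exact `input_file` fields this server accepts are an assumption here, so check the interactive docs at `/docs` on your instance:

```python
import httpx

# Sketch only: the input_file field names follow the OpenAI Responses API
# convention and may differ from what this server version expects.
payload = {
    "model": "ollama/qwen3:4b",
    "input": [{
        "role": "user",
        "content": [
            {"type": "input_text", "text": "Summarize this file."},
            {"type": "input_file", "filename": "report.pdf", "file_data": "<base64-encoded bytes>"},
        ],
    }],
}
print(httpx.post("http://localhost:8000/v1/responses", json=payload).json())
```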

**When to use the server:**
- Integrating with existing OpenAI-compatible tools
- Using agentic CLIs (Codex, Crush, Gemini CLI)
- Building web applications that need an HTTP API
- Multi-language access (not just Python)

## AbstractCore CLI (Optional Interactive Testing Tool)

AbstractCore includes a **built-in CLI** for interactive testing, development, and conversation management. This is an internal testing tool, distinct from external agentic CLIs.

```bash
# Start interactive CLI
python -m abstractcore.utils.cli --provider ollama --model qwen3-coder:30b

# With streaming enabled
python -m abstractcore.utils.cli --provider openai --model gpt-4o-mini --stream

# Single prompt execution
python -m abstractcore.utils.cli --provider anthropic --model claude-3-5-haiku-latest --prompt "What is Python?"
```

**Key Features:**
- Interactive REPL with conversation history
- Chat history compaction and management
- Fact extraction from conversations
- Conversation quality evaluation (LLM-as-a-judge)
- Intent analysis and deception detection
- Tool call testing and debugging
- System prompt management
- Multiple provider support

**Popular Commands:**
- `/compact` - Compress chat history while preserving context
- `/facts [file]` - Extract structured facts from conversation
- `/judge` - Evaluate conversation quality with feedback
- `/intent [participant]` - Analyze psychological intents and detect deception
- `/history [n]` - View conversation history
- `/stream` - Toggle real-time streaming
- `/system [prompt]` - Show or change system prompt
- `/status` - Show current provider, model, and capabilities

**Full Documentation:** [AbstractCore CLI Guide](docs/acore-cli.md)

**When to use the CLI:**
- Interactive development and testing
- Debugging tool calls and provider behavior
- Conversation management experiments
- Quick prototyping with different models
- Learning AbstractCore capabilities

## Built-in Applications (Ready-to-Use CLI Tools)

AbstractCore includes **five specialized command-line applications** for common LLM tasks. These are production-ready tools that can be used directly from the terminal without any Python programming.

### Available Applications

| Application | Purpose | Direct Command |
|-------------|---------|----------------|
| **Summarizer** | Document summarization | `summarizer` |
| **Extractor** | Entity and relationship extraction | `extractor` |
| **Judge** | Text evaluation and scoring | `judge` |
| **Intent Analyzer** | Psychological intent analysis & deception detection | `intent` |
| **DeepSearch** | Autonomous multi-stage research with web search | `deepsearch` |

### Quick Usage Examples

```bash
# Document summarization with different styles and lengths
summarizer document.pdf --style executive --length brief
summarizer report.txt --focus "technical details" --output summary.txt
summarizer large_doc.txt --chunk-size 15000 --provider openai --model gpt-4o-mini

# Entity extraction with various formats and options
extractor research_paper.pdf --format json-ld --focus technology
extractor article.txt --entity-types person,organization,location --output entities.jsonld
extractor doc.txt --iterate 3 --mode thorough --verbose

# Text evaluation with custom criteria and contexts
judge essay.txt --criteria clarity,accuracy,coherence --context "academic writing"
judge code.py --context "code review" --format plain --verbose
judge proposal.md --custom-criteria has_examples,covers_risks --output assessment.json

# Intent analysis with psychological insights and deception detection
intent conversation.txt --focus-participant user --depth comprehensive
intent email.txt --format plain --context document --verbose
intent chat_log.json --conversation-mode --provider lmstudio --model qwen/qwen3-30b-a3b-2507

# Autonomous research with web search and reflexive refinement
deepsearch "What are the latest advances in quantum computing?" --depth comprehensive
deepsearch "AI impact on healthcare" --focus "diagnosis,treatment,ethics" --reflexive
deepsearch "sustainable energy 2025" --max-sources 25 --provider openai --model gpt-4o-mini
```

### Installation & Setup

Apps are automatically available after installing AbstractCore:

```bash
# Install with all features
pip install abstractcore[all]

# Apps are immediately available
summarizer --help
extractor --help
judge --help
intent --help
deepsearch --help
```

### Alternative Usage Methods

```bash
# Method 1: Direct commands (recommended)
summarizer document.txt
extractor report.pdf
judge essay.md
intent conversation.txt
deepsearch "your research query"

# Method 2: Via Python module
python -m abstractcore.apps summarizer document.txt
python -m abstractcore.apps extractor report.pdf
python -m abstractcore.apps judge essay.md
python -m abstractcore.apps intent conversation.txt
python -m abstractcore.apps deepsearch "your research query"
```

### Key Parameters

**Common Parameters (all apps):**
- `--provider` + `--model` - Use different LLM providers (OpenAI, Anthropic, Ollama, etc.)
- `--output` - Save results to file instead of console
- `--verbose` - Show detailed progress information
- `--timeout` - HTTP timeout for LLM requests (default: 300s)

**Summarizer Parameters:**
- `--style` - Summary style: `structured`, `narrative`, `objective`, `analytical`, `executive`, `conversational`
- `--length` - Summary length: `brief`, `standard`, `detailed`, `comprehensive`
- `--focus` - Specific focus area for summarization
- `--chunk-size` - Chunk size for large documents (1000-32000, default: 8000)

**Extractor Parameters:**
- `--format` - Output format: `json-ld`, `triples`, `json`, `yaml`
- `--entity-types` - Focus on specific entities: `person,organization,location,technology`, etc.
- `--mode` - Extraction mode: `fast`, `balanced`, `thorough`
- `--iterate` - Number of refinement iterations (1-10, default: 1)
- `--minified` - Output compact JSON without indentation

**Judge Parameters:**
- `--context` - Evaluation context (e.g., "code review", "academic writing")
- `--criteria` - Standard criteria: `clarity,soundness,effectiveness`, etc.
- `--custom-criteria` - Custom evaluation criteria
- `--format` - Output format: `json`, `plain`, `yaml`
- `--include-criteria` - Include detailed criteria explanations

### Key Features

- **Provider Agnostic**: Works with any configured LLM provider (OpenAI, Anthropic, Ollama, etc.)
- **Multiple Formats**: Support for PDF, TXT, MD, DOCX, and more
- **Flexible Output**: JSON, JSON-LD, YAML, plain text formats
- **Batch Processing**: Process multiple files at once
- **Configurable**: Custom prompts, criteria, and evaluation rubrics
- **Production Ready**: Robust error handling and logging

### Full Documentation

Each application has detailed documentation with examples and usage information:

- **[Summarizer Guide](docs/apps/basic-summarizer.md)** - Document summarization with multiple strategies
- **[Extractor Guide](docs/apps/basic-extractor.md)** - Entity and relationship extraction
- **[Intent Analyzer Guide](docs/apps/basic-intent.md)** - Psychological intent analysis and deception detection
- **[Judge Guide](docs/apps/basic-judge.md)** - Text evaluation and scoring systems
- **[DeepSearch Guide](docs/apps/basic-deepsearch.md)** - Autonomous multi-stage research with web search

**When to use the apps:**
- Processing documents without writing code
- Batch text analysis workflows
- Quick prototyping of text processing pipelines
- Integration with shell scripts and automation
- Standardized text processing tasks

## Configuration

AbstractCore provides a **centralized configuration system** that manages default models, cache directories, and logging settings from a single location. This eliminates the need to specify `--provider` and `--model` parameters repeatedly.

### Quick Setup

```bash
# Check current configuration (shows how to change each setting)
abstractcore --status

# Set defaults for all applications
abstractcore --set-global-default ollama/llama3:8b

# Or configure specific applications (examples of customization)
abstractcore --set-app-default summarizer openai gpt-4o-mini
abstractcore --set-app-default extractor ollama qwen3:4b-instruct
abstractcore --set-app-default judge anthropic claude-3-5-haiku

# Configure logging (common examples)
abstractcore --set-console-log-level WARNING  # Reduce console output
abstractcore --set-console-log-level NONE     # Disable console logging
abstractcore --enable-file-logging            # Save logs to files
abstractcore --enable-debug-logging           # Full debug mode

# Configure vision for image analysis with text-only models
abstractcore --set-vision-provider ollama qwen2.5vl:7b
abstractcore --set-vision-provider lmstudio qwen/qwen3-vl-4b

# Set API keys as needed
abstractcore --set-api-key openai sk-your-key-here
abstractcore --set-api-key anthropic your-anthropic-key

# Verify configuration (includes change commands for each setting)
abstractcore --status
```

### Priority System

AbstractCore uses a clear priority system where explicit parameters always override defaults:

1. **Explicit parameters** (highest priority): `summarizer doc.txt --provider openai --model gpt-4o-mini`
2. **App-specific config**: `abstractcore --set-app-default summarizer openai gpt-4o-mini`
3. **Global config**: `abstractcore --set-global-default openai/gpt-4o-mini`
4. **Built-in defaults** (lowest priority): `huggingface/unsloth/Qwen3-4B-Instruct-2507-GGUF`

### Usage After Configuration

Once configured, apps use your defaults automatically:

```bash
# Before configuration (requires explicit parameters)
summarizer document.pdf --provider openai --model gpt-4o-mini

# After configuration (uses configured defaults)
summarizer document.pdf

# Explicit parameters still override when needed
summarizer document.pdf --provider anthropic --model claude-3-5-sonnet
```

### Configuration Features

- **Application defaults**: Different optimal models for each app
- **Cache directories**: Configurable cache locations for models and data
- **Logging control**: Package-wide logging levels and debug mode
- **API key management**: Centralized API key storage
- **Interactive setup**: `abstractcore --configure` for guided configuration

**Complete guide**: [Centralized Configuration](docs/centralized-config.md)

### Environment Variables

AbstractCore supports environment variables for provider base URLs, enabling remote servers, Docker deployments, and non-standard ports:

```bash
# Ollama on remote server
export OLLAMA_BASE_URL="http://192.168.1.100:11434"
# Alternative: OLLAMA_HOST is also supported
export OLLAMA_HOST="http://192.168.1.100:11434"

# LMStudio on non-standard port
export LMSTUDIO_BASE_URL="http://localhost:1235/v1"

# OpenAI-compatible proxy
export OPENAI_BASE_URL="https://api.portkey.ai/v1"

# Anthropic proxy
export ANTHROPIC_BASE_URL="https://api.portkey.ai/v1"
```

**Priority**: Programmatic `base_url` parameter > Runtime configuration > Environment variable > Default value

**Provider discovery**: `get_all_providers_with_models()` automatically respects these environment variables when checking provider availability (see the sketch below).
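
A short sketch of provider discovery. The import path is an assumption based on the providers registry, and the return shape may vary between versions:

```python
# Sketch: enumerate providers (and their models) that are reachable with the
# current environment/configuration. Import path and return shape are
# assumptions - check abstractcore.providers in your installed version.
from abstractcore.providers import get_all_providers_with_models

for entry in get_all_providers_with_models():
    print(entry)
```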

### Programmatic Configuration

Configure provider settings at runtime without environment variables:

```python
from abstractcore.config import configure_provider, get_provider_config, clear_provider_config
from abstractcore import create_llm

# Set provider base URL programmatically
configure_provider('ollama', base_url='http://192.168.1.100:11434')

# All future create_llm() calls automatically use the configured URL
llm = create_llm('ollama', model='llama3:8b')  # Uses http://192.168.1.100:11434

# Query current configuration
config = get_provider_config('ollama')
print(config)  # {'base_url': 'http://192.168.1.100:11434'}

# Clear configuration (revert to env var / default)
configure_provider('ollama', base_url=None)
# Or clear all providers
clear_provider_config()
```

**Use Cases**:
- **Web UI Settings**: Configure providers through settings pages
- **Docker Startup**: Read from custom env vars and configure programmatically
- **Testing**: Set mock server URLs for integration tests
- **Multi-tenant**: Configure different base URLs per tenant

**Priority System**:
1. Constructor parameter (highest): `create_llm("ollama", base_url="...")` (see the sketch below)
2. Runtime configuration: `configure_provider('ollama', base_url="...")`
3. Environment variable: `OLLAMA_BASE_URL`
4. Default value (lowest): `http://localhost:11434`
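
A minimal sketch of the highest-priority override: an explicit `base_url` passed to `create_llm()` wins over runtime configuration and `OLLAMA_BASE_URL`:

```python
from abstractcore import create_llm
from abstractcore.config import configure_provider

configure_provider('ollama', base_url='http://192.168.1.100:11434')

# The explicit constructor parameter takes precedence over the runtime
# configuration set above (and over any OLLAMA_BASE_URL in the environment).
llm = create_llm("ollama", model="llama3:8b", base_url="http://127.0.0.1:11434")
```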
## Documentation
|
|
933
|
-
|
|
934
|
-
**📚 Complete Documentation:** [docs/](docs/) - Full documentation index and navigation guide
|
|
935
|
-
|
|
936
|
-
### Getting Started
|
|
937
|
-
- **[Prerequisites & Setup](docs/prerequisites.md)** - Install and configure providers (OpenAI, Anthropic, Ollama, etc.)
|
|
938
|
-
- **[Getting Started Guide](docs/getting-started.md)** - 5-minute quick start with core concepts
|
|
939
|
-
- **[Troubleshooting](docs/troubleshooting.md)** - Common issues and solutions
|
|
940
|
-
|
|
941
|
-
### Core Library (Python)
|
|
942
|
-
- **[Python API Reference](docs/api-reference.md)** - Complete Python API documentation
|
|
943
|
-
- **[Media Handling System](docs/media-handling-system.md)** - Images, PDFs, and document processing across all providers
|
|
944
|
-
- **[Session Management](docs/session.md)** - Persistent conversations, serialization, and analytics
|
|
945
|
-
- **[Embeddings Guide](docs/embeddings.md)** - Semantic search, RAG, and vector embeddings
|
|
946
|
-
- **[Code Examples](examples/)** - Working examples for all features
|
|
947
|
-
- **[Capabilities](docs/capabilities.md)** - What AbstractCore can and cannot do
|
|
948
|
-
|
|
949
|
-
### Server (Optional HTTP REST API)
|
|
950
|
-
- **[Server Documentation](docs/server.md)** - Complete server setup, API reference, and deployment
|
|
951
|
-
|
|
952
|
-
### Architecture & Advanced
|
|
953
|
-
- **[Architecture](docs/architecture.md)** - System design and architecture overview
|
|
954
|
-
- **[Tool Calling](docs/tool-calling.md)** - Universal tool system and format conversion
|
|
955
|
-
|
|
956
|
-
## Use Cases
|
|
957
|
-
|
|
958
|
-
### 1. Provider Flexibility
|
|
959
|
-
|
|
960
|
-
```python
|
|
961
|
-
# Same code works with any provider
|
|
962
|
-
providers = ["openai", "anthropic", "ollama"]
|
|
963
|
-
|
|
964
|
-
for provider in providers:
|
|
965
|
-
llm = create_llm(provider, model="gpt-4o-mini") # Auto-selects appropriate model
|
|
966
|
-
response = llm.generate("Hello!")
|
|
967
|
-
```
|
|
968
|
-
|
|
969
|
-
### 2. Vision Analysis Across Providers

```python
# Same image analysis works with any vision model
image_files = ["product_photo.jpg", "user_feedback.png"]
prompt = "Analyze these product images and suggest improvements"

# OpenAI GPT-4o
openai_llm = create_llm("openai", model="gpt-4o")
openai_analysis = openai_llm.generate(prompt, media=image_files)

# Anthropic Claude
claude_llm = create_llm("anthropic", model="claude-3.5-sonnet")
claude_analysis = claude_llm.generate(prompt, media=image_files)

# Local model (free)
local_llm = create_llm("ollama", model="qwen3-vl:8b")
local_analysis = local_llm.generate(prompt, media=image_files)
```

### 3. Document Processing Pipeline

```python
# Universal document analysis
documents = ["contract.pdf", "financial_data.xlsx", "presentation.ppt"]
analysis_prompt = "Extract key information and identify potential risks"

# Works with any provider
llm = create_llm("anthropic", model="claude-3.5-sonnet")
response = llm.generate(analysis_prompt, media=documents)

# Automatic format handling:
# - PDF: Text extraction with PyMuPDF4LLM
# - Excel: Table parsing with pandas
# - PowerPoint: Slide content extraction with unstructured
```

### 4. Local Development, Cloud Production

```python
# Development (free, local)
llm_dev = create_llm("ollama", model="qwen3:4b-instruct-2507-q4_K_M")

# Production (high quality, cloud)
llm_prod = create_llm("openai", model="gpt-4o-mini")
```

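A common way to keep the two environments in sync is to drive the choice from environment variables, so the same code path runs locally and in production. A minimal sketch with hypothetical variable names (`LLM_PROVIDER` and `LLM_MODEL` are illustrative, not AbstractCore conventions):

```python
import os

from abstractcore import create_llm  # assumed top-level export, as in the examples above

# Hypothetical env vars: default to the free local setup, override in production
provider = os.getenv("LLM_PROVIDER", "ollama")
model = os.getenv("LLM_MODEL", "qwen3:4b-instruct-2507-q4_K_M")

llm = create_llm(provider, model=model)
```
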
### 5. Embeddings & RAG

```python
from abstractcore.embeddings import EmbeddingManager

# Create embeddings for semantic search
embedder = EmbeddingManager()
docs = [
    "Python is great for data science",
    "JavaScript powers the web",
    "Rust ensures memory safety"
]
docs_embeddings = embedder.embed_batch(docs)

# Embed a query and score it against a document
# (a ranking sketch over all documents follows below)
query = "Tell me about web development"
query_embedding = embedder.embed(query)
similarity = embedder.compute_similarity(query, docs[0])
```

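To actually pick the best match rather than scoring a single pair, rank every document by its similarity to the query. A minimal sketch continuing from the snippet above (`embedder`, `docs`, and `query` already defined), assuming `compute_similarity` accepts raw strings as shown there:

```python
# Rank all documents against the query and keep the best match
scores = [(doc, embedder.compute_similarity(query, doc)) for doc in docs]
best_doc, best_score = max(scores, key=lambda pair: pair[1])
print(f"Best match ({best_score:.3f}): {best_doc}")
```
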
[Learn more about Embeddings](docs/embeddings.md)

### 6. Structured Output

```python
from pydantic import BaseModel

class MovieReview(BaseModel):
    title: str
    rating: int  # 1-5
    summary: str

llm = create_llm("openai", model="gpt-4o-mini")
review = llm.generate(
    "Review the movie Inception",
    response_model=MovieReview
)
print(f"{review.title}: {review.rating}/5")
```

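The `# 1-5` comment can be enforced at validation time instead of by convention. This variant uses Pydantic's standard `Field` constraints (plain Pydantic, nothing AbstractCore-specific), so an out-of-range rating raises a validation error:

```python
from pydantic import BaseModel, Field

class MovieReview(BaseModel):
    title: str
    rating: int = Field(ge=1, le=5)  # validated to the 1-5 range
    summary: str
```
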
[Learn more about Structured Output](docs/structured-output.md)

### 7. Universal API Server

```bash
# Start server once
uvicorn abstractcore.server.app:app --port 8000

# Use with any OpenAI client
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ollama/qwen3-coder:30b",
    "messages": [{"role": "user", "content": "Write a Python function"}]
  }'
```

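"Any OpenAI client" includes the official Python SDK pointed at the local server. A minimal sketch, assuming the server does not validate the API key for local use:

```python
from openai import OpenAI

# Point the official SDK at the AbstractCore server instead of api.openai.com
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="ollama/qwen3-coder:30b",
    messages=[{"role": "user", "content": "Write a Python function"}],
)
print(completion.choices[0].message.content)
```
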
## Why AbstractCore?

- **Offline-First Philosophy**: Designed for open source LLMs with complete offline operation. No internet required after initial model download
- **Unified Interface**: One API for all LLM providers
- **Multimodal Support**: Upload images, PDFs, and documents across all providers
- **Vision Models**: Seamless integration with GPT-4o, Claude Vision, qwen3-vl, and more
- **Production Ready**: Robust error handling, retries, timeouts
- **Type Safe**: Full Pydantic integration for structured outputs
- **Local & Cloud**: Run models locally or use cloud APIs
- **Tool Calling**: Consistent function calling across providers
- **Streaming**: Real-time responses for interactive applications
- **Embeddings**: Built-in vector embeddings for RAG
- **Server Mode**: Optional OpenAI-compatible API server
- **Well Documented**: Comprehensive guides and examples

## Installation Options

```bash
# Minimal core
pip install abstractcore

# With media handling (images, PDFs, documents)
pip install abstractcore[media]

# With specific providers
pip install abstractcore[openai]
pip install abstractcore[anthropic]
pip install abstractcore[ollama]
pip install abstractcore[lmstudio]
pip install abstractcore[huggingface]
pip install abstractcore[mlx]   # macOS/Apple Silicon only
pip install abstractcore[vllm]  # NVIDIA CUDA only (Linux)

# With server support
pip install abstractcore[server]

# With embeddings
pip install abstractcore[embeddings]

# With compression (Glyph visual-text compression)
pip install abstractcore[compression]

# Everything (recommended for Apple Silicon)
pip install abstractcore[all]

# Cross-platform (all except MLX/vLLM - for Linux/Windows/Intel Mac)
pip install abstractcore[all-non-mlx]

# Provider groups
pip install abstractcore[all-providers]           # All providers (includes MLX, excludes vLLM)
pip install abstractcore[all-providers-non-mlx]   # All providers except MLX (excludes vLLM)
pip install abstractcore[local-providers]         # Ollama, LMStudio, MLX
pip install abstractcore[local-providers-non-mlx] # Ollama, LMStudio only
pip install abstractcore[api-providers]           # OpenAI, Anthropic
pip install abstractcore[gpu-providers]           # vLLM (NVIDIA CUDA only)
```

**Hardware-Specific Notes:**
- **MLX**: Requires Apple Silicon (M1/M2/M3/M4). Will not work on Intel Macs or other platforms.
- **vLLM**: Requires NVIDIA GPUs with CUDA support. Will not work on Apple Silicon, AMD GPUs, or Intel integrated graphics.
- **All other providers** (OpenAI, Anthropic, Ollama, LMStudio, HuggingFace): Work on any hardware.

**Media processing extras:**
```bash
# For PDF processing
pip install pymupdf4llm

# For Office documents (DOCX, XLSX, PPT)
pip install unstructured

# For image optimization
pip install pillow

# For data processing (CSV, Excel)
pip install pandas
```

## Testing Status

All tests passing as of October 12th, 2025.

**Test Environment:**
- Hardware: MacBook Pro (14-inch, Nov 2024)
- Chip: Apple M4 Max
- Memory: 128 GB
- Python: 3.12.2

## Quick Links

- **[📚 Documentation Index](docs/)** - Complete documentation navigation guide
- **[🔍 Interaction Tracing](docs/interaction-tracing.md)** - LLM observability and debugging ⭐ NEW
- **[Getting Started](docs/getting-started.md)** - 5-minute quick start
- **[⚙️ Prerequisites](docs/prerequisites.md)** - Provider setup (OpenAI, Anthropic, Ollama, etc.)
- **[📖 Python API](docs/api-reference.md)** - Complete Python API reference
- **[🌐 Server Guide](docs/server.md)** - HTTP API server setup
- **[🔧 Troubleshooting](docs/troubleshooting.md)** - Fix common issues
- **[💻 Examples](examples/)** - Working code examples
- **[🐛 Issues](https://github.com/lpalbou/AbstractCore/issues)** - Report bugs
- **[💬 Discussions](https://github.com/lpalbou/AbstractCore/discussions)** - Get help

## Contact

**Maintainer:** Laurent-Philippe Albou
📧 Email: [contact@abstractcore.ai](mailto:contact@abstractcore.ai)

## Contributing

We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.

## License

MIT License - see [LICENSE](LICENSE) file for details.

---

**AbstractCore** - One interface, all LLM providers. Focus on building, not managing API differences.

---

> **Migration Note**: This project was previously known as "AbstractLLM" and has been completely rebranded to "AbstractCore" as of version 2.4.0. See [CHANGELOG.md](CHANGELOG.md) for migration details.