cross-ai-core-0.2.0.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
--- /dev/null
+++ PKG-INFO
@@ -0,0 +1,109 @@
+ Metadata-Version: 2.4
+ Name: cross-ai-core
+ Version: 0.2.0
+ Summary: Multi-provider AI dispatcher with response caching and error handling.
+ Author: b202i
+ License: MIT
+ Project-URL: Homepage, https://github.com/b202i/cross-ai-core
+ Project-URL: Repository, https://github.com/b202i/cross-ai-core
+ Project-URL: Bug Tracker, https://github.com/b202i/cross-ai-core/issues
+ Keywords: ai,llm,anthropic,openai,gemini,xai,perplexity,multi-provider
+ Classifier: Development Status :: 4 - Beta
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
+ Requires-Python: >=3.10
+ Description-Content-Type: text/markdown
+ Requires-Dist: anthropic>=0.84.0
+ Requires-Dist: google-genai>=1.65.0
+ Requires-Dist: openai>=1.70.0
+ Requires-Dist: requests>=2.32.4
+ Provides-Extra: dev
+ Requires-Dist: pytest>=8.0; extra == "dev"
+ Requires-Dist: pytest-mock>=3.12; extra == "dev"
+
+ # cross-ai-core
+
+ Multi-provider AI dispatcher with MD5-keyed response caching and unified error handling.
+
+ Supports **Anthropic**, **xAI**, **OpenAI**, **Google Gemini**, and **Perplexity** through a single consistent interface.
+
+ ## Requirements
+
+ - **Python 3.10 or newer** (3.11 recommended for development)
+ - No upper version limit; tested on 3.10–3.13
+
+ ## Install
+
+ ```bash
+ pip install cross-ai-core
+ ```
+
+ ## Dependencies
+
+ | Package | Version | Purpose |
+ |---|---|---|
+ | `anthropic` | ≥0.84.0 | Anthropic / Claude API client |
+ | `google-genai` | ≥1.65.0 | Google Gemini API client |
+ | `openai` | ≥1.70.0 | OpenAI and xAI (Grok) API client |
+ | `requests` | ≥2.32.4 | HTTP client for the Perplexity API |
+
+ ## Quick start
+
+ ```python
+ import os
+ from dotenv import load_dotenv
+
+ # Your app loads the keys; the library only reads os.environ.
+ # Note: load_dotenv() does not expand "~", so expand the path explicitly.
+ load_dotenv(os.path.expanduser("~/.crossenv"))
+
+ from cross_ai_core import process_prompt, get_content, get_default_ai
+
+ provider = get_default_ai()  # reads DEFAULT_AI from env, falls back to "xai"
+ result = process_prompt(provider, "Explain transformer attention in 3 sentences.",
+                         verbose=False, use_cache=True)
+ print(get_content(provider, result.response))
+ ```
+
+ ## Configuration (environment variables)
+
+ | Variable | Default | Purpose |
+ |---|---|---|
+ | `DEFAULT_AI` | `xai` | Default provider when none is specified |
+ | `XAI_API_KEY` | — | xAI / Grok API key |
+ | `ANTHROPIC_API_KEY` | — | Anthropic / Claude API key |
+ | `OPENAI_API_KEY` | — | OpenAI API key |
+ | `GEMINI_API_KEY` | — | Google Gemini API key |
+ | `PERPLEXITY_API_KEY` | — | Perplexity API key |
+ | `CROSS_API_CACHE_DIR` | `~/.cross_api_cache/` | Response cache directory |
+ | `CROSS_NO_CACHE` | — | Set to `1` to disable caching globally |
+
+ The library only reads from `os.environ`; it never calls `load_dotenv()` itself.
+ Load your `.env` or `~/.crossenv` before importing.
+
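+ If you prefer not to depend on `python-dotenv`, a minimal sketch of the same idea, with variable names taken from the table above:
+
+ ```python
+ import os
+
+ # Set keys before importing, per the note above.
+ os.environ.setdefault("DEFAULT_AI", "anthropic")
+ os.environ.setdefault("ANTHROPIC_API_KEY", "sk-ant-placeholder")  # not a real key
+
+ from cross_ai_core import process_prompt  # imported only after the keys are set
+ ```
+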
+ ## Caching
+
+ Responses are cached by the MD5 hash of the request payload in `~/.cross_api_cache/`.
+ The cache is safe to delete at any time.
+
+ ```python
+ # Bypass cache for one call
+ result = process_prompt(provider, prompt, verbose=False, use_cache=False)
+
+ # Check if a response was served from cache
+ if result.was_cached:
+     print("from cache")
+ ```
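+
+ Because entries are keyed by a hash of the payload, a single entry can be evicted without clearing the whole directory. A sketch, assuming the MD5-of-sorted-JSON scheme used by the bundled provider handlers (the payload below is illustrative):
+
+ ```python
+ import hashlib
+ import json
+ import os
+
+ # MD5 of the JSON-serialized payload with sorted keys (the handlers' scheme).
+ payload = {"model": "claude-opus-4-5", "messages": [{"role": "user", "content": "hi"}]}
+ key = hashlib.md5(json.dumps(payload, sort_keys=True).encode("utf-8")).hexdigest()
+
+ cache_file = os.path.join(os.path.expanduser("~/.cross_api_cache"), key + ".json")
+ if os.path.exists(cache_file):
+     os.remove(cache_file)  # evict just this entry
+ ```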
+
+ ## Adding a provider
+
+ 1. Create `cross_ai_core/ai_<name>.py` implementing `BaseAIHandler`
+    (`get_payload`, `get_client`, `get_cached_response`, `get_model`, `get_make`,
+    `get_content`, `put_content`, `get_data_content`, `get_title`, `get_usage`);
+    a skeleton sketch follows this list.
+ 2. Register it in `cross_ai_core/ai_handler.py`: add the handler to `AI_HANDLER_REGISTRY` and `AI_LIST`.
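+
+ A skeleton of step 1 for a hypothetical provider named `acme`; the class name and payload shape are illustrative, not part of the package:
+
+ ```python
+ from cross_ai_core.ai_base import BaseAIHandler
+
+
+ class AcmeHandler(BaseAIHandler):
+
+     @classmethod
+     def get_payload(cls, prompt: str):
+         # Payload shape is provider-specific; chat-style messages shown here.
+         return {"model": "acme-1", "messages": [{"role": "user", "content": prompt}]}
+
+     @classmethod
+     def get_client(cls):
+         raise NotImplementedError("return the provider's SDK client here")
+
+     @classmethod
+     def get_cached_response(cls, client, payload, verbose, use_cache):
+         raise NotImplementedError("mirror the MD5 caching used by the bundled handlers")
+ ```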
+
+ ## License
+
+ MIT
--- /dev/null
+++ README.md
@@ -0,0 +1,80 @@
+ # cross-ai-core
+
+ Multi-provider AI dispatcher with MD5-keyed response caching and unified error handling.
+
+ Supports **Anthropic**, **xAI**, **OpenAI**, **Google Gemini**, and **Perplexity** through a single consistent interface.
+
+ ## Requirements
+
+ - **Python 3.10 or newer** (3.11 recommended for development)
+ - No upper version limit; tested on 3.10–3.13
+
+ ## Install
+
+ ```bash
+ pip install cross-ai-core
+ ```
+
+ ## Dependencies
+
+ | Package | Version | Purpose |
+ |---|---|---|
+ | `anthropic` | ≥0.84.0 | Anthropic / Claude API client |
+ | `google-genai` | ≥1.65.0 | Google Gemini API client |
+ | `openai` | ≥1.70.0 | OpenAI and xAI (Grok) API client |
+ | `requests` | ≥2.32.4 | HTTP client for the Perplexity API |
+
+ ## Quick start
+
+ ```python
+ import os
+ from dotenv import load_dotenv
+
+ # Your app loads the keys; the library only reads os.environ.
+ # Note: load_dotenv() does not expand "~", so expand the path explicitly.
+ load_dotenv(os.path.expanduser("~/.crossenv"))
+
+ from cross_ai_core import process_prompt, get_content, get_default_ai
+
+ provider = get_default_ai()  # reads DEFAULT_AI from env, falls back to "xai"
+ result = process_prompt(provider, "Explain transformer attention in 3 sentences.",
+                         verbose=False, use_cache=True)
+ print(get_content(provider, result.response))
+ ```
+
+ ## Configuration (environment variables)
+
+ | Variable | Default | Purpose |
+ |---|---|---|
+ | `DEFAULT_AI` | `xai` | Default provider when none is specified |
+ | `XAI_API_KEY` | — | xAI / Grok API key |
+ | `ANTHROPIC_API_KEY` | — | Anthropic / Claude API key |
+ | `OPENAI_API_KEY` | — | OpenAI API key |
+ | `GEMINI_API_KEY` | — | Google Gemini API key |
+ | `PERPLEXITY_API_KEY` | — | Perplexity API key |
+ | `CROSS_API_CACHE_DIR` | `~/.cross_api_cache/` | Response cache directory |
+ | `CROSS_NO_CACHE` | — | Set to `1` to disable caching globally |
+
+ The library only reads from `os.environ`; it never calls `load_dotenv()` itself.
+ Load your `.env` or `~/.crossenv` before importing.
+
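+ If you prefer not to depend on `python-dotenv`, a minimal sketch of the same idea, with variable names taken from the table above:
+
+ ```python
+ import os
+
+ # Set keys before importing, per the note above.
+ os.environ.setdefault("DEFAULT_AI", "anthropic")
+ os.environ.setdefault("ANTHROPIC_API_KEY", "sk-ant-placeholder")  # not a real key
+
+ from cross_ai_core import process_prompt  # imported only after the keys are set
+ ```
+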
+ ## Caching
+
+ Responses are cached by the MD5 hash of the request payload in `~/.cross_api_cache/`.
+ The cache is safe to delete at any time.
+
+ ```python
+ # Bypass cache for one call
+ result = process_prompt(provider, prompt, verbose=False, use_cache=False)
+
+ # Check if a response was served from cache
+ if result.was_cached:
+     print("from cache")
+ ```
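+
+ Because entries are keyed by a hash of the payload, a single entry can be evicted without clearing the whole directory. A sketch, assuming the MD5-of-sorted-JSON scheme used by the bundled provider handlers (the payload below is illustrative):
+
+ ```python
+ import hashlib
+ import json
+ import os
+
+ # MD5 of the JSON-serialized payload with sorted keys (the handlers' scheme).
+ payload = {"model": "claude-opus-4-5", "messages": [{"role": "user", "content": "hi"}]}
+ key = hashlib.md5(json.dumps(payload, sort_keys=True).encode("utf-8")).hexdigest()
+
+ cache_file = os.path.join(os.path.expanduser("~/.cross_api_cache"), key + ".json")
+ if os.path.exists(cache_file):
+     os.remove(cache_file)  # evict just this entry
+ ```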
+
+ ## Adding a provider
+
+ 1. Create `cross_ai_core/ai_<name>.py` implementing `BaseAIHandler`
+    (`get_payload`, `get_client`, `get_cached_response`, `get_model`, `get_make`,
+    `get_content`, `put_content`, `get_data_content`, `get_title`, `get_usage`);
+    a skeleton sketch follows this list.
+ 2. Register it in `cross_ai_core/ai_handler.py`: add the handler to `AI_HANDLER_REGISTRY` and `AI_LIST`.
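+
+ A skeleton of step 1 for a hypothetical provider named `acme`; the class name and payload shape are illustrative, not part of the package:
+
+ ```python
+ from cross_ai_core.ai_base import BaseAIHandler
+
+
+ class AcmeHandler(BaseAIHandler):
+
+     @classmethod
+     def get_payload(cls, prompt: str):
+         # Payload shape is provider-specific; chat-style messages shown here.
+         return {"model": "acme-1", "messages": [{"role": "user", "content": prompt}]}
+
+     @classmethod
+     def get_client(cls):
+         raise NotImplementedError("return the provider's SDK client here")
+
+     @classmethod
+     def get_cached_response(cls, client, payload, verbose, use_cache):
+         raise NotImplementedError("mirror the MD5 caching used by the bundled handlers")
+ ```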
+
+ ## License
+
+ MIT
--- /dev/null
+++ cross_ai_core/__init__.py
@@ -0,0 +1,76 @@
+ """
+ cross_ai_core — Multi-provider AI dispatcher with caching and error handling.
+
+ Public API
+ ----------
+ ::
+
+     from cross_ai_core import process_prompt, get_content, get_default_ai
+
+     result = process_prompt("gemini", prompt, verbose=False, use_cache=True)
+     text = get_content("gemini", result.response)
+
+ Supported providers: xai, anthropic, openai, perplexity, gemini
+
+ Adding a provider
+ -----------------
+ 1. Create ``cross_ai_core/ai_<name>.py`` implementing ``BaseAIHandler``.
+ 2. Register in ``cross_ai_core/ai_handler.py``: add to ``AI_HANDLER_REGISTRY``
+    and ``AI_LIST``.
+
+ Cache
+ -----
+ Responses are cached by default in ``~/.cross_api_cache/``.
+ Override with the ``CROSS_API_CACHE_DIR`` environment variable.
+ Bypass per-call with ``use_cache=False``, or globally with ``CROSS_NO_CACHE=1``.
+
+ Keys
+ ----
+ The library reads API keys from ``os.environ``. The calling application is
+ responsible for loading ``.env`` or ``~/.crossenv`` before importing.
+ """
+
+ from cross_ai_core.ai_handler import (  # noqa: F401
+     AI_HANDLER_REGISTRY,
+     AI_LIST,
+     AIResponse,
+     check_api_key,
+     get_ai_list,
+     get_ai_make,
+     get_ai_model,
+     get_content,
+     get_data_content,
+     get_data_title,
+     get_default_ai,
+     get_usage,
+     process_prompt,
+     put_content,
+ )
+ from cross_ai_core.ai_base import BaseAIHandler, _get_cache_dir  # noqa: F401
+ from cross_ai_core.ai_error_handler import handle_api_error  # noqa: F401
+
+ __version__ = "0.2.0"
+ __all__ = [
+     # Core dispatch
+     "process_prompt",
+     "get_content",
+     "put_content",
+     "get_data_content",
+     "get_data_title",
+     "get_default_ai",
+     "get_ai_model",
+     "get_ai_make",
+     "get_ai_list",
+     "get_usage",
+     "check_api_key",
+     # Registry
+     "AI_HANDLER_REGISTRY",
+     "AI_LIST",
+     "AIResponse",
+     # Extension points
+     "BaseAIHandler",
+     "_get_cache_dir",
+     # Error handling
+     "handle_api_error",
+ ]
+
--- /dev/null
+++ cross_ai_core/ai_anthropic.py
@@ -0,0 +1,234 @@
+ import hashlib
+ import json
+ import os
+
+ from anthropic import Anthropic
+
+ from .ai_base import BaseAIHandler, _get_cache_dir
+
+ AI_MAKE = "anthropic"
+ AI_MODEL = "claude-opus-4-5"  # 02mar26
+ MAX_TOKENS = 16000  # min 16000 required for extended thinking
+
+ """
+ Model options for the direct Anthropic API:
+
+ # Claude 3 Family (Legacy)
+ claude-3-haiku-20240307
+ claude-3-sonnet-20240229
+ claude-3-opus-20240229
+
+ # Claude 3.5 Family
+ claude-3-5-haiku-20241022
+ claude-3-5-sonnet-v2-20241022
+
+ # Claude 3.7 Family
+ claude-3-7-sonnet-20250219  # First hybrid reasoning model; extended thinking mode
+
+ # Claude 4 Family (Current - 2025/2026)
+ claude-sonnet-4-5  # Latest Sonnet; fast, highly capable, best for most tasks
+ claude-opus-4-5    # Most intelligent Claude 4 model; best for complex reasoning
+
+ # Differences Between Claude Model Families
+
+ Claude 4 Family (Current)
+
+ claude-opus-4-5:
+ - Most intelligent model in the Claude 4 family
+ - Best for advanced reasoning, research synthesis, and complex multi-step tasks
+ - Highest capability, higher cost; ideal for wiki-style report generation
+ - Supports extended thinking for deep analytical work
+
+ claude-sonnet-4-5:
+ - Balanced intelligence and speed in the Claude 4 family
+ - Best for high-throughput tasks, data processing, and content generation
+ - Faster and more cost-effective than Opus 4
+
+ Claude 3.7 Family
+
+ claude-3-7-sonnet-20250219:
+ - First hybrid reasoning model on the market
+ - Extended thinking capabilities for complex problem-solving and nuanced analysis
+ - Strong for strategic analysis and advanced coding tasks
+
+ Claude 3.5 Family
+
+ claude-3-5-sonnet-v2-20241022:
+ - Most intelligent in the 3.5 family; balances top-tier performance with speed
+ claude-3-5-haiku-20241022:
+ - Fastest and most cost-effective in the 3.5 family
+
+ Claude 3 Family (Legacy)
+
+ Opus: Strong on highly complex tasks like math and coding; R&D and advanced analysis
+ Sonnet: Balances intelligence and speed for high-throughput tasks
+ Haiku: Near-instant responsiveness; live support, translations, content moderation
+ """
+
+
+ class AnthropicHandler(BaseAIHandler):
+
+     @classmethod
+     def get_payload(cls, prompt: str):
+         return get_anthropic_payload(prompt)
+
+     @classmethod
+     def get_client(cls):
+         return get_anthropic_client()
+
+     @classmethod
+     def get_cached_response(cls, client, payload, verbose, use_cache):
+         return get_anthropic_cached_response(client, payload, verbose, use_cache)
+
+     @classmethod
+     def get_model(cls):
+         return AI_MODEL
+
+     @classmethod
+     def get_make(cls):
+         return AI_MAKE
+
+     @classmethod
+     def get_content(cls, gen_content):
+         return get_content(gen_content)
+
+     @classmethod
+     def put_content(cls, report, gen_content):
+         return put_content(report, gen_content)
+
+     @classmethod
+     def get_data_content(cls, select_data):
+         return get_data_content(select_data)
+
+     @classmethod
+     def get_title(cls, gen_content):
+         return get_title(gen_content)
+
+     @classmethod
+     def get_usage(cls, response: dict) -> dict:
+         """Extract token counts from an Anthropic-format response dict.
+         Note: extended thinking tokens are included in output_tokens."""
+         usage = response.get("usage", {})
+         inp = usage.get("input_tokens", 0)
+         out = usage.get("output_tokens", 0)
+         return {"input_tokens": inp, "output_tokens": out, "total_tokens": inp + out}
+
+
+ def get_anthropic_cached_response(client, payload, verbose=False, use_cache=True):
+     if not use_cache:
+         if verbose:
+             print("Cache disabled; fetching fresh data.")
+         response = client.messages.create(**payload)
+         json_response = json.loads(response.to_json())
+         return json_response, False  # not cached
+
+     # Convert the payload to a canonical string and hash it for the cache key
+     param_str = json.dumps(payload, sort_keys=True)
+     md5_hash = hashlib.md5(param_str.encode('utf-8')).hexdigest()
+
+     # Construct the cache file path
+     cache_dir = _get_cache_dir()
+     cache_file = os.path.join(cache_dir, f"{md5_hash}.json")
+
+     # Serve from cache if the response is already there
+     if os.path.exists(cache_file):
+         if verbose:
+             print(f"api_cache: using cached response: {cache_file}")
+         with open(cache_file, 'r') as f:
+             return json.load(f), True  # cached
+     if verbose:
+         print("api_cache: cache miss, submitting API request")
+
+     # Not in cache: fetch the response
+     response = client.messages.create(**payload)
+     json_str = response.to_json()
+
+     try:
+         json_response = json.loads(json_str)
+     except ValueError as e:
+         print(f"api_cache: failed json conversion: {e}")
+         return {}, False  # keep the (response, was_cached) contract on failure
+
+     # Save to cache
+     if not os.path.exists(cache_dir):
+         os.makedirs(cache_dir, exist_ok=True)
+         if verbose:
+             print(f"api_cache: cache dir created: {cache_dir}")
+
+     try:
+         with open(cache_file, 'w') as f:
+             json.dump(json_response, f)
+     except Exception as e:
+         print(f"api_cache: An error occurred: {str(e)}")
+
+     if verbose:
+         print(f"api_cache: cache file created: {cache_file}")
+
+     return json_response, False  # fresh API call, not cached
+
+
+ def get_anthropic_payload(prompt):
+     gen_payload = {  # parameters for the API call, saved alongside the output
+         "model": AI_MODEL,
+         "max_tokens": MAX_TOKENS,
+         "thinking": {  # Extended thinking: deep reasoning mode (Claude 4 / 3.7+)
+             "type": "enabled",
+             "budget_tokens": 10000,  # Tokens reserved for internal reasoning (must be < max_tokens)
+         },
+         "system": "You are a seasoned investigative reporter, "
+                   "striving to be accurate, fair and balanced.",
+         "messages": [
+             {
+                 "role": "user",
+                 "content": prompt,
+             }
+         ],
+     }
+     return gen_payload
+
+
+ def get_anthropic_client():
+     client = Anthropic(
+         api_key=os.getenv("ANTHROPIC_API_KEY"),
+     )
+     return client
+
+
+ def get_title(story_instance):
+     content = get_data_content(story_instance)
+     lines = content.splitlines()
+     return lines[0] if lines else ""  # first line is the title; guard against empty content
+
+
+ def get_content(gen_response):
+     # Extended thinking returns multiple blocks; find the text block
+     for block in gen_response["content"]:
+         if block.get("type") == "text":
+             return block["text"]
+     return gen_response["content"][0]["text"]  # fallback
+
+
+ def put_content(report, gen_response):
+     # Extended thinking returns multiple blocks; update the text block
+     for block in gen_response["content"]:
+         if block.get("type") == "text":
+             block["text"] = report
+             return gen_response
+     gen_response["content"][0]["text"] = report  # fallback
+     return gen_response
+
+
+ def get_data_content(select_data):
+     # Extended thinking responses contain multiple blocks; find the text block
+     for block in select_data["gen_response"]["content"]:
+         if block.get("type") == "text":
+             return block["text"]
+     return select_data["gen_response"]["content"][0]["text"]  # fallback
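
For orientation, a sketch of driving this handler directly; the module path `cross_ai_core.ai_anthropic` is assumed from the `ai_<name>.py` convention, and `ANTHROPIC_API_KEY` must be set (the normal entry point is `process_prompt`):

```python
from cross_ai_core.ai_anthropic import AnthropicHandler

payload = AnthropicHandler.get_payload("Summarize the CAP theorem in one line.")
client = AnthropicHandler.get_client()
response, was_cached = AnthropicHandler.get_cached_response(
    client, payload, verbose=True, use_cache=True
)
print(AnthropicHandler.get_content(response))
```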
--- /dev/null
+++ cross_ai_core/ai_base.py
@@ -0,0 +1,52 @@
+ """
+ ai_base.py — Abstract base class for all cross-ai-core provider handlers.
+
+ Every provider (Anthropic, xAI, OpenAI, Gemini, Perplexity) inherits from
+ BaseAIHandler and implements each classmethod.
+
+ Cache directory
+ ---------------
+ Provider handlers cache responses in a directory resolved by
+ ``_get_cache_dir()``. The location defaults to ``~/.cross_api_cache/`` and
+ can be overridden by setting ``CROSS_API_CACHE_DIR`` in the environment (or
+ in ``~/.crossenv`` before importing the library):
+
+     CROSS_API_CACHE_DIR=~/my-project/cache
+ """
+
+ import os
+ from abc import ABC, abstractmethod
+
+
+ def _get_cache_dir() -> str:
+     """
+     Return the API response cache directory path (as a string).
+
+     Resolution order:
+     1. ``CROSS_API_CACHE_DIR`` environment variable
+     2. ``~/.cross_api_cache/`` (default)
+     """
+     env_val = os.environ.get("CROSS_API_CACHE_DIR", "").strip()
+     return os.path.expanduser(env_val if env_val else "~/.cross_api_cache")
+
+
+ class BaseAIHandler(ABC):  # Abstract Base Class
+
+     @classmethod
+     @abstractmethod
+     def get_payload(cls, prompt: str):
+         """Build the provider-specific request payload for the given prompt."""
+
+     @classmethod
+     @abstractmethod
+     def get_client(cls, *args, **kwargs):
+         """Return the client for making API requests."""
+
+     @classmethod
+     @abstractmethod
+     def get_cached_response(cls, *args, **kwargs):
+         """Return a ``(response, was_cached)`` tuple, consulting the cache when enabled."""
+
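
A sketch of the override described in `ai_base.py`'s docstring; note that `_get_cache_dir()` calls `expanduser` itself, so a tilde in the variable is fine:

```python
import os

# Point the cache at a project-local directory before using the library.
os.environ["CROSS_API_CACHE_DIR"] = "~/my-project/cache"

from cross_ai_core.ai_base import _get_cache_dir
print(_get_cache_dir())  # e.g. /home/you/my-project/cache
```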