openaivec-1.0.6-py3-none-any.whl → openaivec-1.0.8-py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: openaivec
-Version: 1.0.6
+Version: 1.0.8
 Summary: Generative mutation for tabular calculation
 Project-URL: Homepage, https://microsoft.github.io/openaivec/
 Project-URL: Repository, https://github.com/microsoft/openaivec
@@ -26,7 +26,7 @@ Description-Content-Type: text/markdown
 
 # openaivec
 
-Transform pandas and Spark workflows with AI-powered text processing—batching, caching, and guardrails included.
+Transform pandas and Spark workflows with AI-powered text processing—batching, caching, and guardrails included. Built for OpenAI batch pipelines so you can group prompts, cut API overhead, and keep outputs aligned with your data.
 
 [Contributor guidelines](AGENTS.md)
 
@@ -92,6 +92,7 @@ Batching alone removes most HTTP overhead, and letting batching overlap with con
 ## Why openaivec?
 
 - Drop-in `.ai` and `.aio` accessors keep pandas analysts in familiar tooling.
+- OpenAI batch-optimized: `BatchingMapProxy`/`AsyncBatchingMapProxy` coalesce requests, dedupe prompts, and keep column order stable.
 - Smart batching (`BatchingMapProxy`/`AsyncBatchingMapProxy`) dedupes prompts, preserves order, and releases waiters on failure.
 - Reasoning support mirrors the OpenAI SDK; structured outputs accept Pydantic `response_format`.
 - Built-in caches and retries remove boilerplate; helpers reuse caches across pandas, Spark, and async flows.
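The dedupe-and-preserve-order behavior that the changed README lines attribute to `BatchingMapProxy` can be sketched in a few lines. This is a hypothetical minimal model of the idea (one batched call over unique inputs, results fanned back out in input order), not openaivec's actual implementation; `batched_map` and `fake_api` are illustrative names.

```python
from typing import Callable, TypeVar

T = TypeVar("T")
U = TypeVar("U")

def batched_map(inputs: list[T], fn: Callable[[list[T]], list[U]]) -> list[U]:
    """Call `fn` once on the unique inputs, then fan results back out
    so the output list lines up one-to-one with the input list."""
    unique: list[T] = []
    index: dict[T, int] = {}
    for item in inputs:
        if item not in index:  # dedupe repeated prompts
            index[item] = len(unique)
            unique.append(item)
    results = fn(unique)  # one batched call instead of len(inputs) calls
    return [results[index[item]] for item in inputs]  # restore input order

# Example: a stand-in "API" that uppercases a batch of prompts.
calls = []
def fake_api(batch):
    calls.append(len(batch))
    return [s.upper() for s in batch]

out = batched_map(["hi", "yo", "hi"], fake_api)  # → ["HI", "YO", "HI"], one call of size 2
```

The real proxies layer caching, retries, and waiter release on failure on top of this core idea.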
@@ -100,7 +101,7 @@ Batching alone removes most HTTP overhead, and letting batching overlap with con
 
 # Overview
 
-Vectorized OpenAI access so you process many inputs per call instead of one-by-one. Batching proxies dedupe inputs, enforce ordered outputs, and unblock waiters even on upstream errors. Cache helpers (`responses_with_cache`, Spark UDF builders) plug into the same layer so expensive prompts are reused across pandas, Spark, and async flows. Reasoning models honor SDK semantics. Requires Python 3.10+.
+Vectorized OpenAI batch processing so you handle many inputs per call instead of one-by-one. Batching proxies dedupe inputs, enforce ordered outputs, and unblock waiters even on upstream errors. Cache helpers (`responses_with_cache`, Spark UDF builders) plug into the same layer so expensive prompts are reused across pandas, Spark, and async flows. Reasoning models honor SDK semantics. Requires Python 3.10+.
 
 ## Core Workflows
 
@@ -33,7 +33,7 @@ openaivec/task/nlp/sentiment_analysis.py,sha256=P1AFazqmlE9Dy0OShNOXcY8X5rvsGg7X
 openaivec/task/nlp/translation.py,sha256=IgTy0PQZVF_Q6qis60STim7Vd7rYPVTfTfwP_U1kAKk,6603
 openaivec/task/table/__init__.py,sha256=kJz15WDJXjyC7UIHKBvlTRhCf347PCDMH5T5fONV2sU,83
 openaivec/task/table/fillna.py,sha256=nMlXvlUvyWgM9DxJDeRX3M37jxlqg0MgRet1Ds3ni5Y,6571
-openaivec-1.0.6.dist-info/METADATA,sha256=BGxZEIH0fFnyidYs3GgIYOJv6x9BDRJMmXZ34pEXDbU,13878
-openaivec-1.0.6.dist-info/WHEEL,sha256=WLgqFyCfm_KASv4WHyYy0P3pM_m7J5L9k2skdKLirC8,87
-openaivec-1.0.6.dist-info/licenses/LICENSE,sha256=ws_MuBL-SCEBqPBFl9_FqZkaaydIJmxHrJG2parhU4M,1141
-openaivec-1.0.6.dist-info/RECORD,,
+openaivec-1.0.8.dist-info/METADATA,sha256=FhVcgphOCyS0OzmNRN5C-50wFKST559y9-3IYD-EGEU,14139
+openaivec-1.0.8.dist-info/WHEEL,sha256=WLgqFyCfm_KASv4WHyYy0P3pM_m7J5L9k2skdKLirC8,87
+openaivec-1.0.8.dist-info/licenses/LICENSE,sha256=ws_MuBL-SCEBqPBFl9_FqZkaaydIJmxHrJG2parhU4M,1141
+openaivec-1.0.8.dist-info/RECORD,,