LLM-Bridge 1.12.0__py3-none-any.whl → 1.12.0a0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
llm_bridge-1.12.0.dist-info/METADATA → llm_bridge-1.12.0a0.dist-info/METADATA

@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: LLM-Bridge
- Version: 1.12.0
+ Version: 1.12.0a0
  Summary: A Bridge for LLMs
  Author-email: windsnow1025 <windsnow1025@gmail.com>
  License-Expression: MIT
@@ -30,7 +30,7 @@ Dynamic: license-file

  # LLM Bridge

- LLM Bridge is a unified Python interface for interacting with LLMs, including OpenAI (Native / Azure / GitHub), Gemini (AI Studio / Vertex), Claude, and Grok.
+ LLM Bridge is a unified Python interface for interacting with LLMs, including OpenAI, OpenAI-Azure, OpenAI-GitHub, Gemini, Claude, and Grok.

  GitHub: [https://github.com/windsnow1025/LLM-Bridge](https://github.com/windsnow1025/LLM-Bridge)

@@ -39,31 +39,30 @@ PyPI: [https://pypi.org/project/LLM-Bridge/](https://pypi.org/project/LLM-Bridge
  ## Workflow and Features

  1. **Message Preprocessor**: extracts text content from documents (Word, Excel, PPT, Code files, PDFs) which are not natively supported by the target model.
- 2. **Chat Client Factory**: creates a client for the specific LLM API with model parameters
- 1. **Model Message Converter**: converts general messages to model messages
- 1. **Media Processor**: converts general media (Image, Audio, Video, PDF) to model compatible formats.
+ 2. **Chat Client Factory**: create a client for the specific LLM API with model parameters
+ 1. **Model Message Converter**: convert general messages to model messages
+ 1. **Media Processor**: converts media (Image, Audio, Video, PDF) which are natively supported by the target model into compatible formats.
  3. **Chat Client**: generate stream or non-stream responses
- - **Model Thoughts**: captures and formats the model's thinking process
- - **Code Execution**: auto generate and execute Python code
- - **Web Search + Citations**: extracts and formats citations from search results
- - **Token Counter**: tracks and reports input and output token usage
+ 1. **Model Thoughts**: captures and formats the model's thinking process
+ 2. **Search Citations**: extracts and formats citations from search results
+ 3. **Token Counter**: tracks and reports input and output token usage

- ### Supported Features for API Types
+ ### Model Features

  The features listed represent the maximum capabilities of each API type supported by LLM Bridge.

  | API Type | Input Format | Capabilities | Output Format |
  |----------|--------------------------------|--------------------------------------------------|-------------------|
- | OpenAI | Text, Image, PDF | Thinking, Web Search, Code Execution | Text |
+ | OpenAI | Text, Image | Thinking, Web Search, Code Execution | Text |
  | Gemini | Text, Image, Video, Audio, PDF | Thinking, Web Search + Citations, Code Execution | Text, Image, File |
  | Claude | Text, Image, PDF | Thinking, Web Search, Code Execution | Text |
  | Grok | Text, Image | | Text |

  #### Planned Features

- - Structured Output
- - More features for API Types
- - Native support for Grok
+ - OpenAI: Web Search: Citations, Image Output
+ - Gemini: Code Execution: Code, Code Output
+ - Claude: Code Execution, File Output

  ## Installation

llm_bridge-1.12.0.dist-info/RECORD → llm_bridge-1.12.0a0.dist-info/RECORD

@@ -55,8 +55,8 @@ llm_bridge/type/model_message/claude_message.py,sha256=gYJUTbLUeifQMva3Axarc-VFe
  llm_bridge/type/model_message/gemini_message.py,sha256=mh8pf929g7_NkBzSOwnLXyrwSzTT4yt2FmyX7NZn0sM,4302
  llm_bridge/type/model_message/openai_message.py,sha256=xFaLY-cZoSwNd7E9BSWQjBNcRfCVH11X9s2yxXlctR0,453
  llm_bridge/type/model_message/openai_responses_message.py,sha256=be1q2euA0ybjj4NO6NxOGIRB9eJuXSb4ssUm_bM4Ocs,1529
- llm_bridge-1.12.0.dist-info/licenses/LICENSE,sha256=m6uon-6P_CaiqcBfApMfjG9YRtDxcr40Z52JcqUCEAE,1069
- llm_bridge-1.12.0.dist-info/METADATA,sha256=H-1XWKJ35voqZsQOlo2YVZC-MDq8YI_cT1_QiawJnno,3388
- llm_bridge-1.12.0.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
- llm_bridge-1.12.0.dist-info/top_level.txt,sha256=PtxyrgNX1lSa1Ab_qswg0sekSXejG5zrS6b_v3Po05g,11
- llm_bridge-1.12.0.dist-info/RECORD,,
+ llm_bridge-1.12.0a0.dist-info/licenses/LICENSE,sha256=m6uon-6P_CaiqcBfApMfjG9YRtDxcr40Z52JcqUCEAE,1069
+ llm_bridge-1.12.0a0.dist-info/METADATA,sha256=-OGfwe8cLZCU2LcEhR4X6cE4Ks9-wuzltmcp80y4F3k,3374
+ llm_bridge-1.12.0a0.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ llm_bridge-1.12.0a0.dist-info/top_level.txt,sha256=PtxyrgNX1lSa1Ab_qswg0sekSXejG5zrS6b_v3Po05g,11
+ llm_bridge-1.12.0a0.dist-info/RECORD,,
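For orientation, the README changes above describe a three-step workflow (Message Preprocessor → Chat Client Factory → Chat Client with stream or non-stream output). The diff does not show LLM Bridge's actual API, so the sketch below only illustrates that workflow under assumed names: `preprocess_messages`, `ChatClientFactory`, `create_client`, and `generate_stream` are hypothetical identifiers, not the library's confirmed interface.

```python
# Hypothetical sketch of the workflow described in the README diff above.
# None of these imports or signatures are confirmed by this diff; see the
# project's GitHub README for the real API.
import asyncio

from llm_bridge import ChatClientFactory, Message, preprocess_messages  # hypothetical


async def main() -> None:
    # 1. Message Preprocessor: extract text from documents the model cannot ingest natively
    messages = await preprocess_messages(
        [Message(role="user", content="Summarize the attached report", files=["report.docx"])]
    )

    # 2. Chat Client Factory: build a client for a specific API type and model
    client = ChatClientFactory(api_keys={"OpenAI": "sk-..."}).create_client(
        api_type="OpenAI", model="gpt-4o", temperature=0.7
    )

    # 3. Chat Client: stream the response; model thoughts, citations, and token
    #    counts would be surfaced here per the feature list in the README
    async for chunk in client.generate_stream(messages):
        print(chunk, end="")


asyncio.run(main())
```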