chatterer 0.1.7__tar.gz → 0.1.9__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (32)
  1. {chatterer-0.1.7 → chatterer-0.1.9}/PKG-INFO +169 -166
  2. {chatterer-0.1.7 → chatterer-0.1.9}/README.md +136 -136
  3. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/__init__.py +55 -39
  4. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/language_model.py +492 -371
  5. chatterer-0.1.9/chatterer/messages.py +9 -0
  6. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/strategies/__init__.py +13 -13
  7. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/strategies/atom_of_thoughts.py +975 -975
  8. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/strategies/base.py +14 -14
  9. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/tools/__init__.py +25 -17
  10. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/tools/citation_chunking/__init__.py +3 -3
  11. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/tools/citation_chunking/chunks.py +53 -53
  12. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/tools/citation_chunking/citation_chunker.py +118 -118
  13. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/tools/citation_chunking/citations.py +285 -285
  14. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/tools/citation_chunking/prompt.py +157 -157
  15. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/tools/citation_chunking/reference.py +26 -26
  16. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/tools/citation_chunking/utils.py +138 -138
  17. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/tools/convert_to_text.py +463 -466
  18. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/tools/webpage_to_markdown/__init__.py +4 -4
  19. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/tools/webpage_to_markdown/playwright_bot.py +649 -649
  20. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/tools/webpage_to_markdown/utils.py +329 -329
  21. chatterer-0.1.9/chatterer/utils/__init__.py +15 -0
  22. chatterer-0.1.9/chatterer/utils/code_agent.py +134 -0
  23. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/utils/image.py +288 -284
  24. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer.egg-info/PKG-INFO +169 -166
  25. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer.egg-info/SOURCES.txt +2 -0
  26. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer.egg-info/requires.txt +5 -1
  27. {chatterer-0.1.7 → chatterer-0.1.9}/pyproject.toml +3 -6
  28. {chatterer-0.1.7 → chatterer-0.1.9}/setup.cfg +4 -4
  29. chatterer-0.1.7/chatterer/messages.py +0 -8
  30. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer/py.typed +0 -0
  31. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer.egg-info/dependency_links.txt +0 -0
  32. {chatterer-0.1.7 → chatterer-0.1.9}/chatterer.egg-info/top_level.txt +0 -0
@@ -1,166 +1,169 @@

Only the metadata header of PKG-INFO changed between 0.1.7 and 0.1.9; the long-description body that follows it (the project README, shown under README.md below) is identical in both versions:

```diff
 Metadata-Version: 2.4
 Name: chatterer
-Version: 0.1.7
+Version: 0.1.9
 Summary: The highest-level interface for various LLM APIs.
 Requires-Python: >=3.12
 Description-Content-Type: text/markdown
 Requires-Dist: instructor>=1.7.2
 Requires-Dist: langchain>=0.3.19
 Provides-Extra: dev
 Requires-Dist: neo4j-extension>=0.1.14; extra == "dev"
 Requires-Dist: colorama>=0.4.6; extra == "dev"
 Requires-Dist: ipykernel>=6.29.5; extra == "dev"
 Provides-Extra: conversion
 Requires-Dist: markdownify>=1.1.0; extra == "conversion"
 Requires-Dist: commonmark>=0.9.1; extra == "conversion"
 Requires-Dist: playwright>=1.50.0; extra == "conversion"
 Requires-Dist: pillow>=11.1.0; extra == "conversion"
 Requires-Dist: mistune>=3.1.2; extra == "conversion"
 Requires-Dist: markitdown>=0.0.2; extra == "conversion"
 Requires-Dist: pymupdf>=1.25.4; extra == "conversion"
+Provides-Extra: langchain
+Requires-Dist: chatterer[langchain-providers]; extra == "langchain"
+Requires-Dist: langchain-experimental>=0.3.4; extra == "langchain"
 Provides-Extra: langchain-providers
 Requires-Dist: langchain-openai>=0.3.7; extra == "langchain-providers"
 Requires-Dist: langchain-anthropic>=0.3.8; extra == "langchain-providers"
 Requires-Dist: langchain-google-genai>=2.0.10; extra == "langchain-providers"
 Requires-Dist: langchain-ollama>=0.2.3; extra == "langchain-providers"
 Provides-Extra: all
-Requires-Dist: chatterer[langchain-providers]; extra == "all"
+Requires-Dist: chatterer[langchain]; extra == "all"
 Requires-Dist: chatterer[conversion]; extra == "all"
 Requires-Dist: chatterer[dev]; extra == "all"
```
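In short, 0.1.9 introduces a new `langchain` extra that bundles the provider packages plus `langchain-experimental`, and the `all` extra now pulls it in transitively. Installing with extras uses the standard pip syntax (the quotes guard against shell glob expansion):

```shell
# New in 0.1.9: the "langchain" extra pulls in
# chatterer[langchain-providers] plus langchain-experimental.
pip install "chatterer[langchain]"

# Or everything at once: langchain + conversion + dev extras.
pip install "chatterer[all]"
```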
@@ -1,136 +1,136 @@

Every line of README.md is marked as changed, but the old and new content are identical as rendered; the 0.1.9 copy follows.
# Chatterer

**Simplified, Structured AI Assistant Framework**

`chatterer` is a Python library designed as a type-safe LangChain wrapper for interacting with various language models (OpenAI, Anthropic, Gemini, Ollama, etc.). It supports structured outputs via Pydantic models, plain text responses, and asynchronous calls.

The structured reasoning in `chatterer` is inspired by the [Atom-of-Thought](https://github.com/qixucen/atom) pipeline.

---

## Quick Install

```bash
pip install chatterer
```

---

## Quickstart Example

Generate text quickly using OpenAI:

```python
from chatterer import Chatterer

chat = Chatterer.openai("gpt-4o-mini")
response = chat.generate("What is the meaning of life?")
print(response)
```

Messages can be input as plain strings or structured lists:

```python
response = chat.generate([{"role": "user", "content": "What's 2+2?"}])
print(response)
```

### Structured Output with Pydantic

```python
from pydantic import BaseModel

class AnswerModel(BaseModel):
    question: str
    answer: str

response = chat.generate_pydantic(AnswerModel, "What's the capital of France?")
print(response.question, response.answer)
```

### Async Example

```python
import asyncio

async def main():
    response = await chat.agenerate("Explain async in Python briefly.")
    print(response)

asyncio.run(main())
```

---

## Atom-of-Thought Pipeline (AoT)

`AoTPipeline` provides structured reasoning by:

- Detecting question domains (general, math, coding, philosophy, multihop).
- Decomposing questions recursively.
- Generating direct, decomposition-based, and simplified answers.
- Combining answers via ensemble.

### AoT Usage Example

```python
from chatterer import Chatterer
from chatterer.strategies import AoTStrategy, AoTPipeline

pipeline = AoTPipeline(chatterer=Chatterer.openai(), max_depth=2)
strategy = AoTStrategy(pipeline=pipeline)

question = "What would Newton discover if hit by an apple falling from 100 meters?"
answer = strategy.invoke(question)
print(answer)
```

---

## Supported Models

- **OpenAI**
- **Anthropic**
- **Google Gemini**
- **Ollama** (local models)

Initialize models easily:

```python
openai_chat = Chatterer.openai("gpt-4o-mini")
anthropic_chat = Chatterer.anthropic("claude-3-7-sonnet-20250219")
gemini_chat = Chatterer.google("gemini-2.0-flash")
ollama_chat = Chatterer.ollama("deepseek-r1:1.5b")
```

---

## Advanced Features

- **Streaming responses**
- **Async/Await support**
- **Structured outputs with Pydantic models**

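Streaming appears in the feature list but is not demonstrated in this README. A minimal sketch of the usual consumption pattern follows, using a stand-in generator because the actual chatterer streaming method is not shown here (a `generate_stream`-style API is an assumption):

```python
from typing import Iterator

def token_stream() -> Iterator[str]:
    """Stand-in for a chatterer streaming call such as
    chat.generate_stream(...) -- that method name is an assumption,
    not confirmed by this README."""
    yield from ["Streaming ", "keeps the UI ", "responsive."]

# Consume chunks as they arrive instead of waiting for the full reply.
pieces: list[str] = []
for chunk in token_stream():
    pieces.append(chunk)  # e.g. print(chunk, end="", flush=True)

print("".join(pieces))  # -> Streaming keeps the UI responsive.
```

The same loop shape applies to any chunk-by-chunk LLM response: process each piece as it arrives, then join the pieces if the full text is also needed.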
---

## Logging

Built-in logging for easy debugging:

```python
import logging
logging.basicConfig(level=logging.DEBUG)
```

---

## Contributing

Feel free to open an issue or pull request.

---

## License

MIT License