chatterer 0.1.5__tar.gz → 0.1.7__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (36)
  1. {chatterer-0.1.5 → chatterer-0.1.7}/PKG-INFO +166 -150
  2. {chatterer-0.1.5 → chatterer-0.1.7}/README.md +136 -136
  3. chatterer-0.1.7/chatterer/__init__.py +39 -0
  4. chatterer-0.1.7/chatterer/language_model.py +371 -0
  5. chatterer-0.1.7/chatterer/messages.py +8 -0
  6. chatterer-0.1.7/chatterer/py.typed +0 -0
  7. chatterer-0.1.7/chatterer/strategies/__init__.py +13 -0
  8. chatterer-0.1.7/chatterer/strategies/atom_of_thoughts.py +975 -0
  9. {chatterer-0.1.5 → chatterer-0.1.7}/chatterer/strategies/base.py +14 -14
  10. chatterer-0.1.7/chatterer/tools/__init__.py +17 -0
  11. chatterer-0.1.7/chatterer/tools/citation_chunking/__init__.py +3 -0
  12. chatterer-0.1.7/chatterer/tools/citation_chunking/chunks.py +53 -0
  13. chatterer-0.1.7/chatterer/tools/citation_chunking/citation_chunker.py +118 -0
  14. chatterer-0.1.7/chatterer/tools/citation_chunking/citations.py +285 -0
  15. chatterer-0.1.7/chatterer/tools/citation_chunking/prompt.py +157 -0
  16. chatterer-0.1.7/chatterer/tools/citation_chunking/reference.py +26 -0
  17. chatterer-0.1.7/chatterer/tools/citation_chunking/utils.py +138 -0
  18. chatterer-0.1.7/chatterer/tools/convert_to_text.py +466 -0
  19. chatterer-0.1.7/chatterer/tools/webpage_to_markdown/__init__.py +4 -0
  20. chatterer-0.1.7/chatterer/tools/webpage_to_markdown/playwright_bot.py +649 -0
  21. chatterer-0.1.7/chatterer/tools/webpage_to_markdown/utils.py +329 -0
  22. chatterer-0.1.7/chatterer/utils/image.py +284 -0
  23. {chatterer-0.1.5 → chatterer-0.1.7}/chatterer.egg-info/PKG-INFO +166 -150
  24. chatterer-0.1.7/chatterer.egg-info/SOURCES.txt +27 -0
  25. chatterer-0.1.7/chatterer.egg-info/requires.txt +27 -0
  26. chatterer-0.1.7/pyproject.toml +30 -0
  27. {chatterer-0.1.5 → chatterer-0.1.7}/setup.cfg +4 -4
  28. chatterer-0.1.5/chatterer/__init__.py +0 -21
  29. chatterer-0.1.5/chatterer/language_model.py +0 -608
  30. chatterer-0.1.5/chatterer/strategies/__init__.py +0 -19
  31. chatterer-0.1.5/chatterer/strategies/atom_of_thoughts.py +0 -594
  32. chatterer-0.1.5/chatterer.egg-info/SOURCES.txt +0 -12
  33. chatterer-0.1.5/chatterer.egg-info/requires.txt +0 -8
  34. chatterer-0.1.5/pyproject.toml +0 -15
  35. {chatterer-0.1.5 → chatterer-0.1.7}/chatterer.egg-info/dependency_links.txt +0 -0
  36. {chatterer-0.1.5 → chatterer-0.1.7}/chatterer.egg-info/top_level.txt +0 -0
{chatterer-0.1.5 → chatterer-0.1.7}/PKG-INFO
@@ -1,150 +1,166 @@
- Metadata-Version: 2.2
- Name: chatterer
- Version: 0.1.5
- Summary: The highest-level interface for various LLM APIs.
- Requires-Python: >=3.12
- Description-Content-Type: text/markdown
- Requires-Dist: instructor>=1.7.2
- Requires-Dist: langchain>=0.3.19
- Provides-Extra: all
- Requires-Dist: langchain-openai>=0.3.7; extra == "all"
- Requires-Dist: langchain-anthropic>=0.3.8; extra == "all"
- Requires-Dist: langchain-google-genai>=2.0.10; extra == "all"
- Requires-Dist: langchain-ollama>=0.2.3; extra == "all"
-
- # Chatterer
-
- **Simplified, Structured AI Assistant Framework**
-
- `chatterer` is a Python library designed as a type-safe LangChain wrapper for interacting with various language models (OpenAI, Anthropic, Gemini, Ollama, etc.). It supports structured outputs via Pydantic models, plain text responses, and asynchronous calls.
-
- The structured reasoning in `chatterer` is inspired by the [Atom-of-Thought](https://github.com/qixucen/atom) pipeline.
-
- ---
-
- ## Quick Install
-
- ```bash
- pip install chatterer
- ```
-
- ---
-
- ## Quickstart Example
-
- Generate text quickly using OpenAI:
-
- ```python
- from chatterer import Chatterer
-
- chat = Chatterer.openai("gpt-4o-mini")
- response = chat.generate("What is the meaning of life?")
- print(response)
- ```
-
- Messages can be input as plain strings or structured lists:
-
- ```python
- response = chat.generate([{ "role": "user", "content": "What's 2+2?" }])
- print(response)
- ```
-
- ### Structured Output with Pydantic
-
- ```python
- from pydantic import BaseModel
-
- class AnswerModel(BaseModel):
-     question: str
-     answer: str
-
- response = chat.generate_pydantic(AnswerModel, "What's the capital of France?")
- print(response.question, response.answer)
- ```
-
- ### Async Example
-
- ```python
- import asyncio
-
- async def main():
-     response = await chat.agenerate("Explain async in Python briefly.")
-     print(response)
-
- asyncio.run(main())
- ```
-
- ---
-
- ## Atom-of-Thought Pipeline (AoT)
-
- `AoTPipeline` provides structured reasoning by:
-
- - Detecting question domains (general, math, coding, philosophy, multihop).
- - Decomposing questions recursively.
- - Generating direct, decomposition-based, and simplified answers.
- - Combining answers via ensemble.
-
- ### AoT Usage Example
-
- ```python
- from chatterer import Chatterer
- from chatterer.strategies import AoTStrategy, AoTPipeline
-
- pipeline = AoTPipeline(chatterer=Chatterer.openai(), max_depth=2)
- strategy = AoTStrategy(pipeline=pipeline)
-
- question = "What would Newton discover if hit by an apple falling from 100 meters?"
- answer = strategy.invoke(question)
- print(answer)
- ```
-
- ---
-
- ## Supported Models
-
- - **OpenAI**
- - **Anthropic**
- - **Google Gemini**
- - **Ollama** (local models)
-
- Initialize models easily:
-
- ```python
- openai_chat = Chatterer.openai("gpt-4o-mini")
- anthropic_chat = Chatterer.anthropic("claude-3-7-sonnet-20250219")
- gemini_chat = Chatterer.google("gemini-2.0-flash")
- ollama_chat = Chatterer.ollama("deepseek-r1:1.5b")
- ```
-
- ---
-
- ## Advanced Features
-
- - **Streaming responses**
- - **Async/Await support**
- - **Structured outputs with Pydantic models**
-
- ---
-
- ## Logging
-
- Built-in logging for easy debugging:
-
- ```python
- import logging
- logging.basicConfig(level=logging.DEBUG)
- ```
-
- ---
-
- ## Contributing
-
- Feel free to open an issue or pull request.
-
- ---
-
- ## License
-
- MIT License
-
+ Metadata-Version: 2.4
+ Name: chatterer
+ Version: 0.1.7
+ Summary: The highest-level interface for various LLM APIs.
+ Requires-Python: >=3.12
+ Description-Content-Type: text/markdown
+ Requires-Dist: instructor>=1.7.2
+ Requires-Dist: langchain>=0.3.19
+ Provides-Extra: dev
+ Requires-Dist: neo4j-extension>=0.1.14; extra == "dev"
+ Requires-Dist: colorama>=0.4.6; extra == "dev"
+ Requires-Dist: ipykernel>=6.29.5; extra == "dev"
+ Provides-Extra: conversion
+ Requires-Dist: markdownify>=1.1.0; extra == "conversion"
+ Requires-Dist: commonmark>=0.9.1; extra == "conversion"
+ Requires-Dist: playwright>=1.50.0; extra == "conversion"
+ Requires-Dist: pillow>=11.1.0; extra == "conversion"
+ Requires-Dist: mistune>=3.1.2; extra == "conversion"
+ Requires-Dist: markitdown>=0.0.2; extra == "conversion"
+ Requires-Dist: pymupdf>=1.25.4; extra == "conversion"
+ Provides-Extra: langchain-providers
+ Requires-Dist: langchain-openai>=0.3.7; extra == "langchain-providers"
+ Requires-Dist: langchain-anthropic>=0.3.8; extra == "langchain-providers"
+ Requires-Dist: langchain-google-genai>=2.0.10; extra == "langchain-providers"
+ Requires-Dist: langchain-ollama>=0.2.3; extra == "langchain-providers"
+ Provides-Extra: all
+ Requires-Dist: chatterer[langchain-providers]; extra == "all"
+ Requires-Dist: chatterer[conversion]; extra == "all"
+ Requires-Dist: chatterer[dev]; extra == "all"
+
+ # Chatterer
+
+ **Simplified, Structured AI Assistant Framework**
+
+ `chatterer` is a Python library designed as a type-safe LangChain wrapper for interacting with various language models (OpenAI, Anthropic, Gemini, Ollama, etc.). It supports structured outputs via Pydantic models, plain text responses, and asynchronous calls.
+
+ The structured reasoning in `chatterer` is inspired by the [Atom-of-Thought](https://github.com/qixucen/atom) pipeline.
+
+ ---
+
+ ## Quick Install
+
+ ```bash
+ pip install chatterer
+ ```
+
+ ---
+
+ ## Quickstart Example
+
+ Generate text quickly using OpenAI:
+
+ ```python
+ from chatterer import Chatterer
+
+ chat = Chatterer.openai("gpt-4o-mini")
+ response = chat.generate("What is the meaning of life?")
+ print(response)
+ ```
+
+ Messages can be input as plain strings or structured lists:
+
+ ```python
+ response = chat.generate([{ "role": "user", "content": "What's 2+2?" }])
+ print(response)
+ ```
+
+ ### Structured Output with Pydantic
+
+ ```python
+ from pydantic import BaseModel
+
+ class AnswerModel(BaseModel):
+     question: str
+     answer: str
+
+ response = chat.generate_pydantic(AnswerModel, "What's the capital of France?")
+ print(response.question, response.answer)
+ ```
+
+ ### Async Example
+
+ ```python
+ import asyncio
+
+ async def main():
+     response = await chat.agenerate("Explain async in Python briefly.")
+     print(response)
+
+ asyncio.run(main())
+ ```
+
+ ---
+
+ ## Atom-of-Thought Pipeline (AoT)
+
+ `AoTPipeline` provides structured reasoning by:
+
+ - Detecting question domains (general, math, coding, philosophy, multihop).
+ - Decomposing questions recursively.
+ - Generating direct, decomposition-based, and simplified answers.
+ - Combining answers via ensemble.
+
+ ### AoT Usage Example
+
+ ```python
+ from chatterer import Chatterer
+ from chatterer.strategies import AoTStrategy, AoTPipeline
+
+ pipeline = AoTPipeline(chatterer=Chatterer.openai(), max_depth=2)
+ strategy = AoTStrategy(pipeline=pipeline)
+
+ question = "What would Newton discover if hit by an apple falling from 100 meters?"
+ answer = strategy.invoke(question)
+ print(answer)
+ ```
+
+ ---
+
+ ## Supported Models
+
+ - **OpenAI**
+ - **Anthropic**
+ - **Google Gemini**
+ - **Ollama** (local models)
+
+ Initialize models easily:
+
+ ```python
+ openai_chat = Chatterer.openai("gpt-4o-mini")
+ anthropic_chat = Chatterer.anthropic("claude-3-7-sonnet-20250219")
+ gemini_chat = Chatterer.google("gemini-2.0-flash")
+ ollama_chat = Chatterer.ollama("deepseek-r1:1.5b")
+ ```
+
+ ---
+
+ ## Advanced Features
+
+ - **Streaming responses**
+ - **Async/Await support**
+ - **Structured outputs with Pydantic models**
+
+ ---
+
+ ## Logging
+
+ Built-in logging for easy debugging:
+
+ ```python
+ import logging
+ logging.basicConfig(level=logging.DEBUG)
+ ```
+
+ ---
+
+ ## Contributing
+
+ Feel free to open an issue or pull request.
+
+ ---
+
+ ## License
+
+ MIT License
+
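The metadata above splits the former catch-all `all` extra into `dev`, `conversion`, and `langchain-providers` groups, with `all` now aggregating the other three via self-references. A sketch of how the new groups would be installed, following the README's own `bash` convention (commands assume a standard pip; quoting guards against shell glob expansion of the brackets):

```shell
# Install only the LangChain provider integrations (OpenAI, Anthropic, Gemini, Ollama)
pip install "chatterer[langchain-providers]"

# Add the document-conversion stack (playwright, pymupdf, markitdown, ...)
pip install "chatterer[conversion]"

# Or pull in every optional group at once
pip install "chatterer[all]"
```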
{chatterer-0.1.5 → chatterer-0.1.7}/README.md
@@ -1,136 +1,136 @@
- # Chatterer
-
- **Simplified, Structured AI Assistant Framework**
-
- `chatterer` is a Python library designed as a type-safe LangChain wrapper for interacting with various language models (OpenAI, Anthropic, Gemini, Ollama, etc.). It supports structured outputs via Pydantic models, plain text responses, and asynchronous calls.
-
- The structured reasoning in `chatterer` is inspired by the [Atom-of-Thought](https://github.com/qixucen/atom) pipeline.
-
- ---
-
- ## Quick Install
-
- ```bash
- pip install chatterer
- ```
-
- ---
-
- ## Quickstart Example
-
- Generate text quickly using OpenAI:
-
- ```python
- from chatterer import Chatterer
-
- chat = Chatterer.openai("gpt-4o-mini")
- response = chat.generate("What is the meaning of life?")
- print(response)
- ```
-
- Messages can be input as plain strings or structured lists:
-
- ```python
- response = chat.generate([{ "role": "user", "content": "What's 2+2?" }])
- print(response)
- ```
-
- ### Structured Output with Pydantic
-
- ```python
- from pydantic import BaseModel
-
- class AnswerModel(BaseModel):
-     question: str
-     answer: str
-
- response = chat.generate_pydantic(AnswerModel, "What's the capital of France?")
- print(response.question, response.answer)
- ```
-
- ### Async Example
-
- ```python
- import asyncio
-
- async def main():
-     response = await chat.agenerate("Explain async in Python briefly.")
-     print(response)
-
- asyncio.run(main())
- ```
-
- ---
-
- ## Atom-of-Thought Pipeline (AoT)
-
- `AoTPipeline` provides structured reasoning by:
-
- - Detecting question domains (general, math, coding, philosophy, multihop).
- - Decomposing questions recursively.
- - Generating direct, decomposition-based, and simplified answers.
- - Combining answers via ensemble.
-
- ### AoT Usage Example
-
- ```python
- from chatterer import Chatterer
- from chatterer.strategies import AoTStrategy, AoTPipeline
-
- pipeline = AoTPipeline(chatterer=Chatterer.openai(), max_depth=2)
- strategy = AoTStrategy(pipeline=pipeline)
-
- question = "What would Newton discover if hit by an apple falling from 100 meters?"
- answer = strategy.invoke(question)
- print(answer)
- ```
-
- ---
-
- ## Supported Models
-
- - **OpenAI**
- - **Anthropic**
- - **Google Gemini**
- - **Ollama** (local models)
-
- Initialize models easily:
-
- ```python
- openai_chat = Chatterer.openai("gpt-4o-mini")
- anthropic_chat = Chatterer.anthropic("claude-3-7-sonnet-20250219")
- gemini_chat = Chatterer.google("gemini-2.0-flash")
- ollama_chat = Chatterer.ollama("deepseek-r1:1.5b")
- ```
-
- ---
-
- ## Advanced Features
-
- - **Streaming responses**
- - **Async/Await support**
- - **Structured outputs with Pydantic models**
-
- ---
-
- ## Logging
-
- Built-in logging for easy debugging:
-
- ```python
- import logging
- logging.basicConfig(level=logging.DEBUG)
- ```
-
- ---
-
- ## Contributing
-
- Feel free to open an issue or pull request.
-
- ---
-
- ## License
-
- MIT License
-
+ # Chatterer
+
+ **Simplified, Structured AI Assistant Framework**
+
+ `chatterer` is a Python library designed as a type-safe LangChain wrapper for interacting with various language models (OpenAI, Anthropic, Gemini, Ollama, etc.). It supports structured outputs via Pydantic models, plain text responses, and asynchronous calls.
+
+ The structured reasoning in `chatterer` is inspired by the [Atom-of-Thought](https://github.com/qixucen/atom) pipeline.
+
+ ---
+
+ ## Quick Install
+
+ ```bash
+ pip install chatterer
+ ```
+
+ ---
+
+ ## Quickstart Example
+
+ Generate text quickly using OpenAI:
+
+ ```python
+ from chatterer import Chatterer
+
+ chat = Chatterer.openai("gpt-4o-mini")
+ response = chat.generate("What is the meaning of life?")
+ print(response)
+ ```
+
+ Messages can be input as plain strings or structured lists:
+
+ ```python
+ response = chat.generate([{ "role": "user", "content": "What's 2+2?" }])
+ print(response)
+ ```
+
+ ### Structured Output with Pydantic
+
+ ```python
+ from pydantic import BaseModel
+
+ class AnswerModel(BaseModel):
+     question: str
+     answer: str
+
+ response = chat.generate_pydantic(AnswerModel, "What's the capital of France?")
+ print(response.question, response.answer)
+ ```
+
+ ### Async Example
+
+ ```python
+ import asyncio
+
+ async def main():
+     response = await chat.agenerate("Explain async in Python briefly.")
+     print(response)
+
+ asyncio.run(main())
+ ```
+
+ ---
+
+ ## Atom-of-Thought Pipeline (AoT)
+
+ `AoTPipeline` provides structured reasoning by:
+
+ - Detecting question domains (general, math, coding, philosophy, multihop).
+ - Decomposing questions recursively.
+ - Generating direct, decomposition-based, and simplified answers.
+ - Combining answers via ensemble.
+
+ ### AoT Usage Example
+
+ ```python
+ from chatterer import Chatterer
+ from chatterer.strategies import AoTStrategy, AoTPipeline
+
+ pipeline = AoTPipeline(chatterer=Chatterer.openai(), max_depth=2)
+ strategy = AoTStrategy(pipeline=pipeline)
+
+ question = "What would Newton discover if hit by an apple falling from 100 meters?"
+ answer = strategy.invoke(question)
+ print(answer)
+ ```
+
+ ---
+
+ ## Supported Models
+
+ - **OpenAI**
+ - **Anthropic**
+ - **Google Gemini**
+ - **Ollama** (local models)
+
+ Initialize models easily:
+
+ ```python
+ openai_chat = Chatterer.openai("gpt-4o-mini")
+ anthropic_chat = Chatterer.anthropic("claude-3-7-sonnet-20250219")
+ gemini_chat = Chatterer.google("gemini-2.0-flash")
+ ollama_chat = Chatterer.ollama("deepseek-r1:1.5b")
+ ```
+
+ ---
+
+ ## Advanced Features
+
+ - **Streaming responses**
+ - **Async/Await support**
+ - **Structured outputs with Pydantic models**
+
+ ---
+
+ ## Logging
+
+ Built-in logging for easy debugging:
+
+ ```python
+ import logging
+ logging.basicConfig(level=logging.DEBUG)
+ ```
+
+ ---
+
+ ## Contributing
+
+ Feel free to open an issue or pull request.
+
+ ---
+
+ ## License
+
+ MIT License
+
chatterer-0.1.7/chatterer/__init__.py
@@ -0,0 +1,39 @@
+ from .language_model import Chatterer
+ from .messages import (
+     AIMessage,
+     BaseMessage,
+     HumanMessage,
+     SystemMessage,
+ )
+ from .strategies import (
+     AoTPipeline,
+     AoTPrompter,
+     AoTStrategy,
+     BaseStrategy,
+ )
+ from .tools import (
+     anything_to_markdown,
+     citation_chunker,
+     get_default_html_to_markdown_options,
+     html_to_markdown,
+     pdf_to_text,
+     pyscripts_to_snippets,
+ )
+
+ __all__ = [
+     "BaseStrategy",
+     "Chatterer",
+     "AoTStrategy",
+     "AoTPipeline",
+     "AoTPrompter",
+     "html_to_markdown",
+     "anything_to_markdown",
+     "pdf_to_text",
+     "get_default_html_to_markdown_options",
+     "pyscripts_to_snippets",
+     "citation_chunker",
+     "BaseMessage",
+     "HumanMessage",
+     "SystemMessage",
+     "AIMessage",
+ ]
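The new top-level exports keep structured output central to the API. As a rough, stdlib-only sketch of the idea behind `generate_pydantic` (the real implementation validates against a Pydantic model via instructor; `parse_structured` and the dataclass here are hypothetical stand-ins, and the LLM call is replaced by a literal JSON string):

```python
import json
from dataclasses import dataclass

@dataclass
class AnswerModel:
    question: str
    answer: str

def parse_structured(raw: str) -> AnswerModel:
    # generate_pydantic prompts the model for JSON matching the schema and
    # validates the result; here we only decode and construct the object.
    data = json.loads(raw)
    return AnswerModel(question=data["question"], answer=data["answer"])

# Stand-in for a model response that conforms to the schema.
result = parse_structured('{"question": "Capital of France?", "answer": "Paris"}')
print(result.answer)  # Paris
```

The payoff is the same as in the README's Pydantic section: callers get typed attributes (`result.question`, `result.answer`) instead of free-form text.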