haiku.rag 0.10.2__py3-none-any.whl → 0.14.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (56)
  1. README.md +205 -0
  2. {haiku_rag-0.10.2.dist-info → haiku_rag-0.14.0.dist-info}/METADATA +100 -41
  3. haiku_rag-0.14.0.dist-info/RECORD +6 -0
  4. haiku/rag/__init__.py +0 -0
  5. haiku/rag/app.py +0 -437
  6. haiku/rag/chunker.py +0 -51
  7. haiku/rag/cli.py +0 -466
  8. haiku/rag/client.py +0 -605
  9. haiku/rag/config.py +0 -81
  10. haiku/rag/embeddings/__init__.py +0 -35
  11. haiku/rag/embeddings/base.py +0 -15
  12. haiku/rag/embeddings/ollama.py +0 -17
  13. haiku/rag/embeddings/openai.py +0 -16
  14. haiku/rag/embeddings/vllm.py +0 -19
  15. haiku/rag/embeddings/voyageai.py +0 -17
  16. haiku/rag/logging.py +0 -56
  17. haiku/rag/mcp.py +0 -156
  18. haiku/rag/migration.py +0 -316
  19. haiku/rag/monitor.py +0 -73
  20. haiku/rag/qa/__init__.py +0 -15
  21. haiku/rag/qa/agent.py +0 -91
  22. haiku/rag/qa/prompts.py +0 -60
  23. haiku/rag/reader.py +0 -115
  24. haiku/rag/reranking/__init__.py +0 -34
  25. haiku/rag/reranking/base.py +0 -13
  26. haiku/rag/reranking/cohere.py +0 -34
  27. haiku/rag/reranking/mxbai.py +0 -28
  28. haiku/rag/reranking/vllm.py +0 -44
  29. haiku/rag/research/__init__.py +0 -20
  30. haiku/rag/research/common.py +0 -53
  31. haiku/rag/research/dependencies.py +0 -47
  32. haiku/rag/research/graph.py +0 -29
  33. haiku/rag/research/models.py +0 -70
  34. haiku/rag/research/nodes/evaluate.py +0 -80
  35. haiku/rag/research/nodes/plan.py +0 -63
  36. haiku/rag/research/nodes/search.py +0 -93
  37. haiku/rag/research/nodes/synthesize.py +0 -51
  38. haiku/rag/research/prompts.py +0 -114
  39. haiku/rag/research/state.py +0 -25
  40. haiku/rag/store/__init__.py +0 -4
  41. haiku/rag/store/engine.py +0 -269
  42. haiku/rag/store/models/__init__.py +0 -4
  43. haiku/rag/store/models/chunk.py +0 -17
  44. haiku/rag/store/models/document.py +0 -17
  45. haiku/rag/store/repositories/__init__.py +0 -9
  46. haiku/rag/store/repositories/chunk.py +0 -424
  47. haiku/rag/store/repositories/document.py +0 -237
  48. haiku/rag/store/repositories/settings.py +0 -155
  49. haiku/rag/store/upgrades/__init__.py +0 -62
  50. haiku/rag/store/upgrades/v0_10_1.py +0 -64
  51. haiku/rag/store/upgrades/v0_9_3.py +0 -112
  52. haiku/rag/utils.py +0 -199
  53. haiku_rag-0.10.2.dist-info/RECORD +0 -54
  54. {haiku_rag-0.10.2.dist-info → haiku_rag-0.14.0.dist-info}/WHEEL +0 -0
  55. {haiku_rag-0.10.2.dist-info → haiku_rag-0.14.0.dist-info}/entry_points.txt +0 -0
  56. {haiku_rag-0.10.2.dist-info → haiku_rag-0.14.0.dist-info}/licenses/LICENSE +0 -0
README.md ADDED
@@ -0,0 +1,205 @@
+ # Haiku RAG
+
+ Retrieval-Augmented Generation (RAG) library built on LanceDB.
+
+ `haiku.rag` is a Retrieval-Augmented Generation (RAG) library built to work with LanceDB as a local vector database. It uses LanceDB for storing embeddings and performs semantic (vector) search as well as full-text search, combined through native hybrid search with Reciprocal Rank Fusion. Both open-source (Ollama) and commercial (OpenAI, VoyageAI) embedding providers are supported.
+
+ > **Note**: Configuration now uses YAML files instead of environment variables. If you're upgrading from an older version, run `haiku-rag init-config --from-env` to migrate your `.env` file to `haiku.rag.yaml`. See [Configuration](https://ggozad.github.io/haiku.rag/configuration/) for details.
+
+ ## Features
+
+ - **Local LanceDB**: No external servers required; also supports LanceDB Cloud, S3, Google Cloud, and Azure storage
+ - **Multiple embedding providers**: Ollama, VoyageAI, OpenAI, vLLM
+ - **Multiple QA providers**: Any provider/model supported by Pydantic AI
+ - **Research graph (multi‑agent)**: Plan → Search → Evaluate → Synthesize with agentic AI
+ - **Native hybrid search**: Vector + full-text search with native LanceDB RRF reranking
+ - **Reranking**: Default search result reranking with MixedBread AI, Cohere, Zero Entropy, or vLLM
+ - **Question answering**: Built-in QA agents on your documents
+ - **File monitoring**: Auto-index files when run as server
+ - **40+ file formats**: PDF, DOCX, HTML, Markdown, code files, URLs
+ - **MCP server**: Expose as tools for AI assistants
+ - **A2A agent**: Conversational agent with context and multi-turn dialogue
+ - **CLI & Python API**: Use from command line or Python
+
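The native hybrid search mentioned above merges the vector ranking and the full-text ranking with Reciprocal Rank Fusion (RRF). As an illustrative sketch of the fusion rule in general (not haiku.rag's or LanceDB's actual implementation; `k = 60` is the conventional constant from the RRF literature, and the document ids are made up):

```python
def rrf_scores(rankings, k=60):
    """Fuse several ranked lists of document ids into one RRF score per id.

    Each document earns 1 / (k + rank) from every list it appears in, so
    documents ranked well by multiple searches accumulate the highest scores.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)


vector_hits = ["doc_a", "doc_b", "doc_c"]  # ranked by embedding similarity
fts_hits = ["doc_b", "doc_d", "doc_a"]     # ranked by full-text relevance
fused = rrf_scores([vector_hits, fts_hits])
print(fused[0][0])  # -> doc_b
```

A document near the top of both lists (`doc_b`) beats one that is first in only a single list (`doc_a`); that robustness to disagreement between the two rankings is what makes RRF a common fusion choice.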
+ ## Installation
+
+ **Python 3.12 or newer required**
+
+ ### Full Package (Recommended)
+
+ ```bash
+ uv pip install haiku.rag
+ ```
+
+ Includes all features: document processing, all embedding providers, rerankers, and A2A agent support.
+
+ ### Slim Package (Minimal Dependencies)
+
+ ```bash
+ uv pip install haiku.rag-slim
+ ```
+
+ Install only the extras you need. See the [Installation](https://ggozad.github.io/haiku.rag/installation/) documentation for available options.
+
+ ## Quick Start
+
+ ```bash
+ # Add documents
+ haiku-rag add "Your content here"
+ haiku-rag add "Your content here" --meta author=alice --meta topic=notes
+ haiku-rag add-src document.pdf --meta source=manual
+
+ # Search
+ haiku-rag search "query"
+
+ # Search with filters
+ haiku-rag search "query" --filter "uri LIKE '%.pdf' AND title LIKE '%paper%'"
+
+ # Ask questions
+ haiku-rag ask "Who is the author of haiku.rag?"
+
+ # Ask questions with citations
+ haiku-rag ask "Who is the author of haiku.rag?" --cite
+
+ # Deep QA (multi-agent question decomposition)
+ haiku-rag ask "Who is the author of haiku.rag?" --deep --cite
+
+ # Deep QA with verbose output
+ haiku-rag ask "Who is the author of haiku.rag?" --deep --verbose
+
+ # Multi‑agent research (iterative plan/search/evaluate)
+ haiku-rag research \
+     "What are the main drivers and trends of global temperature anomalies since 1990?" \
+     --max-iterations 2 \
+     --confidence-threshold 0.8 \
+     --max-concurrency 3 \
+     --verbose
+
+ # Rebuild database (re-chunk and re-embed all documents)
+ haiku-rag rebuild
+
+ # Start server with file monitoring
+ haiku-rag serve --monitor
+ ```
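The `--filter` option shown in the Quick Start takes a SQL-style predicate over document fields such as `uri` and `title`. As a rough illustration of the `LIKE` semantics such predicates rely on, here is a hypothetical table queried with Python's stdlib `sqlite3` (haiku.rag itself evaluates filters inside LanceDB, not SQLite; the rows are invented):

```python
import sqlite3

# Hypothetical rows standing in for indexed documents.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (uri TEXT, title TEXT)")
conn.executemany(
    "INSERT INTO docs VALUES (?, ?)",
    [
        ("manual.pdf", "user paper guide"),
        ("notes.md", "meeting notes"),
        ("study.pdf", "research paper"),
    ],
)

# '%' matches any run of characters, so this keeps PDFs whose title
# contains the word 'paper' anywhere.
rows = conn.execute(
    "SELECT uri FROM docs WHERE uri LIKE '%.pdf' AND title LIKE '%paper%'"
).fetchall()
print([uri for (uri,) in rows])  # the two .pdf documents mentioning 'paper'
```

The same pattern composes with `AND`/`OR` for more selective filters, as in the CLI example above.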
+
+ To customize settings, create a `haiku.rag.yaml` config file (see [Configuration](https://ggozad.github.io/haiku.rag/configuration/)).
+
+ ## Python Usage
+
+ ```python
+ from haiku.rag.client import HaikuRAG
+ from haiku.rag.research import (
+     PlanNode,
+     ResearchContext,
+     ResearchDeps,
+     ResearchState,
+     build_research_graph,
+     stream_research_graph,
+ )
+
+ async with HaikuRAG("database.lancedb") as client:
+     # Add document
+     doc = await client.create_document("Your content")
+
+     # Search (reranking enabled by default)
+     results = await client.search("query")
+     for chunk, score in results:
+         print(f"{score:.3f}: {chunk.content}")
+
+     # Ask questions
+     answer = await client.ask("Who is the author of haiku.rag?")
+     print(answer)
+
+     # Ask questions with citations
+     answer = await client.ask("Who is the author of haiku.rag?", cite=True)
+     print(answer)
+
+     # Multi‑agent research pipeline (Plan → Search → Evaluate → Synthesize)
+     graph = build_research_graph()
+     question = (
+         "What are the main drivers and trends of global temperature "
+         "anomalies since 1990?"
+     )
+     state = ResearchState(
+         context=ResearchContext(original_question=question),
+         max_iterations=2,
+         confidence_threshold=0.8,
+         max_concurrency=2,
+     )
+     deps = ResearchDeps(client=client)
+
+     # Blocking run (final result only)
+     result = await graph.run(
+         PlanNode(provider="openai", model="gpt-4o-mini"),
+         state=state,
+         deps=deps,
+     )
+     print(result.output.title)
+
+     # Streaming progress (log/report/error events)
+     async for event in stream_research_graph(
+         graph,
+         PlanNode(provider="openai", model="gpt-4o-mini"),
+         state,
+         deps,
+     ):
+         if event.type == "log":
+             iteration = event.state.iterations if event.state else state.iterations
+             print(f"[{iteration}] {event.message}")
+         elif event.type == "report":
+             print("\nResearch complete!\n")
+             print(event.report.title)
+             print(event.report.executive_summary)
+ ```
+
+ ## MCP Server
+
+ Use with AI assistants like Claude Desktop:
+
+ ```bash
+ haiku-rag serve --stdio
+ ```
+
+ Provides tools for document management and search directly in your AI assistant.
+
+ ## A2A Agent
+
+ Run as a conversational agent with the Agent-to-Agent protocol:
+
+ ```bash
+ # Start the A2A server
+ haiku-rag serve --a2a
+
+ # Connect with the interactive client (in another terminal)
+ haiku-rag a2aclient
+ ```
+
+ The A2A agent provides:
+
+ - Multi-turn dialogue with context
+ - Intelligent multi-search for complex questions
+ - Source citations with titles and URIs
+ - Full document retrieval on request
+
+ ## Examples
+
+ See the [examples directory](examples/) for working examples:
+
+ - **[Interactive Research Assistant](examples/ag-ui-research/)** - Full-stack research assistant with Pydantic AI and AG-UI featuring human-in-the-loop approval and real-time state synchronization
+ - **[Docker Setup](examples/docker/)** - Complete Docker deployment with file monitoring, MCP server, and A2A agent
+ - **[A2A Security](examples/a2a-security/)** - Authentication examples (API key, OAuth2, GitHub)
+
+ ## Documentation
+
+ Full documentation at: https://ggozad.github.io/haiku.rag/
+
+ - [Installation](https://ggozad.github.io/haiku.rag/installation/) - Provider setup
+ - [Configuration](https://ggozad.github.io/haiku.rag/configuration/) - YAML configuration
+ - [CLI](https://ggozad.github.io/haiku.rag/cli/) - Command reference
+ - [Python API](https://ggozad.github.io/haiku.rag/python/) - Complete API docs
+ - [Agents](https://ggozad.github.io/haiku.rag/agents/) - QA agent and multi-agent research
+ - [MCP Server](https://ggozad.github.io/haiku.rag/mcp/) - Model Context Protocol integration
+ - [A2A Agent](https://ggozad.github.io/haiku.rag/a2a/) - Agent-to-Agent protocol support
+ - [Benchmarks](https://ggozad.github.io/haiku.rag/benchmarks/) - Performance Benchmarks
+
+ mcp-name: io.github.ggozad/haiku-rag
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: haiku.rag
- Version: 0.10.2
+ Version: 0.14.0
  Summary: Agentic Retrieval Augmented Generation (RAG) with LanceDB
  Author-email: Yiorgis Gozadinos <ggozadinos@gmail.com>
  License: MIT
@@ -13,27 +13,11 @@ Classifier: Operating System :: MacOS
  Classifier: Operating System :: Microsoft :: Windows :: Windows 10
  Classifier: Operating System :: Microsoft :: Windows :: Windows 11
  Classifier: Operating System :: POSIX :: Linux
- Classifier: Programming Language :: Python :: 3.10
- Classifier: Programming Language :: Python :: 3.11
  Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
  Classifier: Typing :: Typed
  Requires-Python: >=3.12
- Requires-Dist: docling>=2.52.0
- Requires-Dist: fastmcp>=2.12.3
- Requires-Dist: httpx>=0.28.1
- Requires-Dist: lancedb>=0.25.0
- Requires-Dist: pydantic-ai>=1.0.8
- Requires-Dist: pydantic-graph>=1.0.8
- Requires-Dist: pydantic>=2.11.9
- Requires-Dist: python-dotenv>=1.1.1
- Requires-Dist: rich>=14.1.0
- Requires-Dist: tiktoken>=0.11.0
- Requires-Dist: typer>=0.16.1
- Requires-Dist: watchfiles>=1.1.0
- Provides-Extra: mxbai
- Requires-Dist: mxbai-rerank>=0.1.6; extra == 'mxbai'
- Provides-Extra: voyageai
- Requires-Dist: voyageai>=0.3.5; extra == 'voyageai'
+ Requires-Dist: haiku-rag-slim[a2a,cohere,docling,mxbai,voyageai,zeroentropy]
  Description-Content-Type: text/markdown

  # Haiku RAG
@@ -42,7 +26,7 @@ Retrieval-Augmented Generation (RAG) library built on LanceDB.

  `haiku.rag` is a Retrieval-Augmented Generation (RAG) library built to work with LanceDB as a local vector database. It uses LanceDB for storing embeddings and performs semantic (vector) search as well as full-text search combined through native hybrid search with Reciprocal Rank Fusion. Both open-source (Ollama) as well as commercial (OpenAI, VoyageAI) embedding providers are supported.

- > **Note**: Starting with version 0.7.0, haiku.rag uses LanceDB instead of SQLite. If you have an existing SQLite database, use `haiku-rag migrate old_database.sqlite` to migrate your data safely.
+ > **Note**: Configuration now uses YAML files instead of environment variables. If you're upgrading from an older version, run `haiku-rag init-config --from-env` to migrate your `.env` file to `haiku.rag.yaml`. See [Configuration](https://ggozad.github.io/haiku.rag/configuration/) for details.

  ## Features

@@ -51,19 +35,37 @@ Retrieval-Augmented Generation (RAG) library built on LanceDB.
  - **Multiple QA providers**: Any provider/model supported by Pydantic AI
  - **Research graph (multi‑agent)**: Plan → Search → Evaluate → Synthesize with agentic AI
  - **Native hybrid search**: Vector + full-text search with native LanceDB RRF reranking
- - **Reranking**: Default search result reranking with MixedBread AI, Cohere, or vLLM
+ - **Reranking**: Default search result reranking with MixedBread AI, Cohere, Zero Entropy, or vLLM
  - **Question answering**: Built-in QA agents on your documents
  - **File monitoring**: Auto-index files when run as server
  - **40+ file formats**: PDF, DOCX, HTML, Markdown, code files, URLs
  - **MCP server**: Expose as tools for AI assistants
+ - **A2A agent**: Conversational agent with context and multi-turn dialogue
  - **CLI & Python API**: Use from command line or Python

- ## Quick Start
+ ## Installation
+
+ **Python 3.12 or newer required**
+
+ ### Full Package (Recommended)

  ```bash
- # Install
  uv pip install haiku.rag
+ ```
+
+ Includes all features: document processing, all embedding providers, rerankers, and A2A agent support.
+
+ ### Slim Package (Minimal Dependencies)
+
+ ```bash
+ uv pip install haiku.rag-slim
+ ```

+ Install only the extras you need. See the [Installation](https://ggozad.github.io/haiku.rag/installation/) documentation for available options
+
+ ## Quick Start
+
+ ```bash
  # Add documents
  haiku-rag add "Your content here"
  haiku-rag add "Your content here" --meta author=alice --meta topic=notes
@@ -72,12 +74,21 @@ haiku-rag add-src document.pdf --meta source=manual
  # Search
  haiku-rag search "query"

+ # Search with filters
+ haiku-rag search "query" --filter "uri LIKE '%.pdf' AND title LIKE '%paper%'"
+
  # Ask questions
  haiku-rag ask "Who is the author of haiku.rag?"

  # Ask questions with citations
  haiku-rag ask "Who is the author of haiku.rag?" --cite

+ # Deep QA (multi-agent question decomposition)
+ haiku-rag ask "Who is the author of haiku.rag?" --deep --cite
+
+ # Deep QA with verbose output
+ haiku-rag ask "Who is the author of haiku.rag?" --deep --verbose
+
  # Multi‑agent research (iterative plan/search/evaluate)
  haiku-rag research \
      "What are the main drivers and trends of global temperature anomalies since 1990?" \
@@ -89,24 +100,23 @@ haiku-rag research \
  # Rebuild database (re-chunk and re-embed all documents)
  haiku-rag rebuild

- # Migrate from SQLite to LanceDB
- haiku-rag migrate old_database.sqlite
-
  # Start server with file monitoring
- export MONITOR_DIRECTORIES="/path/to/docs"
- haiku-rag serve
+ haiku-rag serve --monitor
  ```

+ To customize settings, create a `haiku.rag.yaml` config file (see [Configuration](https://ggozad.github.io/haiku.rag/configuration/)).
+
  ## Python Usage

  ```python
  from haiku.rag.client import HaikuRAG
  from haiku.rag.research import (
+     PlanNode,
      ResearchContext,
      ResearchDeps,
      ResearchState,
      build_research_graph,
-     PlanNode,
+     stream_research_graph,
  )

  async with HaikuRAG("database.lancedb") as client:
@@ -128,22 +138,40 @@ async with HaikuRAG("database.lancedb") as client:

      # Multi‑agent research pipeline (Plan → Search → Evaluate → Synthesize)
      graph = build_research_graph()
+     question = (
+         "What are the main drivers and trends of global temperature "
+         "anomalies since 1990?"
+     )
      state = ResearchState(
-         question=(
-             "What are the main drivers and trends of global temperature "
-             "anomalies since 1990?"
-         ),
-         context=ResearchContext(original_question="…"),
+         context=ResearchContext(original_question=question),
          max_iterations=2,
          confidence_threshold=0.8,
-         max_concurrency=3,
+         max_concurrency=2,
      )
      deps = ResearchDeps(client=client)
-     start = PlanNode(provider=None, model=None)
-     result = await graph.run(start, state=state, deps=deps)
-     report = result.output
-     print(report.title)
-     print(report.executive_summary)
+
+     # Blocking run (final result only)
+     result = await graph.run(
+         PlanNode(provider="openai", model="gpt-4o-mini"),
+         state=state,
+         deps=deps,
+     )
+     print(result.output.title)
+
+     # Streaming progress (log/report/error events)
+     async for event in stream_research_graph(
+         graph,
+         PlanNode(provider="openai", model="gpt-4o-mini"),
+         state,
+         deps,
+     ):
+         if event.type == "log":
+             iteration = event.state.iterations if event.state else state.iterations
+             print(f"[{iteration}] {event.message}")
+         elif event.type == "report":
+             print("\nResearch complete!\n")
+             print(event.report.title)
+             print(event.report.executive_summary)
  ```

  ## MCP Server
@@ -156,13 +184,44 @@ haiku-rag serve --stdio

  Provides tools for document management and search directly in your AI assistant.

+ ## A2A Agent
+
+ Run as a conversational agent with the Agent-to-Agent protocol:
+
+ ```bash
+ # Start the A2A server
+ haiku-rag serve --a2a
+
+ # Connect with the interactive client (in another terminal)
+ haiku-rag a2aclient
+ ```
+
+ The A2A agent provides:
+
+ - Multi-turn dialogue with context
+ - Intelligent multi-search for complex questions
+ - Source citations with titles and URIs
+ - Full document retrieval on request
+
+ ## Examples
+
+ See the [examples directory](examples/) for working examples:
+
+ - **[Interactive Research Assistant](examples/ag-ui-research/)** - Full-stack research assistant with Pydantic AI and AG-UI featuring human-in-the-loop approval and real-time state synchronization
+ - **[Docker Setup](examples/docker/)** - Complete Docker deployment with file monitoring, MCP server, and A2A agent
+ - **[A2A Security](examples/a2a-security/)** - Authentication examples (API key, OAuth2, GitHub)
+
  ## Documentation

  Full documentation at: https://ggozad.github.io/haiku.rag/

  - [Installation](https://ggozad.github.io/haiku.rag/installation/) - Provider setup
- - [Configuration](https://ggozad.github.io/haiku.rag/configuration/) - Environment variables
+ - [Configuration](https://ggozad.github.io/haiku.rag/configuration/) - YAML configuration
  - [CLI](https://ggozad.github.io/haiku.rag/cli/) - Command reference
  - [Python API](https://ggozad.github.io/haiku.rag/python/) - Complete API docs
  - [Agents](https://ggozad.github.io/haiku.rag/agents/) - QA agent and multi-agent research
+ - [MCP Server](https://ggozad.github.io/haiku.rag/mcp/) - Model Context Protocol integration
+ - [A2A Agent](https://ggozad.github.io/haiku.rag/a2a/) - Agent-to-Agent protocol support
  - [Benchmarks](https://ggozad.github.io/haiku.rag/benchmarks/) - Performance Benchmarks
+
+ mcp-name: io.github.ggozad/haiku-rag
@@ -0,0 +1,6 @@
+ README.md,sha256=N8nk6cs6JkWHAmVz3ci7lTiLr6Xq_UqifWGivZnuPJU,7216
+ haiku_rag-0.14.0.dist-info/METADATA,sha256=I3H0hBrGIgDwtIvFe9tnu2-PeLESLlhEHVrgmrKCnr4,8085
+ haiku_rag-0.14.0.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
+ haiku_rag-0.14.0.dist-info/entry_points.txt,sha256=G1U3nAkNd5YDYd4v0tuYFbriz0i-JheCsFuT9kIoGCI,48
+ haiku_rag-0.14.0.dist-info/licenses/LICENSE,sha256=eXZrWjSk9PwYFNK9yUczl3oPl95Z4V9UXH7bPN46iPo,1065
+ haiku_rag-0.14.0.dist-info/RECORD,,
haiku/rag/__init__.py DELETED
File without changes