haiku.rag 0.10.2__py3-none-any.whl → 0.19.3__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- README.md +172 -0
- {haiku_rag-0.10.2.dist-info → haiku_rag-0.19.3.dist-info}/METADATA +79 -51
- haiku_rag-0.19.3.dist-info/RECORD +6 -0
- {haiku_rag-0.10.2.dist-info → haiku_rag-0.19.3.dist-info}/WHEEL +1 -1
- haiku/rag/__init__.py +0 -0
- haiku/rag/app.py +0 -437
- haiku/rag/chunker.py +0 -51
- haiku/rag/cli.py +0 -466
- haiku/rag/client.py +0 -605
- haiku/rag/config.py +0 -81
- haiku/rag/embeddings/__init__.py +0 -35
- haiku/rag/embeddings/base.py +0 -15
- haiku/rag/embeddings/ollama.py +0 -17
- haiku/rag/embeddings/openai.py +0 -16
- haiku/rag/embeddings/vllm.py +0 -19
- haiku/rag/embeddings/voyageai.py +0 -17
- haiku/rag/logging.py +0 -56
- haiku/rag/mcp.py +0 -156
- haiku/rag/migration.py +0 -316
- haiku/rag/monitor.py +0 -73
- haiku/rag/qa/__init__.py +0 -15
- haiku/rag/qa/agent.py +0 -91
- haiku/rag/qa/prompts.py +0 -60
- haiku/rag/reader.py +0 -115
- haiku/rag/reranking/__init__.py +0 -34
- haiku/rag/reranking/base.py +0 -13
- haiku/rag/reranking/cohere.py +0 -34
- haiku/rag/reranking/mxbai.py +0 -28
- haiku/rag/reranking/vllm.py +0 -44
- haiku/rag/research/__init__.py +0 -20
- haiku/rag/research/common.py +0 -53
- haiku/rag/research/dependencies.py +0 -47
- haiku/rag/research/graph.py +0 -29
- haiku/rag/research/models.py +0 -70
- haiku/rag/research/nodes/evaluate.py +0 -80
- haiku/rag/research/nodes/plan.py +0 -63
- haiku/rag/research/nodes/search.py +0 -93
- haiku/rag/research/nodes/synthesize.py +0 -51
- haiku/rag/research/prompts.py +0 -114
- haiku/rag/research/state.py +0 -25
- haiku/rag/store/__init__.py +0 -4
- haiku/rag/store/engine.py +0 -269
- haiku/rag/store/models/__init__.py +0 -4
- haiku/rag/store/models/chunk.py +0 -17
- haiku/rag/store/models/document.py +0 -17
- haiku/rag/store/repositories/__init__.py +0 -9
- haiku/rag/store/repositories/chunk.py +0 -424
- haiku/rag/store/repositories/document.py +0 -237
- haiku/rag/store/repositories/settings.py +0 -155
- haiku/rag/store/upgrades/__init__.py +0 -62
- haiku/rag/store/upgrades/v0_10_1.py +0 -64
- haiku/rag/store/upgrades/v0_9_3.py +0 -112
- haiku/rag/utils.py +0 -199
- haiku_rag-0.10.2.dist-info/RECORD +0 -54
- {haiku_rag-0.10.2.dist-info → haiku_rag-0.19.3.dist-info}/entry_points.txt +0 -0
- {haiku_rag-0.10.2.dist-info → haiku_rag-0.19.3.dist-info}/licenses/LICENSE +0 -0
README.md (ADDED, +172 lines)

# Haiku RAG

Retrieval-Augmented Generation (RAG) library built on LanceDB.

`haiku.rag` is a Retrieval-Augmented Generation (RAG) library built to work with LanceDB as a local vector database. It uses LanceDB for storing embeddings and performs semantic (vector) search as well as full-text search combined through native hybrid search with Reciprocal Rank Fusion. Both open-source (Ollama) as well as commercial (OpenAI, VoyageAI) embedding providers are supported.
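As an aside, the Reciprocal Rank Fusion mentioned above can be sketched in a few lines. This is an illustrative sketch of the general RRF algorithm with the commonly used constant `k = 60`; it is not code from haiku.rag or LanceDB:

```python
# Illustrative sketch of Reciprocal Rank Fusion (RRF): each result list
# contributes 1 / (k + rank) to a document's fused score, so documents
# ranked well by several searches rise to the top.
def rrf_merge(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked result lists into one, best first."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc-a", "doc-b", "doc-c"]    # ranked by embedding similarity
fulltext_hits = ["doc-b", "doc-d", "doc-a"]  # ranked by full-text relevance
print(rrf_merge([vector_hits, fulltext_hits]))
# ['doc-b', 'doc-a', 'doc-d', 'doc-c']: doc-b ranks well in both lists
```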
## Features

- **Local LanceDB**: No external servers required, supports also LanceDB cloud storage, S3, Google Cloud & Azure
- **Multiple embedding providers**: Ollama, LM Studio, VoyageAI, OpenAI, vLLM
- **Multiple QA providers**: Any provider/model supported by Pydantic AI (Ollama, LM Studio, OpenAI, Anthropic, etc.)
- **Native hybrid search**: Vector + full-text search with native LanceDB RRF reranking
- **Reranking**: Default search result reranking with MixedBread AI, Cohere, Zero Entropy, or vLLM
- **Question answering**: Built-in QA agents on your documents
- **Research graph (multi-agent)**: Plan → Search → Evaluate → Synthesize with agentic AI
- **File monitoring**: Auto-index files when run as server
- **CLI & Python API**: Use from command line or Python
- **MCP server**: Expose as tools for AI assistants
- **Flexible document processing**: Local (docling) or remote (docling-serve) processing
## Installation

**Python 3.12 or newer required**

### Full Package (Recommended)

```bash
uv pip install haiku.rag
```

Includes all features: document processing, all embedding providers, and rerankers.

### Slim Package (Minimal Dependencies)

```bash
uv pip install haiku.rag-slim
```

Install only the extras you need. See the [Installation](https://ggozad.github.io/haiku.rag/installation/) documentation for available options
## Quick Start

```bash
# Add documents
haiku-rag add "Your content here"
haiku-rag add "Your content here" --meta author=alice --meta topic=notes
haiku-rag add-src document.pdf --meta source=manual

# Search
haiku-rag search "query"

# Search with filters
haiku-rag search "query" --filter "uri LIKE '%.pdf' AND title LIKE '%paper%'"

# Ask questions
haiku-rag ask "Who is the author of haiku.rag?"

# Ask questions with citations
haiku-rag ask "Who is the author of haiku.rag?" --cite

# Deep QA (multi-agent question decomposition)
haiku-rag ask "Who is the author of haiku.rag?" --deep --cite

# Deep QA with verbose output
haiku-rag ask "Who is the author of haiku.rag?" --deep --verbose

# Multi-agent research (iterative plan/search/evaluate)
haiku-rag research \
  "What are the main drivers and trends of global temperature anomalies since 1990?" \
  --max-iterations 2 \
  --confidence-threshold 0.8 \
  --max-concurrency 3 \
  --verbose

# Rebuild database (re-chunk and re-embed all documents)
haiku-rag rebuild

# Start server with file monitoring
haiku-rag serve --monitor
```

To customize settings, create a `haiku.rag.yaml` config file (see [Configuration](https://ggozad.github.io/haiku.rag/configuration/)).
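The `--filter` example above passes a SQL-style predicate string. For illustration, here is a small, hypothetical helper for composing such predicates in Python; the helper name and approach are ours, not part of haiku.rag:

```python
# Hypothetical helper for building SQL-style filter predicates like the one
# used with `haiku-rag search --filter` above.
def like_filter(column: str, pattern: str) -> str:
    escaped = pattern.replace("'", "''")  # double up single quotes for SQL
    return f"{column} LIKE '{escaped}'"

predicate = " AND ".join(
    [like_filter("uri", "%.pdf"), like_filter("title", "%paper%")]
)
print(predicate)  # uri LIKE '%.pdf' AND title LIKE '%paper%'
```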
## Python Usage

```python
from haiku.rag.client import HaikuRAG
from haiku.rag.config import Config
from haiku.rag.graph.agui import stream_graph
from haiku.rag.graph.research import (
    ResearchContext,
    ResearchDeps,
    ResearchState,
    build_research_graph,
)

async with HaikuRAG("database.lancedb") as client:
    # Add document
    doc = await client.create_document("Your content")

    # Search (reranking enabled by default)
    results = await client.search("query")
    for chunk, score in results:
        print(f"{score:.3f}: {chunk.content}")

    # Ask questions
    answer = await client.ask("Who is the author of haiku.rag?")
    print(answer)

    # Ask questions with citations
    answer = await client.ask("Who is the author of haiku.rag?", cite=True)
    print(answer)

    # Multi-agent research pipeline (Plan → Search → Evaluate → Synthesize)
    # Graph settings (provider, model, max_iterations, etc.) come from config
    graph = build_research_graph(config=Config)
    question = (
        "What are the main drivers and trends of global temperature "
        "anomalies since 1990?"
    )
    context = ResearchContext(original_question=question)
    state = ResearchState.from_config(context=context, config=Config)
    deps = ResearchDeps(client=client)

    # Blocking run (final result only)
    report = await graph.run(state=state, deps=deps)
    print(report.title)

    # Streaming progress (AG-UI events)
    async for event in stream_graph(graph, state, deps):
        if event["type"] == "STEP_STARTED":
            print(f"Starting step: {event['stepName']}")
        elif event["type"] == "ACTIVITY_SNAPSHOT":
            print(f"  {event['content']}")
        elif event["type"] == "RUN_FINISHED":
            print("\nResearch complete!\n")
            result = event["result"]
            print(result["title"])
            print(result["executive_summary"])
```
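Conceptually, the research graph's stopping behavior combines the iteration budget and a confidence threshold (the `max_iterations` and `--confidence-threshold` settings seen above). A toy sketch of that control loop, our illustration of the idea rather than haiku.rag's implementation:

```python
# Toy sketch of the research graph's stopping rule: iterate
# plan/search/evaluate until confidence reaches the threshold or the
# iteration budget is exhausted, then synthesize.
from typing import Callable

def run_research(
    evaluate: Callable[[int], float],
    max_iterations: int = 2,
    confidence_threshold: float = 0.8,
) -> tuple[int, float]:
    confidence = 0.0
    iterations = 0
    while iterations < max_iterations and confidence < confidence_threshold:
        iterations += 1
        # a real run would plan sub-questions and search here
        confidence = evaluate(iterations)  # evaluator scores the evidence
    # synthesis would produce the final report from the gathered evidence
    return iterations, confidence

# An evaluator whose confidence grows by 0.5 per round crosses the
# 0.8 threshold on the second iteration:
print(run_research(lambda i: 0.5 * i))  # (2, 1.0)
```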
## MCP Server

Use with AI assistants like Claude Desktop:

```bash
haiku-rag serve --stdio
```

Provides tools for document management and search directly in your AI assistant.
## Examples

See the [examples directory](examples/) for working examples:

- **[Interactive Research Assistant](examples/ag-ui-research/)** - Full-stack research assistant with Pydantic AI and AG-UI featuring human-in-the-loop approval and real-time state synchronization
- **[Docker Setup](examples/docker/)** - Complete Docker deployment with file monitoring and MCP server
- **[A2A Server](examples/a2a-server/)** - Self-contained A2A protocol server package with conversational agent interface
## Documentation

Full documentation at: https://ggozad.github.io/haiku.rag/

- [Installation](https://ggozad.github.io/haiku.rag/installation/) - Provider setup
- [Configuration](https://ggozad.github.io/haiku.rag/configuration/) - YAML configuration
- [CLI](https://ggozad.github.io/haiku.rag/cli/) - Command reference
- [Python API](https://ggozad.github.io/haiku.rag/python/) - Complete API docs
- [Agents](https://ggozad.github.io/haiku.rag/agents/) - QA agent and multi-agent research
- [MCP Server](https://ggozad.github.io/haiku.rag/mcp/) - Model Context Protocol integration
- [Benchmarks](https://ggozad.github.io/haiku.rag/benchmarks/) - Performance Benchmarks

mcp-name: io.github.ggozad/haiku-rag
{haiku_rag-0.10.2.dist-info → haiku_rag-0.19.3.dist-info}/METADATA (+79 -51)

```diff
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: haiku.rag
-Version: 0.10.2
+Version: 0.19.3
 Summary: Agentic Retrieval Augmented Generation (RAG) with LanceDB
 Author-email: Yiorgis Gozadinos <ggozadinos@gmail.com>
 License: MIT
@@ -13,27 +13,13 @@ Classifier: Operating System :: MacOS
 Classifier: Operating System :: Microsoft :: Windows :: Windows 10
 Classifier: Operating System :: Microsoft :: Windows :: Windows 11
 Classifier: Operating System :: POSIX :: Linux
-Classifier: Programming Language :: Python :: 3.10
-Classifier: Programming Language :: Python :: 3.11
 Classifier: Programming Language :: Python :: 3.12
+Classifier: Programming Language :: Python :: 3.13
 Classifier: Typing :: Typed
 Requires-Python: >=3.12
-Requires-Dist: docling
-
-Requires-Dist:
-Requires-Dist: lancedb>=0.25.0
-Requires-Dist: pydantic-ai>=1.0.8
-Requires-Dist: pydantic-graph>=1.0.8
-Requires-Dist: pydantic>=2.11.9
-Requires-Dist: python-dotenv>=1.1.1
-Requires-Dist: rich>=14.1.0
-Requires-Dist: tiktoken>=0.11.0
-Requires-Dist: typer>=0.16.1
-Requires-Dist: watchfiles>=1.1.0
-Provides-Extra: mxbai
-Requires-Dist: mxbai-rerank>=0.1.6; extra == 'mxbai'
-Provides-Extra: voyageai
-Requires-Dist: voyageai>=0.3.5; extra == 'voyageai'
+Requires-Dist: haiku-rag-slim[cohere,docling,inspector,mxbai,voyageai,zeroentropy]==0.19.3
+Provides-Extra: inspector
+Requires-Dist: textual>=1.0.0; extra == 'inspector'
 Description-Content-Type: text/markdown
 
 # Haiku RAG
```
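The dependency change above is the whole full/slim story: `haiku.rag` is now a thin wrapper that pins `haiku-rag-slim` with every extra enabled, at exactly the same version. Taking the pinned requirement string apart with plain string handling makes that explicit:

```python
# Split a requirement of the form name[extras]==version into its parts.
# The string is copied from the 0.19.3 METADATA shown above.
req = "haiku-rag-slim[cohere,docling,inspector,mxbai,voyageai,zeroentropy]==0.19.3"

name, _, rest = req.partition("[")
extras_str, _, version = rest.partition("]==")
extras = extras_str.split(",")

print(name)     # haiku-rag-slim
print(version)  # 0.19.3
print(extras)   # ['cohere', 'docling', 'inspector', 'mxbai', 'voyageai', 'zeroentropy']
```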
The remaining hunks update the README that is embedded in the package description:

````diff
@@ -42,28 +28,43 @@ Retrieval-Augmented Generation (RAG) library built on LanceDB.
 
 `haiku.rag` is a Retrieval-Augmented Generation (RAG) library built to work with LanceDB as a local vector database. It uses LanceDB for storing embeddings and performs semantic (vector) search as well as full-text search combined through native hybrid search with Reciprocal Rank Fusion. Both open-source (Ollama) as well as commercial (OpenAI, VoyageAI) embedding providers are supported.
 
-> **Note**: Starting with version 0.7.0, haiku.rag uses LanceDB instead of SQLite. If you have an existing SQLite database, use `haiku-rag migrate old_database.sqlite` to migrate your data safely.
-
 ## Features
 
 - **Local LanceDB**: No external servers required, supports also LanceDB cloud storage, S3, Google Cloud & Azure
-- **Multiple embedding providers**: Ollama, VoyageAI, OpenAI, vLLM
-- **Multiple QA providers**: Any provider/model supported by Pydantic AI
-- **Research graph (multi-agent)**: Plan → Search → Evaluate → Synthesize with agentic AI
+- **Multiple embedding providers**: Ollama, LM Studio, VoyageAI, OpenAI, vLLM
+- **Multiple QA providers**: Any provider/model supported by Pydantic AI (Ollama, LM Studio, OpenAI, Anthropic, etc.)
 - **Native hybrid search**: Vector + full-text search with native LanceDB RRF reranking
-- **Reranking**: Default search result reranking with MixedBread AI, Cohere, or vLLM
+- **Reranking**: Default search result reranking with MixedBread AI, Cohere, Zero Entropy, or vLLM
 - **Question answering**: Built-in QA agents on your documents
+- **Research graph (multi-agent)**: Plan → Search → Evaluate → Synthesize with agentic AI
 - **File monitoring**: Auto-index files when run as server
-- **40+ file formats**: PDF, DOCX, HTML, Markdown, code files, URLs
-- **MCP server**: Expose as tools for AI assistants
 - **CLI & Python API**: Use from command line or Python
+- **MCP server**: Expose as tools for AI assistants
+- **Flexible document processing**: Local (docling) or remote (docling-serve) processing
 
-##
+## Installation
+
+**Python 3.12 or newer required**
+
+### Full Package (Recommended)
 
 ```bash
-# Install
 uv pip install haiku.rag
+```
+
+Includes all features: document processing, all embedding providers, and rerankers.
+
+### Slim Package (Minimal Dependencies)
+
+```bash
+uv pip install haiku.rag-slim
+```
+
+Install only the extras you need. See the [Installation](https://ggozad.github.io/haiku.rag/installation/) documentation for available options
 
+## Quick Start
+
+```bash
 # Add documents
 haiku-rag add "Your content here"
 haiku-rag add "Your content here" --meta author=alice --meta topic=notes
@@ -72,12 +73,21 @@ haiku-rag add-src document.pdf --meta source=manual
 # Search
 haiku-rag search "query"
 
+# Search with filters
+haiku-rag search "query" --filter "uri LIKE '%.pdf' AND title LIKE '%paper%'"
+
 # Ask questions
 haiku-rag ask "Who is the author of haiku.rag?"
 
 # Ask questions with citations
 haiku-rag ask "Who is the author of haiku.rag?" --cite
 
+# Deep QA (multi-agent question decomposition)
+haiku-rag ask "Who is the author of haiku.rag?" --deep --cite
+
+# Deep QA with verbose output
+haiku-rag ask "Who is the author of haiku.rag?" --deep --verbose
+
 # Multi-agent research (iterative plan/search/evaluate)
 haiku-rag research \
   "What are the main drivers and trends of global temperature anomalies since 1990?" \
@@ -89,24 +99,23 @@ haiku-rag research \
 # Rebuild database (re-chunk and re-embed all documents)
 haiku-rag rebuild
 
-# Migrate from SQLite to LanceDB
-haiku-rag migrate old_database.sqlite
-
 # Start server with file monitoring
-
-haiku-rag serve
+haiku-rag serve --monitor
 ```
 
+To customize settings, create a `haiku.rag.yaml` config file (see [Configuration](https://ggozad.github.io/haiku.rag/configuration/)).
+
 ## Python Usage
 
 ```python
 from haiku.rag.client import HaikuRAG
-from haiku.rag.
+from haiku.rag.config import Config
+from haiku.rag.graph.agui import stream_graph
+from haiku.rag.graph.research import (
     ResearchContext,
     ResearchDeps,
     ResearchState,
     build_research_graph,
-    PlanNode,
 )
 
 async with HaikuRAG("database.lancedb") as client:
@@ -127,23 +136,31 @@ async with HaikuRAG("database.lancedb") as client:
     print(answer)
 
     # Multi-agent research pipeline (Plan → Search → Evaluate → Synthesize)
-
-
-
-
-
-        ),
-        context=ResearchContext(original_question="…"),
-        max_iterations=2,
-        confidence_threshold=0.8,
-        max_concurrency=3,
+    # Graph settings (provider, model, max_iterations, etc.) come from config
+    graph = build_research_graph(config=Config)
+    question = (
+        "What are the main drivers and trends of global temperature "
+        "anomalies since 1990?"
     )
+    context = ResearchContext(original_question=question)
+    state = ResearchState.from_config(context=context, config=Config)
     deps = ResearchDeps(client=client)
-
-
-    report =
+
+    # Blocking run (final result only)
+    report = await graph.run(state=state, deps=deps)
     print(report.title)
-
+
+    # Streaming progress (AG-UI events)
+    async for event in stream_graph(graph, state, deps):
+        if event["type"] == "STEP_STARTED":
+            print(f"Starting step: {event['stepName']}")
+        elif event["type"] == "ACTIVITY_SNAPSHOT":
+            print(f"  {event['content']}")
+        elif event["type"] == "RUN_FINISHED":
+            print("\nResearch complete!\n")
+            result = event["result"]
+            print(result["title"])
+            print(result["executive_summary"])
 ```
 
 ## MCP Server
@@ -156,13 +173,24 @@ haiku-rag serve --stdio
 
 Provides tools for document management and search directly in your AI assistant.
 
+## Examples
+
+See the [examples directory](examples/) for working examples:
+
+- **[Interactive Research Assistant](examples/ag-ui-research/)** - Full-stack research assistant with Pydantic AI and AG-UI featuring human-in-the-loop approval and real-time state synchronization
+- **[Docker Setup](examples/docker/)** - Complete Docker deployment with file monitoring and MCP server
+- **[A2A Server](examples/a2a-server/)** - Self-contained A2A protocol server package with conversational agent interface
+
 ## Documentation
 
 Full documentation at: https://ggozad.github.io/haiku.rag/
 
 - [Installation](https://ggozad.github.io/haiku.rag/installation/) - Provider setup
-- [Configuration](https://ggozad.github.io/haiku.rag/configuration/) -
+- [Configuration](https://ggozad.github.io/haiku.rag/configuration/) - YAML configuration
 - [CLI](https://ggozad.github.io/haiku.rag/cli/) - Command reference
 - [Python API](https://ggozad.github.io/haiku.rag/python/) - Complete API docs
 - [Agents](https://ggozad.github.io/haiku.rag/agents/) - QA agent and multi-agent research
+- [MCP Server](https://ggozad.github.io/haiku.rag/mcp/) - Model Context Protocol integration
 - [Benchmarks](https://ggozad.github.io/haiku.rag/benchmarks/) - Performance Benchmarks
+
+mcp-name: io.github.ggozad/haiku-rag
````
haiku_rag-0.19.3.dist-info/RECORD (ADDED, +6 lines)

```diff
@@ -0,0 +1,6 @@
+README.md,sha256=3Y4bUYJ-gnWPH_zeCdHK_NZcej9JvA2JdnKoSY0eu6o,6377
+haiku_rag-0.19.3.dist-info/METADATA,sha256=BUpAqkIkKsZTQ1mGopYaSKlvlUC6cRBQtpvLfz-5h5M,7338
+haiku_rag-0.19.3.dist-info/WHEEL,sha256=WLgqFyCfm_KASv4WHyYy0P3pM_m7J5L9k2skdKLirC8,87
+haiku_rag-0.19.3.dist-info/entry_points.txt,sha256=G1U3nAkNd5YDYd4v0tuYFbriz0i-JheCsFuT9kIoGCI,48
+haiku_rag-0.19.3.dist-info/licenses/LICENSE,sha256=eXZrWjSk9PwYFNK9yUczl3oPl95Z4V9UXH7bPN46iPo,1065
+haiku_rag-0.19.3.dist-info/RECORD,,
```
haiku/rag/__init__.py (DELETED)

File without changes.