@blackdrome/open-deepresearch 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md ADDED
@@ -0,0 +1,18 @@
# Changelog

## 0.1.0 - 2026-03-02

- Initial standalone release of `@upai/open-deepresearch`.
- Added full open DeepResearch pipeline:
  - planning
  - adaptive retrieval rounds
  - ranking + domain diversification
  - claim graphing and contradiction checks
  - citation-grounded synthesis
  - truth-contract critique/repair pass
- Added provider adapters:
  - OpenRouter (`OpenRouterAdapter`)
  - NVIDIA NIM (`NimAdapter`)
  - custom function fallback (`FunctionLlmAdapter`)
- Added pluggable HTTP JSON search adapter (`HttpJsonSearchAdapter`).
- Added public factory API (`createOpenDeepResearchEngine`).
package/LICENSE ADDED
@@ -0,0 +1,17 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

Copyright 2026 BLACKDROME UPAI

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
package/README.md ADDED
@@ -0,0 +1,133 @@
# UPAI Open DeepResearch Engine

Standalone open-source DeepResearch engine with:

- adaptive multi-round retrieval
- ranking + domain diversification
- claim clustering + contradiction checks
- citation-grounded synthesis with a critique pass
- pluggable adapters for search and LLM providers
- built-in providers for **OpenRouter** and **NVIDIA NIM**

## Install

```bash
npm install @upai/open-deepresearch
```

## Quick Start

```ts
import {
  HttpJsonSearchAdapter,
  createOpenDeepResearchEngine,
} from "@upai/open-deepresearch";

const searchAdapter = new HttpJsonSearchAdapter({
  endpoint: process.env.SEARCH_ENDPOINT!,
  apiKey: process.env.SEARCH_API_KEY,
  apiKeyHeader: "Authorization",
  staticPayload: { provider: "free", mode: "web", num: 10 },
});

const engine = createOpenDeepResearchEngine({
  searchAdapter,
  defaultProvider: "openrouter",
  openRouter: {
    apiKey: process.env.OPENROUTER_API_KEY!,
    model: process.env.OPENROUTER_MODEL || "openai/gpt-4o-mini",
  },
  nim: {
    apiKey: process.env.NIM_API_KEY!,
    model: process.env.NIM_MODEL || "meta/llama-3.1-70b-instruct",
  },
  fallbackComplete: async (prompt) => {
    return `Fallback handler received prompt length=${prompt.length}`;
  },
});

const run = await engine.run("Best RAG architecture for support chat in 2026", {
  depth: "deep",
  providerHint: "openrouter",
  onProgress: (p) => console.log(`[${p.stage}] ${p.message}`),
});

console.log(run.finalAnswer);
console.log(run.sourcesForMessage.slice(0, 5));
```

## Required API Keys

At minimum, configure one LLM provider and one search endpoint.

### OpenRouter

- `OPENROUTER_API_KEY`
- `OPENROUTER_MODEL` (example: `openai/gpt-4o-mini`)

### NVIDIA NIM

- `NIM_API_KEY`
- `NIM_MODEL` (example: `meta/llama-3.1-70b-instruct`)

### Search

Use any endpoint that returns JSON in either of these shapes:

- `results: [{ title, snippet, url }]`
- `items: [{ title, snippet, url|link }]`

Then configure:

- `SEARCH_ENDPOINT`
- `SEARCH_API_KEY` (optional, depending on your backend)

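To make the two accepted shapes concrete, here is an illustrative sketch of how such a response could be normalized into one result type. The `normalizeSearchResponse` helper below is hypothetical (the package's `HttpJsonSearchAdapter` does its own normalization internally):

```typescript
// Hypothetical sketch: map either accepted JSON shape (`results` or
// `items`) into a single normalized search-result type.
interface SearchResult {
  title: string;
  snippet: string;
  url: string;
}

function normalizeSearchResponse(body: any): SearchResult[] {
  // Prefer `results`; fall back to `items`; tolerate neither.
  const raw: any[] = body?.results ?? body?.items ?? [];
  return raw.map((r) => ({
    title: r.title ?? "",
    snippet: r.snippet ?? "",
    // `items` entries may carry `link` instead of `url`.
    url: r.url ?? r.link ?? "",
  }));
}
```

Any backend that can be reshaped into this form works as a search source.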
## Engine Pipeline

1. **Plan**: Creates a multi-angle query set (facts, recency, benchmarks, counterpoints, docs).
2. **Retrieve**: Executes searches with domain/forum constraints.
3. **Rank**: Scores results by relevance, authority, recency, and constraint matching.
4. **Diversify**: Caps results per domain to avoid over-concentration.
5. **Verify**: Builds a claim graph and flags likely contradictions.
6. **Synthesize**: Produces a direct answer with citations.
7. **Critique**: Enforces the truth contract and repairs the response when possible.

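The diversification step can be pictured as a simple per-domain cap over the ranked list. The function below is an illustrative approximation, not the package's internal implementation, and the default cap of 3 is an assumption:

```typescript
// Illustrative per-domain cap: preserve ranked order, but admit at most
// `maxPerDomain` results from any single hostname.
function diversifyByDomain<T extends { url: string }>(
  ranked: T[],
  maxPerDomain = 3,
): T[] {
  const perDomain = new Map<string, number>();
  const kept: T[] = [];
  for (const result of ranked) {
    let domain: string;
    try {
      domain = new URL(result.url).hostname;
    } catch {
      continue; // skip malformed URLs
    }
    const seen = perDomain.get(domain) ?? 0;
    if (seen < maxPerDomain) {
      perDomain.set(domain, seen + 1);
      kept.push(result);
    }
  }
  return kept;
}
```

Because the cap runs after ranking, the strongest results from each domain survive while long tails from a single site are trimmed.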
## Domain / Forum Constraints

The engine auto-detects constraints from the user's query language, for example:

- `site:reddit.com best vector db`
- `only from stack overflow`
- `discussion on hacker news`

You can also pass explicit constraints via `run(..., { constraints })`.

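Detection of the `site:` form can be approximated with a small parser. The `extractSiteConstraints` helper below is a hypothetical sketch, not the engine's actual detector, and the phrase-based forms ("only from stack overflow") would need extra pattern matching on top:

```typescript
// Hypothetical sketch: pull `site:<domain>` tokens out of a query and
// return the cleaned query plus the detected domain constraints.
function extractSiteConstraints(query: string): {
  cleanedQuery: string;
  domains: string[];
} {
  const domains: string[] = [];
  const cleanedQuery = query
    .replace(/\bsite:(\S+)/gi, (_match, domain: string) => {
      domains.push(domain.toLowerCase());
      return ""; // drop the operator from the text sent to search
    })
    .replace(/\s+/g, " ")
    .trim();
  return { cleanedQuery, domains };
}
```
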
## Progress Events

`run` accepts a progress hook for UIs and logs:

```ts
onProgress: (progress) => {
  // progress.stage: planning | retrieving | ranking | verifying | synthesizing | critiquing
}
```

## Exports

- `OpenDeepResearchEngine`
- `createOpenDeepResearchEngine`
- `OpenRouterAdapter`
- `NimAdapter`
- `HttpJsonSearchAdapter`
- `FunctionLlmAdapter`
- all core types and constraint utilities

## Notes

- Node 20+ recommended.
- The library uses `fetch` and standard Web APIs.
- For security, keep provider keys server-side.

## License

Apache-2.0