@researai/deepscientist 1.5.15 → 1.5.17
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +385 -104
- package/bin/ds.js +1241 -110
- package/docs/en/00_QUICK_START.md +100 -19
- package/docs/en/01_SETTINGS_REFERENCE.md +34 -1
- package/docs/en/02_START_RESEARCH_GUIDE.md +7 -0
- package/docs/en/05_TUI_GUIDE.md +6 -0
- package/docs/en/06_RUNTIME_AND_CANVAS.md +4 -3
- package/docs/en/09_DOCTOR.md +25 -8
- package/docs/en/14_PROMPT_SKILLS_AND_MCP_GUIDE.md +63 -13
- package/docs/en/15_CODEX_PROVIDER_SETUP.md +37 -11
- package/docs/en/19_EXTERNAL_CONTROLLER_GUIDE.md +226 -0
- package/docs/en/19_LOCAL_BROWSER_AUTH.md +70 -0
- package/docs/en/20_WORKSPACE_MODES_GUIDE.md +250 -0
- package/docs/en/21_LOCAL_MODEL_BACKENDS_GUIDE.md +283 -0
- package/docs/en/91_DEVELOPMENT.md +237 -0
- package/docs/en/README.md +24 -2
- package/docs/zh/00_QUICK_START.md +89 -19
- package/docs/zh/01_SETTINGS_REFERENCE.md +34 -1
- package/docs/zh/02_START_RESEARCH_GUIDE.md +7 -0
- package/docs/zh/05_TUI_GUIDE.md +6 -0
- package/docs/zh/09_DOCTOR.md +26 -9
- package/docs/zh/14_PROMPT_SKILLS_AND_MCP_GUIDE.md +63 -13
- package/docs/zh/15_CODEX_PROVIDER_SETUP.md +37 -11
- package/docs/zh/19_EXTERNAL_CONTROLLER_GUIDE.md +226 -0
- package/docs/zh/19_LOCAL_BROWSER_AUTH.md +68 -0
- package/docs/zh/20_WORKSPACE_MODES_GUIDE.md +251 -0
- package/docs/zh/21_LOCAL_MODEL_BACKENDS_GUIDE.md +281 -0
- package/docs/zh/README.md +24 -2
- package/install.sh +46 -4
- package/package.json +2 -1
- package/pyproject.toml +1 -1
- package/src/deepscientist/__init__.py +1 -1
- package/src/deepscientist/acp/envelope.py +6 -0
- package/src/deepscientist/artifact/service.py +647 -22
- package/src/deepscientist/bash_exec/service.py +234 -9
- package/src/deepscientist/bridges/connectors.py +8 -2
- package/src/deepscientist/cli.py +115 -19
- package/src/deepscientist/codex_cli_compat.py +367 -22
- package/src/deepscientist/config/models.py +2 -1
- package/src/deepscientist/config/service.py +183 -13
- package/src/deepscientist/daemon/api/handlers.py +255 -31
- package/src/deepscientist/daemon/api/router.py +9 -0
- package/src/deepscientist/daemon/app.py +1146 -105
- package/src/deepscientist/diagnostics/__init__.py +6 -0
- package/src/deepscientist/diagnostics/runner_failures.py +130 -0
- package/src/deepscientist/doctor.py +207 -3
- package/src/deepscientist/gitops/__init__.py +10 -1
- package/src/deepscientist/gitops/diff.py +129 -0
- package/src/deepscientist/gitops/service.py +4 -1
- package/src/deepscientist/mcp/server.py +39 -0
- package/src/deepscientist/prompts/builder.py +275 -34
- package/src/deepscientist/quest/layout.py +15 -2
- package/src/deepscientist/quest/service.py +707 -55
- package/src/deepscientist/quest/stage_views.py +6 -1
- package/src/deepscientist/runners/codex.py +143 -43
- package/src/deepscientist/shared.py +19 -0
- package/src/deepscientist/skills/__init__.py +2 -2
- package/src/deepscientist/skills/installer.py +196 -5
- package/src/deepscientist/skills/registry.py +66 -0
- package/src/prompts/connectors/qq.md +18 -8
- package/src/prompts/connectors/weixin.md +16 -6
- package/src/prompts/contracts/shared_interaction.md +14 -2
- package/src/prompts/system.md +23 -5
- package/src/prompts/system_copilot.md +56 -0
- package/src/skills/analysis-campaign/SKILL.md +1 -0
- package/src/skills/baseline/SKILL.md +8 -0
- package/src/skills/decision/SKILL.md +8 -0
- package/src/skills/experiment/SKILL.md +8 -0
- package/src/skills/figure-polish/SKILL.md +1 -0
- package/src/skills/finalize/SKILL.md +1 -0
- package/src/skills/idea/SKILL.md +1 -0
- package/src/skills/intake-audit/SKILL.md +8 -0
- package/src/skills/mentor/SKILL.md +217 -0
- package/src/skills/mentor/references/correction-rules.md +210 -0
- package/src/skills/mentor/references/knowledge-profile.md +91 -0
- package/src/skills/mentor/references/persona-profile.md +138 -0
- package/src/skills/mentor/references/taste-profile.md +128 -0
- package/src/skills/mentor/references/thought-style-profile.md +138 -0
- package/src/skills/mentor/references/work-profile.md +289 -0
- package/src/skills/mentor/references/workflow-profile.md +240 -0
- package/src/skills/optimize/SKILL.md +1 -0
- package/src/skills/rebuttal/SKILL.md +1 -0
- package/src/skills/review/SKILL.md +1 -0
- package/src/skills/scout/SKILL.md +8 -0
- package/src/skills/write/SKILL.md +1 -0
- package/src/tui/dist/app/AppContainer.js +19 -11
- package/src/tui/dist/index.js +4 -1
- package/src/tui/dist/lib/api.js +33 -3
- package/src/tui/package.json +1 -1
- package/src/ui/dist/assets/AiManusChatView-Bv-Z8YpU.js +204 -0
- package/src/ui/dist/assets/AnalysisPlugin-BCKAfjba.js +1 -0
- package/src/ui/dist/assets/CliPlugin-BCKcpc35.js +109 -0
- package/src/ui/dist/assets/CodeEditorPlugin-DbOfSJ8K.js +2 -0
- package/src/ui/dist/assets/CodeViewerPlugin-CbaFRrUU.js +270 -0
- package/src/ui/dist/assets/DocViewerPlugin-DAjLVeQD.js +7 -0
- package/src/ui/dist/assets/GitCommitViewerPlugin-CIUqbUDO.js +1 -0
- package/src/ui/dist/assets/GitDiffViewerPlugin-CQACjoAA.js +6 -0
- package/src/ui/dist/assets/GitSnapshotViewer-0r4nLPke.js +30 -0
- package/src/ui/dist/assets/ImageViewerPlugin-nBOmI2v_.js +26 -0
- package/src/ui/dist/assets/LabCopilotPanel-BHxOxF4z.js +14 -0
- package/src/ui/dist/assets/LabPlugin-BKoZGs95.js +22 -0
- package/src/ui/dist/assets/LatexPlugin-ZwtV8pIp.js +25 -0
- package/src/ui/dist/assets/MarkdownViewerPlugin-DKqVfKyW.js +128 -0
- package/src/ui/dist/assets/MarketplacePlugin-BwxStZ9D.js +13 -0
- package/src/ui/dist/assets/NotebookEditor-BEQhaQbt.js +81 -0
- package/src/ui/dist/assets/{NotebookEditor-CccQYZjX.css → NotebookEditor-BHH8rdGj.css} +1 -1
- package/src/ui/dist/assets/NotebookEditor-BOr3x3Ej.css +1 -0
- package/src/ui/dist/assets/NotebookEditor-DB9N_T9q.js +361 -0
- package/src/ui/dist/assets/PdfLoader-Cy5jtWrr.css +1 -0
- package/src/ui/dist/assets/PdfLoader-eWBONbQP.js +16 -0
- package/src/ui/dist/assets/PdfMarkdownPlugin-D22YOZL3.js +1 -0
- package/src/ui/dist/assets/PdfViewerPlugin-c-RK9DLM.js +17 -0
- package/src/ui/dist/assets/PdfViewerPlugin-nwwE-fjJ.css +1 -0
- package/src/ui/dist/assets/SearchPlugin-CxF9ytAx.js +16 -0
- package/src/ui/dist/assets/SearchPlugin-DA4en4hK.css +1 -0
- package/src/ui/dist/assets/TextViewerPlugin-C5xqeeUH.js +54 -0
- package/src/ui/dist/assets/VNCViewer-BoLGLnHz.js +11 -0
- package/src/ui/dist/assets/bot-DREQOxzP.js +6 -0
- package/src/ui/dist/assets/browser-CTB2jwNe.js +8 -0
- package/src/ui/dist/assets/chevron-up-C9Qpx4DE.js +6 -0
- package/src/ui/dist/assets/code-WlFHE7z_.js +6 -0
- package/src/ui/dist/assets/file-content-BZMz3RYp.js +1 -0
- package/src/ui/dist/assets/file-diff-panel-CQhw0jS2.js +1 -0
- package/src/ui/dist/assets/file-jump-queue-DA-SdG__.js +1 -0
- package/src/ui/dist/assets/file-socket-CfQPKQKj.js +1 -0
- package/src/ui/dist/assets/git-commit-horizontal-DxZ8DCZh.js +6 -0
- package/src/ui/dist/assets/image-Bgl4VIyx.js +6 -0
- package/src/ui/dist/assets/index-BpV6lusQ.css +33 -0
- package/src/ui/dist/assets/index-CBNVuWcP.js +2496 -0
- package/src/ui/dist/assets/index-CwNu1aH4.js +11 -0
- package/src/ui/dist/assets/index-DrUnlf6K.js +1 -0
- package/src/ui/dist/assets/index-NW-h8VzN.js +1 -0
- package/src/ui/dist/assets/monaco-CiHMMNH_.js +1 -0
- package/src/ui/dist/assets/pdf-effect-queue-J8OnM0jE.js +6 -0
- package/src/ui/dist/assets/plugin-monaco-C8UgLomw.js +19 -0
- package/src/ui/dist/assets/plugin-notebook-HbW2K-1c.js +169 -0
- package/src/ui/dist/assets/plugin-pdf-CR8hgQBV.js +357 -0
- package/src/ui/dist/assets/plugin-terminal-MXFIPun8.js +227 -0
- package/src/ui/dist/assets/popover-CLc0pPP8.js +1 -0
- package/src/ui/dist/assets/project-sync-C9IdzdZW.js +1 -0
- package/src/ui/dist/assets/select-Cs2PmzwL.js +11 -0
- package/src/ui/dist/assets/sigma-ClKcHAXm.js +6 -0
- package/src/ui/dist/assets/trash-DwpbFr3w.js +11 -0
- package/src/ui/dist/assets/useCliAccess-NQ8m0Let.js +1 -0
- package/src/ui/dist/assets/useFileDiffOverlay-FuhcnKiw.js +1 -0
- package/src/ui/dist/assets/wrap-text-BC-Hltpd.js +11 -0
- package/src/ui/dist/assets/zoom-out-E_gaeAxL.js +11 -0
- package/src/ui/dist/index.html +5 -2
- package/src/ui/dist/assets/AiManusChatView-DDjbFnbt.js +0 -26597
- package/src/ui/dist/assets/AnalysisPlugin-Yb5IdmaU.js +0 -123
- package/src/ui/dist/assets/CliPlugin-e64sreyu.js +0 -31037
- package/src/ui/dist/assets/CodeEditorPlugin-C4D2TIkU.js +0 -427
- package/src/ui/dist/assets/CodeViewerPlugin-BVoNZIvC.js +0 -905
- package/src/ui/dist/assets/DocViewerPlugin-CLChbllo.js +0 -278
- package/src/ui/dist/assets/GitDiffViewerPlugin-C4xeFyFQ.js +0 -2661
- package/src/ui/dist/assets/ImageViewerPlugin-OiMUAcLi.js +0 -500
- package/src/ui/dist/assets/LabCopilotPanel-BjD2ThQF.js +0 -4104
- package/src/ui/dist/assets/LabPlugin-DQPg-NrB.js +0 -2677
- package/src/ui/dist/assets/LatexPlugin-CI05XAV9.js +0 -1792
- package/src/ui/dist/assets/MarkdownViewerPlugin-DpeBLYZf.js +0 -308
- package/src/ui/dist/assets/MarketplacePlugin-DolE58Q2.js +0 -413
- package/src/ui/dist/assets/NotebookEditor-7Qm2rSWD.js +0 -4214
- package/src/ui/dist/assets/NotebookEditor-C1kWaxKi.js +0 -84873
- package/src/ui/dist/assets/NotebookEditor-C3VQ7ylN.css +0 -1405
- package/src/ui/dist/assets/PdfLoader-BfOHw8Zw.js +0 -25468
- package/src/ui/dist/assets/PdfLoader-C-Y707R3.css +0 -49
- package/src/ui/dist/assets/PdfMarkdownPlugin-BulDREv1.js +0 -409
- package/src/ui/dist/assets/PdfViewerPlugin-C-daaOaL.js +0 -3095
- package/src/ui/dist/assets/PdfViewerPlugin-DQ11QcSf.css +0 -3627
- package/src/ui/dist/assets/SearchPlugin-CjpaiJ3A.js +0 -741
- package/src/ui/dist/assets/SearchPlugin-DDMrGDkh.css +0 -379
- package/src/ui/dist/assets/TextViewerPlugin-BxIyqPQC.js +0 -472
- package/src/ui/dist/assets/VNCViewer-HAg9mF7M.js +0 -18821
- package/src/ui/dist/assets/awareness-C0NPR2Dj.js +0 -292
- package/src/ui/dist/assets/bot-0DYntytV.js +0 -21
- package/src/ui/dist/assets/browser-BAcuE0Xj.js +0 -2895
- package/src/ui/dist/assets/code-B20Slj_w.js +0 -17
- package/src/ui/dist/assets/file-content-DT24KFma.js +0 -377
- package/src/ui/dist/assets/file-diff-panel-DK13YPql.js +0 -92
- package/src/ui/dist/assets/file-jump-queue-r5XKgJEV.js +0 -16
- package/src/ui/dist/assets/file-socket-B4T2o4nR.js +0 -58
- package/src/ui/dist/assets/function-B5QZkkHC.js +0 -1895
- package/src/ui/dist/assets/image-DSeR_sDS.js +0 -18
- package/src/ui/dist/assets/index-BrFje2Uk.js +0 -120
- package/src/ui/dist/assets/index-BwRJaoTl.js +0 -25
- package/src/ui/dist/assets/index-D_E4281X.js +0 -221322
- package/src/ui/dist/assets/index-DnYB3xb1.js +0 -159
- package/src/ui/dist/assets/index-G7AcWcMu.css +0 -12594
- package/src/ui/dist/assets/monaco-LExaAN3Y.js +0 -623
- package/src/ui/dist/assets/pdf-effect-queue-BJk5okWJ.js +0 -47
- package/src/ui/dist/assets/pdf_viewer-e0g1is2C.js +0 -8206
- package/src/ui/dist/assets/popover-D3Gg_FoV.js +0 -476
- package/src/ui/dist/assets/project-sync-C_ygLlVU.js +0 -297
- package/src/ui/dist/assets/select-CpAK6uWm.js +0 -1690
- package/src/ui/dist/assets/sigma-DEccaSgk.js +0 -22
- package/src/ui/dist/assets/square-check-big-uUfyVsbD.js +0 -17
- package/src/ui/dist/assets/trash-CXvwwSe8.js +0 -32
- package/src/ui/dist/assets/useCliAccess-Bnop4mgR.js +0 -957
- package/src/ui/dist/assets/useFileDiffOverlay-B8eUAX0I.js +0 -53
- package/src/ui/dist/assets/wrap-text-9vbOBpkW.js +0 -35
- package/src/ui/dist/assets/yjs-DncrqiZ8.js +0 -11243
- package/src/ui/dist/assets/zoom-out-BgVMmOW4.js +0 -34
@@ -0,0 +1,283 @@

# 21 Local Model Backends Guide: vLLM, Ollama, and SGLang

This guide explains how to run DeepScientist against a local OpenAI-compatible model backend through Codex.

The key point is simple:

- the current Codex CLI requires `wire_api = "responses"`
- a backend that only works through `/v1/chat/completions` is not enough
- you must verify `/v1/responses` before expecting `ds` or `ds doctor` to succeed

There is one practical fallback:

- if your backend is chat-only, you may still be able to use it through **Codex CLI `0.57.0`**
- that older path can still work with `wire_api = "chat"` when the provider is configured at the top level
- DeepScientist now checks this automatically during the Codex startup probe; if it sees `wire_api = "chat"` on any active provider config, it requires `codex-cli 0.57.0` before continuing

## 1. What DeepScientist actually depends on

DeepScientist does not talk to vLLM, Ollama, or SGLang directly.

It talks to:

- `codex`
- and `codex` talks to your configured provider profile in `~/.codex/config.toml`

So the compatibility chain is:

1. your local backend
2. Codex profile
3. Codex startup probe
4. DeepScientist runner

If step 2 or step 3 fails, DeepScientist cannot start the Codex runner successfully.

## 2. The current Codex rule you must know

On the current Codex CLI:

- `wire_api = "responses"` is supported
- `wire_api = "chat"` is rejected

In practice that means:

- `vLLM`: recommended if its OpenAI-compatible server exposes `/v1/responses`
- `Ollama`: only use it if your installed version exposes `/v1/responses`
- `SGLang`: if your deployment only supports `/v1/chat/completions`, it is not compatible with the latest Codex runner

## 2.1 Support summary

| Backend | `/v1/chat/completions` | `/v1/responses` | Latest Codex | Codex `0.57.0` fallback |
|---|---|---|---|---|
| vLLM | yes | yes | supported | usually unnecessary |
| Ollama | yes | depends on version | supported only when `/v1/responses` works | possible if it is chat-only |
| SGLang | yes | often missing or incomplete | not supported when it is chat-only | possible fallback path |

## 3. Test the backend first

Before touching DeepScientist, verify the backend directly.

### Step 1: list models

```bash
curl http://127.0.0.1:8004/v1/models \
  -H "Authorization: Bearer 1234"
```

You need one real model id from this output, for example:

```text
/model/gpt-oss-120b
```
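
If you prefer to script this step, here is a small stdlib-only Python sketch that pulls model ids out of an OpenAI-compatible `/v1/models` response. The inlined payload is illustrative; in practice you would feed in the real `curl` output:

```python
import json

def list_model_ids(models_payload: str) -> list[str]:
    """Return model ids from an OpenAI-compatible /v1/models response."""
    body = json.loads(models_payload)
    # The OpenAI-compatible shape is {"object": "list", "data": [{"id": ...}, ...]}
    return [entry["id"] for entry in body.get("data", [])]

# Illustrative payload; substitute the real `curl .../v1/models` output.
sample = '{"object": "list", "data": [{"id": "/model/gpt-oss-120b", "object": "model", "owned_by": "sglang"}]}'
print(list_model_ids(sample))
```

Any id printed here is what goes into the `model` field of your Codex profile later.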

### Step 2: test chat completions

```bash
curl http://127.0.0.1:8004/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer 1234" \
  -d '{
    "model": "/model/gpt-oss-120b",
    "messages": [
      { "role": "user", "content": "Reply with exactly HELLO." }
    ]
  }'
```

If this works, the backend is at least OpenAI-chat-compatible.

### Step 3: test the Responses API

```bash
curl http://127.0.0.1:8004/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer 1234" \
  -d '{
    "model": "/model/gpt-oss-120b",
    "input": "Reply with exactly HELLO."
  }'
```

This is the decisive test.

If `/v1/responses` fails, the latest Codex CLI will not work with this backend profile.
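
The decision rule the two probes feed into can be written down explicitly. This is a sketch of the guide's rule only, not a DeepScientist or Codex API; run the curl probes first and record their outcomes:

```python
def codex_path_for(chat_ok: bool, responses_ok: bool) -> str:
    """Map the two endpoint probe results onto the compatibility rule above."""
    if responses_ok:
        return 'latest Codex CLI (wire_api = "responses")'
    if chat_ok:
        return 'Codex CLI 0.57.0 fallback (wire_api = "chat")'
    return "not usable through Codex"

# Example: a chat-only backend (chat works, /v1/responses fails).
print(codex_path_for(chat_ok=True, responses_ok=False))
```

A chat-only backend lands on the `0.57.0` fallback path described later in this guide.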

## 4. What we actually observed on this server

We tested the local backend at `http://127.0.0.1:8004/v1`.

Observed behavior:

- `GET /v1/models` succeeded
- `POST /v1/chat/completions` succeeded
- `POST /v1/responses` returned `500 Internal Server Error`
- the `/v1/models` payload reported `owned_by: "sglang"`

So this specific `8004` deployment behaves like a chat-compatible SGLang-style server, not a Codex-compatible Responses backend.

That means:

- it can answer raw chat requests
- but it cannot currently be used by the latest Codex runner
- and therefore DeepScientist cannot use it through the normal Codex path

We also tested an older Codex path:

- latest Codex + `wire_api = "responses"` failed against this backend
- Codex `0.57.0` + top-level `model_provider` / `model` + `wire_api = "chat"` succeeded

So for this server specifically:

- **latest Codex path**: no
- **Codex `0.57.0` fallback**: yes

## 5. Codex profile example for a local Responses-compatible backend

If your backend really supports `/v1/responses`, create a profile like this:

```toml
[model_providers.local_vllm]
name = "local_vllm"
base_url = "http://127.0.0.1:8004/v1"
env_key = "LOCAL_API_KEY"
wire_api = "responses"
requires_openai_auth = false

[profiles.local_vllm]
model = "/model/gpt-oss-120b"
model_provider = "local_vllm"
```

Then test Codex directly first:

```bash
export LOCAL_API_KEY=1234
codex exec --profile local_vllm --json --cd /tmp --skip-git-repo-check - <<'EOF'
Reply with exactly HELLO.
EOF
```

If this fails, do not continue to DeepScientist yet.

## 5.1 Chat-only fallback for Codex `0.57.0`

If your backend only supports `/v1/chat/completions`, you can try this fallback path:

1. install Codex `0.57.0`
2. use `wire_api = "chat"`
3. put `model_provider` and `model` at the top level

Example:

```toml
model = "/model/gpt-oss-120b"
model_provider = "localchat"
approval_policy = "never"
sandbox_mode = "workspace-write"

[model_providers.localchat]
name = "localchat"
base_url = "http://127.0.0.1:8004/v1"
env_key = "LOCAL_API_KEY"
wire_api = "chat"
requires_openai_auth = false
```

Then test:

```bash
export LOCAL_API_KEY=1234
codex exec --json --cd /tmp --skip-git-repo-check - <<'EOF'
Reply with exactly HELLO.
EOF
```

If this older Codex path works, DeepScientist can usually reuse it with the same runner binary and profile strategy.
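
DeepScientist's startup probe enforces the `0.57.0` gate whenever it sees a chat provider; you can pre-check your installed binary the same way. A hedged sketch: the parsing assumes `codex --version` prints something like `codex-cli 0.57.0`, which may differ across builds:

```python
import re

def needs_codex_downgrade(wire_api: str, version_output: str) -> bool:
    """True when wire_api = "chat" is paired with a non-0.57.0 codex binary."""
    if wire_api != "chat":
        return False
    match = re.search(r"(\d+\.\d+\.\d+)", version_output)
    return match is None or match.group(1) != "0.57.0"

print(needs_codex_downgrade("chat", "codex-cli 0.57.0"))  # False: gate satisfied
print(needs_codex_downgrade("chat", "codex-cli 0.99.0"))  # True: downgrade required
```

Feed it the output of `codex --version` along with the `wire_api` from your active provider config.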

## 6. DeepScientist commands after Codex works

Once the direct Codex check works, run:

```bash
ds doctor --codex-profile local_vllm
ds --codex-profile local_vllm
```

`ds doctor` is the canonical command.

`ds docker` is only a legacy alias for `ds doctor`; it is not a Docker deployment command.

If you want to persist it in DeepScientist:

```yaml
codex:
  enabled: true
  binary: codex
  config_dir: ~/.codex
  profile: local_vllm
  model: inherit
  model_reasoning_effort: high
  approval_policy: never
  sandbox_mode: danger-full-access
```

## 7. Backend compatibility summary

### vLLM

Recommended.

Use it when:

- `/v1/models` works
- `/v1/responses` works
- the model id is visible and stable

If those are true, vLLM is the cleanest current local path for Codex + DeepScientist.

### Ollama

Conditionally supported.

Use it only when:

- your Ollama version exposes `/v1/responses`
- your target model works through that endpoint

If Ollama only gives you chat-completions semantics, it is not enough for the latest Codex CLI, but it may still be usable through Codex `0.57.0`.

### SGLang

Be careful.

If your SGLang deployment behaves like this:

- `/v1/chat/completions` works
- `/v1/responses` fails

then it is not currently compatible with the latest Codex runner.

If you must use that backend anyway, the realistic fallback is Codex `0.57.0` with `wire_api = "chat"`.

## 8. What to do if you only have chat-completions

If your backend only supports `/v1/chat/completions`, you currently have four practical options:

1. switch to a Responses-compatible backend such as vLLM
2. upgrade to an Ollama release that really exposes `/v1/responses`
3. downgrade the Codex CLI path to `0.57.0` and use `wire_api = "chat"`
4. place a Responses-compatible proxy in front of the backend

Right now, this is a Codex CLI limitation, not a DeepScientist-only settings mistake.
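
For option 4, the core of such a proxy is a payload translation from the Responses request shape to the chat-completions shape. Here is a minimal sketch of that translation step only; a real proxy also has to forward auth, stream, and translate the reply back, and nothing here is a DeepScientist or Codex API:

```python
def responses_to_chat(responses_payload: dict) -> dict:
    """Translate a minimal /v1/responses request into a /v1/chat/completions request."""
    body = {"model": responses_payload["model"], "messages": []}
    # /v1/responses accepts a bare string or a list of input items.
    user_input = responses_payload.get("input", "")
    if isinstance(user_input, str):
        body["messages"].append({"role": "user", "content": user_input})
    else:
        for item in user_input:
            body["messages"].append(
                {"role": item.get("role", "user"), "content": item.get("content", "")}
            )
    # Responses-style instructions map onto a leading system message.
    if "instructions" in responses_payload:
        body["messages"].insert(
            0, {"role": "system", "content": responses_payload["instructions"]}
        )
    return body

print(responses_to_chat({"model": "/model/gpt-oss-120b", "input": "Reply with exactly HELLO."}))
```

The reverse translation (chat-completion reply back into a Responses-shaped reply, including the streaming event format) is the harder half, which is why a maintained proxy is usually preferable to writing one.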

## 9. Recommended workflow

Use this order every time:

1. test `/v1/models`
2. test `/v1/responses`
3. test `codex exec --profile <name>`
4. test `ds doctor --codex-profile <name>`
5. only then launch `ds --codex-profile <name>`

If step 2 fails, stop there. Do not expect DeepScientist to succeed through the latest Codex path.

@@ -179,6 +179,243 @@ match = service.resolve_binary("example-binary", preferred_tools=("example",))

- keep clear source reporting such as `tinytex` versus `path`
- add tests for registration, status, and binary resolution

## Extending Core Runtime

This section is the maintainer checklist for adding one new built-in MCP tool, one new skill, or one new connector.

Keep the extension shape close to the repository's existing registries and contracts. Do not invent a parallel path when the current registry or prompt system already covers the need.

### Add a built-in MCP tool

The public MCP surface must stay limited to:

- `memory`
- `artifact`
- `bash_exec`

Do not add a new public namespace such as `git`, `connector`, or `runtime_tool`.

If you need new Git behavior, add it under `artifact`.
If you need new durable shell behavior, add it under `bash_exec`.

#### Files to change

- `src/deepscientist/mcp/server.py`
  - add the new `@server.tool(...)` handler under `build_memory_server(...)`, `build_artifact_server(...)`, or `build_bash_exec_server(...)`
- `src/deepscientist/mcp/context.py`
  - only if the new tool needs extra quest/runtime context wiring
- `src/deepscientist/runners/codex.py`
  - if the tool should appear in built-in MCP approval policy defaults, add it under `_BUILTIN_MCP_TOOL_APPROVALS`
- `docs/en/07_MEMORY_AND_MCP.md`
  - update user-visible semantics if the tool changes how the namespace should be used
- `docs/en/14_PROMPT_SKILLS_AND_MCP_GUIDE.md`
  - update the built-in MCP description if the tool's meaning materially changes

#### Implementation rules

1. Keep the handler thin. Put durable state changes in the underlying service layer such as `ArtifactService`, `MemoryService`, or `BashExecService`.
2. Use `McpContext` for quest-local paths instead of reconstructing runtime state ad hoc.
3. Use read-only annotations for non-mutating tools.
4. If the tool changes durable quest state but does not itself send a user-visible message, follow the existing artifact watchdog pattern in `mcp/server.py`.
5. Do not bypass the current namespace meaning. Example: do not hide shell execution inside `artifact`.
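
Rule 1 in schematic form: the handler validates and delegates; the service owns durable state. Everything below is a stand-in sketch — `FakeServer`, `FakeArtifactService`, and the tool name are simplified mocks, not the real `mcp/server.py` wiring:

```python
from dataclasses import dataclass, field

@dataclass
class FakeArtifactService:
    """Stand-in for the real service layer that owns durable state changes."""
    notes: list = field(default_factory=list)

    def record_note(self, text: str) -> int:
        self.notes.append(text)
        return len(self.notes)

@dataclass
class FakeServer:
    """Stand-in for the MCP server object; collects registered tool handlers."""
    tools: dict = field(default_factory=dict)

    def tool(self, name: str):
        def register(fn):
            self.tools[name] = fn
            return fn
        return register

service = FakeArtifactService()
server = FakeServer()

# The handler stays thin: validate input, delegate to the service, shape the reply.
@server.tool("artifact_record_note")
def artifact_record_note(text: str) -> dict:
    if not text.strip():
        return {"ok": False, "error": "empty note"}
    note_id = service.record_note(text.strip())
    return {"ok": True, "note_id": note_id}

print(server.tools["artifact_record_note"]("hello"))
```

The point of the shape: the handler carries no durable state of its own, so the service layer stays independently testable.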

#### Minimum test checklist

- `tests/test_mcp_servers.py`
  - namespace wiring and tool call behavior
- `tests/test_memory_and_artifact.py`
  - if the tool changes artifact or memory semantics
- `tests/test_daemon_api.py`
  - if API payloads or quest projections depend on the new tool's outputs

### Add a skill

Skills are discovered from disk. The canonical location is:

- `src/skills/<skill_id>/SKILL.md`

The runtime discovers skills through:

- `src/deepscientist/skills/registry.py`

The prompt builder projects them through:

- `src/deepscientist/prompts/builder.py`
- `src/deepscientist/skills/installer.py`

#### Minimal skill shape

Create a directory:

```text
src/skills/<skill_id>/
```

and add:

```md
---
name: my-skill
description: One-line purpose statement.
skill_role: stage
skill_order: 60
---

# My Skill
...
```

Supported `skill_role` values are:

- `stage`
- `companion`
- `custom`
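
Before wiring a new skill into the registry, it can help to check the frontmatter shape mechanically. A stdlib-only sketch; the real registry may parse frontmatter differently, and the sample mirrors the `my-skill` example above:

```python
ALLOWED_ROLES = {"stage", "companion", "custom"}

def parse_frontmatter(skill_md: str) -> dict:
    """Parse the leading `---` frontmatter block of a SKILL.md into a dict."""
    lines = skill_md.strip().splitlines()
    assert lines[0] == "---", "SKILL.md must start with a frontmatter block"
    end = lines[1:].index("---") + 1  # position of the closing `---`
    fields = {}
    for line in lines[1:end]:
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields

sample = """---
name: my-skill
description: One-line purpose statement.
skill_role: stage
skill_order: 60
---

# My Skill
"""

meta = parse_frontmatter(sample)
assert meta["skill_role"] in ALLOWED_ROLES
print(meta["name"], meta["skill_role"])
```

A check like this catches a misspelled `skill_role` before the skill silently fails to land in the stage chain.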
|
|
273
|
+
|
|
274
|
+
Optional projected agent files:
|
|
275
|
+
|
|
276
|
+
- `src/skills/<skill_id>/agents/openai.yaml`
|
|
277
|
+
- `src/skills/<skill_id>/agents/claude.md`
|
|
278
|
+
|
|
279
|
+
#### Files to change
|
|
280
|
+
|
|
281
|
+
- `src/skills/<skill_id>/SKILL.md`
|
|
282
|
+
- required
|
|
283
|
+
- `src/deepscientist/skills/registry.py`
|
|
284
|
+
- update `_DEFAULT_STAGE_SKILLS` or `_DEFAULT_COMPANION_SKILLS` if this is a canonical built-in stage or companion skill rather than an ad hoc custom one
|
|
285
|
+
- `src/deepscientist/prompts/builder.py`
|
|
286
|
+
- update `STAGE_MEMORY_PLAN` if the skill is a real stage that needs a first-class memory retrieval plan
|
|
287
|
+
- `docs/en/14_PROMPT_SKILLS_AND_MCP_GUIDE.md`
|
|
288
|
+
- update the skill guide if the public workflow shape changed
|
|
289
|
+
|
|
290
|
+
#### Skill rules
|
|
291
|
+
|
|
292
|
+
1. Put execution discipline inside `SKILL.md`, not inside a central Python stage scheduler.
|
|
293
|
+
2. Reuse the existing prompt contract language:
|
|
294
|
+
- interaction discipline
|
|
295
|
+
- tool discipline
|
|
296
|
+
- stage purpose
|
|
297
|
+
- completion or handoff rules
|
|
298
|
+
3. If the skill is a canonical stage, give it a stable place in the stage order and memory plan.
|
|
299
|
+
4. If the skill is only auxiliary, prefer `skill_role: companion` or `custom` rather than bloating the canonical stage chain.
|
|
300
|
+
5. Remember that `SkillInstaller` mirrors skills into the active Codex/Claude projection trees. Keep paths and file names stable.
|
|
301
|
+
|
|
302
|
+
#### Minimum test checklist
|
|
303
|
+
|
|
304
|
+
- `tests/test_stage_skills.py`
|
|
305
|
+
- stage ordering and canonical skill availability
|
|
306
|
+
- `tests/test_skill_contracts.py`
|
|
307
|
+
- frontmatter and skill contract expectations
|
|
308
|
+
- `tests/test_prompt_builder.py`
|
|
309
|
+
- prompt builder output and visible skill path blocks
|
|
310
|
+
|
|
311
|
+
### Add a connector
|
|
312
|
+
|
|
313
|
+
Connectors have three distinct layers:
|
|
314
|
+
|
|
315
|
+
1. config and validation
|
|
316
|
+
2. inbound / outbound transport adaptation
|
|
317
|
+
3. optional background runtime lifecycle
|
|
318
|
+
|
|
319
|
+
For a simple connector, changing only one layer is usually not enough.
|
|
320
|
+
|
|
321
|
+
#### Core files to inspect first
|
|
322
|
+
|
|
323
|
+
- config defaults:
|
|
324
|
+
- `src/deepscientist/config/models.py`
|
|
325
|
+
- config validation and live test behavior:
|
|
326
|
+
- `src/deepscientist/config/service.py`
|
|
327
|
+
- connector profile support:
|
|
328
|
+
- `src/deepscientist/connector/connector_profiles.py`
|
|
329
|
+
- inbound/outbound adaptation:
|
|
330
|
+
- `src/deepscientist/bridges/base.py`
|
|
331
|
+
- `src/deepscientist/bridges/connectors.py`
|
|
332
|
+
- `src/deepscientist/bridges/builtins.py`
|
|
333
|
+
- channel delivery:
|
|
334
|
+
- `src/deepscientist/channels/base.py`
|
|
335
|
+
- `src/deepscientist/channels/builtins.py`
|
|
336
|
+
- daemon lifecycle:
|
|
337
|
+
- `src/deepscientist/daemon/app.py`
|
|
338
|
+
- API endpoints:
|
|
339
|
+
- `src/deepscientist/daemon/api/router.py`
|
|
340
|
+
- `src/deepscientist/daemon/api/handlers.py`
|
|
341
- optional prompt fragment:
  - `src/prompts/connectors/<connector>.md`

#### Step-by-step connector checklist

1. Add the default config in `src/deepscientist/config/models.py`.
   - If it is a system connector, add it to `SYSTEM_CONNECTOR_NAMES`.
   - Add its default payload under `default_connectors()`.
2. Add validation and config-test behavior in `src/deepscientist/config/service.py`.
   - Validate required tokens, ids, transport, and live-probe behavior.
3. Decide whether it is profileable.
   - If yes, add a spec in `src/deepscientist/connector/connector_profiles.py`.
4. Add or extend a bridge in `src/deepscientist/bridges/connectors.py`.
   - Subclass `BaseConnectorBridge`.
   - Implement inbound parsing and outbound formatting/delivery as needed.
5. Register the bridge in `src/deepscientist/bridges/builtins.py`.
6. Register a channel in `src/deepscientist/channels/builtins.py`.
   - Use `GenericRelayChannel` when the standard relay flow is enough.
   - Add a dedicated channel class only when the connector needs special outbound behavior.
7. If the connector needs a long-running runtime such as polling, a gateway, a QR session, or a long-lived connection:
   - add the service class under `src/deepscientist/channels/`
   - add daemon state fields in `DaemonApp.__init__`
   - wire startup in `DaemonApp._start_background_connectors()`
   - wire shutdown in `DaemonApp._stop_background_connectors()`
8. If the connector needs custom web/API flows beyond the generic inbound route:
   - update `src/deepscientist/daemon/api/router.py`
   - update `src/deepscientist/daemon/api/handlers.py`
9. If the connector needs connector-specific prompt behavior:
   - add `src/prompts/connectors/<connector>.md`
   - the prompt builder loads it automatically when the connector is active or bound
10. Add user docs if it is a public connector.
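The bridge side of steps 4–6 can be sketched roughly as below. This is an illustration only: the stand-in `BaseConnectorBridge`, `InboundMessage`, and the `parse_inbound`/`format_outbound` names are assumptions made for the sketch, not the actual interface defined in `src/deepscientist/bridges/connectors.py`.

```python
from dataclasses import dataclass


@dataclass
class InboundMessage:
    """Stand-in for the internal message model (hypothetical shape)."""
    sender: str
    text: str


class BaseConnectorBridge:
    """Stand-in for the real base class in src/deepscientist/bridges/connectors.py.

    The method names below are illustrative assumptions, not the actual API.
    """
    name = "base"

    def parse_inbound(self, payload: dict) -> InboundMessage:
        raise NotImplementedError

    def format_outbound(self, message: str) -> dict:
        raise NotImplementedError


class ExampleConnectorBridge(BaseConnectorBridge):
    """Hypothetical bridge: adapts one transport's payload shape to the
    internal message model (inbound) and back (outbound)."""
    name = "example"

    def parse_inbound(self, payload: dict) -> InboundMessage:
        # Inbound adaptation: transport payload -> internal message.
        return InboundMessage(sender=payload["from"], text=payload["body"])

    def format_outbound(self, message: str) -> dict:
        # Outbound adaptation: internal reply -> transport payload.
        return {"body": message}
```

The point of the split is that a bridge like this stays purely about transport adaptation; anything about delivery timing or long-running sessions belongs in the channel instead.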

#### Connector rules

1. Prefer the generic bridge/channel model. Do not special-case a connector inside unrelated quest logic.
2. Keep transport adaptation in bridges and delivery/runtime behavior in channels.
3. Do not add a connector-specific public MCP namespace.
4. If the connector is public, keep route and status behavior consistent with the existing `/api/connectors/...` surface.
5. If it is only a relay connector, try `GenericRelayChannel` first before writing a bespoke channel class.

#### Minimum test checklist

- `tests/test_connector_config_validation.py`
  - config shape and required fields
- `tests/test_connector_bridges.py`
  - inbound parsing and outbound formatting
- connector-specific tests such as:
  - `tests/test_telegram_connector.py`
  - `tests/test_weixin_connector.py`
  - `tests/test_qq_connector.py`
  - `tests/test_whatsapp_local_session.py`
  - or a new connector-specific test file
- `tests/test_daemon_api.py`
  - if new routes, status payloads, or connector availability behavior changed
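The config-shape tests in that checklist usually reduce to asserting on required fields. A minimal self-contained sketch, where `validate_connector_config` and the field names are hypothetical stand-ins rather than the project's actual validator in `src/deepscientist/config/service.py`:

```python
# Hypothetical stand-in for the required-field validation performed in
# src/deepscientist/config/service.py; the field names are illustrative
# assumptions, not the project's actual connector schema.
REQUIRED_FIELDS = ("token", "chat_id")


def validate_connector_config(config: dict) -> list:
    """Return the required fields that are missing or empty."""
    return [field for field in REQUIRED_FIELDS if not config.get(field)]
```

A test file would then assert that an incomplete config reports its missing fields and a complete one reports none.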

### Quick extension map

Use this when you only need the shortest file-level reminder:

- new MCP tool:
  - `src/deepscientist/mcp/server.py`
  - maybe `src/deepscientist/runners/codex.py`
  - tests: `test_mcp_servers.py`
- new skill:
  - `src/skills/<skill_id>/SKILL.md`
  - maybe `src/deepscientist/skills/registry.py`
  - maybe `src/deepscientist/prompts/builder.py`
  - tests: `test_stage_skills.py`, `test_skill_contracts.py`, `test_prompt_builder.py`
- new connector:
  - `src/deepscientist/config/models.py`
  - `src/deepscientist/config/service.py`
  - maybe `src/deepscientist/connector/connector_profiles.py`
  - `src/deepscientist/bridges/*`
  - `src/deepscientist/channels/*`
  - maybe `src/deepscientist/daemon/app.py`
  - maybe `src/prompts/connectors/<connector>.md`
  - tests: connector validation + bridge + daemon/API coverage

## Documentation Rules

When behavior changes:

package/docs/en/README.md CHANGED

@@ -4,6 +4,8 @@ DeepScientist is not just a long-running autonomous scientific discovery system.
 
 2 minutes to install. 2 minutes to bind Weixin. 2 minutes to launch. Extremely fast and easy to use.
 
+Local Web access now starts without a password gate by default. If you want a generated 16-character browser password for one launch, run `ds --auth true`; DeepScientist then prints the password in the terminal and the browser can reuse the stored login after the first successful entry.
+
 It is also a workshop-style collaboration environment: let it keep moving autonomously, or step in anytime to collaborate, edit code, run the terminal yourself, or keep notes and plans in a Notion-style workspace.
 
 Use DeepScientist anywhere: on the server through TUI, in the browser through Web, on the phone through Weixin or QQ, and even on glasses through Rokid Glasses.

@@ -30,10 +32,16 @@ This page is the shortest path to the right document.
 
 - [00 Quick Start](./00_QUICK_START.md)
   Start here if you want to install DeepScientist, launch it locally, and create your first project.
+- [20 Workspace Modes Guide](./20_WORKSPACE_MODES_GUIDE.md)
+  Read this if you want to choose correctly between Copilot and Autonomous before creating a project.
+- [19 Local Browser Auth](./19_LOCAL_BROWSER_AUTH.md)
+  Read this if you want to understand the local password prompt, where to find the password, and how to disable it.
 - [05 TUI Guide](./05_TUI_GUIDE.md)
   Read this if your main surface is the terminal and you want one end-to-end path through `ds --tui`, quests, connectors, and cross-surface work.
 - [15 Codex Provider Setup](./15_CODEX_PROVIDER_SETUP.md)
-  Read this when you want to run DeepScientist through MiniMax, GLM, Volcengine Ark, Alibaba Bailian, or another Codex profile.
+  Read this when you want to run DeepScientist through MiniMax, GLM, Volcengine Ark, Alibaba Bailian Coding Plan, or another Codex profile.
+- [21 Local Model Backends Guide](./21_LOCAL_MODEL_BACKENDS_GUIDE.md)
+  Read this if you want to run DeepScientist through local OpenAI-compatible backends such as vLLM, Ollama, or SGLang.
 - [12 Guided Workflow Tour](./12_GUIDED_WORKFLOW_TOUR.md)
   Follow the real product flow from landing page to workspace, step by step.
 - [02 Start Research Guide](./02_START_RESEARCH_GUIDE.md)

@@ -43,6 +51,8 @@ This page is the shortest path to the right document.
 
 - [02 Start Research Guide](./02_START_RESEARCH_GUIDE.md)
   Explains the current frontend fields, derived contract fields, and practical examples.
+- [20 Workspace Modes Guide](./20_WORKSPACE_MODES_GUIDE.md)
+  Use this when the main question is not “how do I fill the form?” but “should this project start as Copilot or Autonomous?”.
 - [01 Settings Reference](./01_SETTINGS_REFERENCE.md)
   Use this when you need to configure runners, connectors, runtime defaults, or home paths.
 - [11 License And Risk Notice](./11_LICENSE_AND_RISK.md)

@@ -73,6 +83,8 @@ This page is the shortest path to the right document.
   Explains how the daemon, workspace, canvas, and connector views fit together.
 - [07 Memory and MCP](./07_MEMORY_AND_MCP.md)
   Explains memory, artifacts, and built-in MCP behavior.
+- [19 External Controller Guide](./19_EXTERNAL_CONTROLLER_GUIDE.md)
+  Shows how to build optional outer-orchestration guards on top of mailbox and `quest_control` without patching core runtime code.
 
 ## If something is broken
 

@@ -80,6 +92,8 @@ This page is the shortest path to the right document.
   Start here for diagnostics and common runtime problems.
 - [15 Codex Provider Setup](./15_CODEX_PROVIDER_SETUP.md)
   Check this if the problem is likely in your Codex profile, provider endpoint, API key, or model configuration.
+- [21 Local Model Backends Guide](./21_LOCAL_MODEL_BACKENDS_GUIDE.md)
+  Check this if the problem is specifically about local OpenAI-compatible backends and whether they support `/v1/responses`.
 - [01 Settings Reference](./01_SETTINGS_REFERENCE.md)
   Check this if the problem is likely caused by config, credentials, or connector setup.
 

@@ -88,4 +102,12 @@ This page is the shortest path to the right document.
 - [90 Architecture](./90_ARCHITECTURE.md)
   High-level system contracts and repository structure.
 - [91 Development](./91_DEVELOPMENT.md)
-  Maintainer-facing workflow and
+  Maintainer-facing workflow, implementation notes, and the concrete checklists for adding MCP tools, skills, and connectors.
+
+## Community
+
+Welcome to join the WeChat group for discussion.
+
+<p align="center">
+  <img src="../../assets/readme/wechat4.jpg" alt="DeepScientist WeChat group" width="360" />
+</p>