@researai/deepscientist 1.5.16 → 1.5.17

Files changed (82)
  1. package/README.md +66 -23
  2. package/bin/ds.js +550 -19
  3. package/docs/en/00_QUICK_START.md +65 -5
  4. package/docs/en/01_SETTINGS_REFERENCE.md +1 -1
  5. package/docs/en/09_DOCTOR.md +14 -3
  6. package/docs/en/15_CODEX_PROVIDER_SETUP.md +12 -3
  7. package/docs/en/21_LOCAL_MODEL_BACKENDS_GUIDE.md +283 -0
  8. package/docs/en/91_DEVELOPMENT.md +237 -0
  9. package/docs/en/README.md +7 -3
  10. package/docs/zh/00_QUICK_START.md +54 -5
  11. package/docs/zh/01_SETTINGS_REFERENCE.md +1 -1
  12. package/docs/zh/09_DOCTOR.md +15 -4
  13. package/docs/zh/15_CODEX_PROVIDER_SETUP.md +12 -3
  14. package/docs/zh/21_LOCAL_MODEL_BACKENDS_GUIDE.md +281 -0
  15. package/docs/zh/README.md +7 -3
  16. package/install.sh +46 -4
  17. package/package.json +2 -1
  18. package/pyproject.toml +1 -1
  19. package/src/deepscientist/__init__.py +1 -1
  20. package/src/deepscientist/bridges/connectors.py +8 -2
  21. package/src/deepscientist/codex_cli_compat.py +185 -72
  22. package/src/deepscientist/config/service.py +154 -6
  23. package/src/deepscientist/daemon/api/handlers.py +130 -25
  24. package/src/deepscientist/daemon/api/router.py +5 -0
  25. package/src/deepscientist/daemon/app.py +446 -22
  26. package/src/deepscientist/diagnostics/__init__.py +6 -0
  27. package/src/deepscientist/diagnostics/runner_failures.py +130 -0
  28. package/src/deepscientist/doctor.py +207 -3
  29. package/src/deepscientist/prompts/builder.py +22 -4
  30. package/src/deepscientist/quest/service.py +413 -13
  31. package/src/deepscientist/runners/codex.py +59 -14
  32. package/src/deepscientist/shared.py +19 -0
  33. package/src/prompts/contracts/shared_interaction.md +3 -2
  34. package/src/prompts/system.md +13 -0
  35. package/src/prompts/system_copilot.md +13 -0
  36. package/src/tui/package.json +1 -1
  37. package/src/ui/dist/assets/{AiManusChatView-COFACy7V.js → AiManusChatView-Bv-Z8YpU.js} +44 -44
  38. package/src/ui/dist/assets/{AnalysisPlugin-DnSm0GZn.js → AnalysisPlugin-BCKAfjba.js} +1 -1
  39. package/src/ui/dist/assets/{CliPlugin-CvwCmDQ5.js → CliPlugin-BCKcpc35.js} +4 -4
  40. package/src/ui/dist/assets/{CodeEditorPlugin-cOqSa0xq.js → CodeEditorPlugin-DbOfSJ8K.js} +1 -1
  41. package/src/ui/dist/assets/{CodeViewerPlugin-itb0tltR.js → CodeViewerPlugin-CbaFRrUU.js} +3 -3
  42. package/src/ui/dist/assets/{DocViewerPlugin-DqKkiCI6.js → DocViewerPlugin-DAjLVeQD.js} +3 -3
  43. package/src/ui/dist/assets/{GitCommitViewerPlugin-DVgNHBCS.js → GitCommitViewerPlugin-CIUqbUDO.js} +1 -1
  44. package/src/ui/dist/assets/{GitDiffViewerPlugin-DxL2ezFG.js → GitDiffViewerPlugin-CQACjoAA.js} +1 -1
  45. package/src/ui/dist/assets/{GitSnapshotViewer-B_RQm1YZ.js → GitSnapshotViewer-0r4nLPke.js} +1 -1
  46. package/src/ui/dist/assets/{ImageViewerPlugin-tHqlXY3n.js → ImageViewerPlugin-nBOmI2v_.js} +3 -3
  47. package/src/ui/dist/assets/{LabCopilotPanel-ClMbq5Yu.js → LabCopilotPanel-BHxOxF4z.js} +1 -1
  48. package/src/ui/dist/assets/{LabPlugin-L_SuE8ow.js → LabPlugin-BKoZGs95.js} +1 -1
  49. package/src/ui/dist/assets/{LatexPlugin-B495DTXC.js → LatexPlugin-ZwtV8pIp.js} +1 -1
  50. package/src/ui/dist/assets/{MarkdownViewerPlugin-DG28-61B.js → MarkdownViewerPlugin-DKqVfKyW.js} +3 -3
  51. package/src/ui/dist/assets/{MarketplacePlugin-BiOGT-Kj.js → MarketplacePlugin-BwxStZ9D.js} +1 -1
  52. package/src/ui/dist/assets/{NotebookEditor-C-4Kt1p9.js → NotebookEditor-BEQhaQbt.js} +1 -1
  53. package/src/ui/dist/assets/{NotebookEditor-CVsj8h_T.js → NotebookEditor-DB9N_T9q.js} +23 -23
  54. package/src/ui/dist/assets/{PdfLoader-CASDQmxJ.js → PdfLoader-eWBONbQP.js} +1 -1
  55. package/src/ui/dist/assets/{PdfMarkdownPlugin-BFhwoKsY.js → PdfMarkdownPlugin-D22YOZL3.js} +1 -1
  56. package/src/ui/dist/assets/{PdfViewerPlugin-DcOzU9vd.js → PdfViewerPlugin-c-RK9DLM.js} +3 -3
  57. package/src/ui/dist/assets/{SearchPlugin-CHj7M58O.js → SearchPlugin-CxF9ytAx.js} +1 -1
  58. package/src/ui/dist/assets/{TextViewerPlugin-CB4DYfWO.js → TextViewerPlugin-C5xqeeUH.js} +2 -2
  59. package/src/ui/dist/assets/{VNCViewer-CjlbyCB3.js → VNCViewer-BoLGLnHz.js} +1 -1
  60. package/src/ui/dist/assets/{bot-CFkZY-JP.js → bot-DREQOxzP.js} +1 -1
  61. package/src/ui/dist/assets/{chevron-up-Dq5ofbht.js → chevron-up-C9Qpx4DE.js} +1 -1
  62. package/src/ui/dist/assets/{code-DLC6G24T.js → code-WlFHE7z_.js} +1 -1
  63. package/src/ui/dist/assets/{file-content-Dv4LoZec.js → file-content-BZMz3RYp.js} +1 -1
  64. package/src/ui/dist/assets/{file-diff-panel-Denq-lC3.js → file-diff-panel-CQhw0jS2.js} +1 -1
  65. package/src/ui/dist/assets/{file-socket-Cu4Qln7Y.js → file-socket-CfQPKQKj.js} +1 -1
  66. package/src/ui/dist/assets/{git-commit-horizontal-BUh6G52n.js → git-commit-horizontal-DxZ8DCZh.js} +1 -1
  67. package/src/ui/dist/assets/{image-B9HUUddG.js → image-Bgl4VIyx.js} +1 -1
  68. package/src/ui/dist/assets/{index-Cgla8biy.css → index-BpV6lusQ.css} +1 -1
  69. package/src/ui/dist/assets/{index-Gbl53BNp.js → index-CBNVuWcP.js} +363 -363
  70. package/src/ui/dist/assets/{index-wQ7RIIRd.js → index-CwNu1aH4.js} +1 -1
  71. package/src/ui/dist/assets/{index-B2B1sg-M.js → index-DrUnlf6K.js} +1 -1
  72. package/src/ui/dist/assets/{index-DRyx7vAc.js → index-NW-h8VzN.js} +1 -1
  73. package/src/ui/dist/assets/{pdf-effect-queue-ZtnHFCAi.js → pdf-effect-queue-J8OnM0jE.js} +1 -1
  74. package/src/ui/dist/assets/{popover-DL6h35vr.js → popover-CLc0pPP8.js} +1 -1
  75. package/src/ui/dist/assets/{project-sync-CsX08Qno.js → project-sync-C9IdzdZW.js} +1 -1
  76. package/src/ui/dist/assets/{select-DvmXt1yY.js → select-Cs2PmzwL.js} +1 -1
  77. package/src/ui/dist/assets/{sigma-7jpXazui.js → sigma-ClKcHAXm.js} +1 -1
  78. package/src/ui/dist/assets/{trash-xA7kFt8i.js → trash-DwpbFr3w.js} +1 -1
  79. package/src/ui/dist/assets/{useCliAccess-DsMwDjOp.js → useCliAccess-NQ8m0Let.js} +1 -1
  80. package/src/ui/dist/assets/{wrap-text-CwMn-iqb.js → wrap-text-BC-Hltpd.js} +1 -1
  81. package/src/ui/dist/assets/{zoom-out-R-GWEhzS.js → zoom-out-E_gaeAxL.js} +1 -1
  82. package/src/ui/dist/index.html +2 -2
@@ -37,7 +37,7 @@ Prepare these first:
 
 - Node.js `>=18.18` and npm `>=9`; install them from the official download page: https://nodejs.org/en/download
 - one working Codex path:
- - default OpenAI login path: `codex --login` (or `codex`)
+ - default OpenAI login path: `codex login` (or just `codex`)
  - provider-backed path: one working Codex profile such as `minimax`, `glm`, `ark`, or `bailian`
 - a model or API credential if your project needs external inference
 - GPU or server access if your experiments are compute-heavy
@@ -47,6 +47,7 @@ Prepare these first:
 
 If you are still choosing a coding plan or subscription, these are practical starting points:
 
+ - If you just want one simple starting recommendation, start with GPT-5.4 using `xhigh` reasoning effort, or Gemini 3 Pro using `gemini-3-pro-preview`.
 - ChatGPT pricing: https://openai.com/chatgpt/pricing/
 - ChatGPT Plus help: https://help.openai.com/en/articles/6950777-what-is-chatgpt-plus%3F
 - MiniMax Coding Plan: https://platform.minimaxi.com/docs/guides/pricing-codingplan
@@ -54,6 +55,8 @@ If you are still choosing a coding plan or subscription, these are practical sta
 - Alibaba Cloud Bailian Coding Plan: https://help.aliyun.com/zh/model-studio/coding-plan
 - Volcengine Ark Coding Plan: https://www.volcengine.com/docs/82379/1925115?lang=zh
 
+ If you plan to use Qwen through Alibaba Bailian, use the Bailian **Coding Plan** endpoint only. The generic Bailian or DashScope Qwen API is not supported in the Codex-backed DeepScientist path.
+
 If you plan to use a provider-backed Codex profile instead of the default OpenAI login flow, read this next:
 
 - [15 Codex Provider Setup](./15_CODEX_PROVIDER_SETUP.md)
@@ -89,7 +92,7 @@ If you want the most reliable path, verify the command immediately:
 
 ```bash
 which codex
- codex --login
+ codex login
 ```
 
 If `which codex` prints nothing, the issue is usually the npm global bin path rather than DeepScientist itself. Fix the shell PATH first, then rerun `npm install -g @openai/codex`.
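If the PATH is the culprit, the usual fix looks like the sketch below. This is a bash example, not a DeepScientist command; the `/usr/local` fallback is only a guess for when `npm` itself is unreachable, and zsh users should target `~/.zshrc` instead:

```shell
# Locate npm's global bin directory and put it on PATH (bash example).
# /usr/local is a guessed fallback for when npm itself is not found.
NPM_BIN="$(npm prefix -g 2>/dev/null || echo /usr/local)/bin"
export PATH="$NPM_BIN:$PATH"
# Persist for future sessions; use ~/.zshrc instead on zsh.
echo "export PATH=\"$NPM_BIN:\$PATH\"" >> ~/.bashrc
command -v codex || echo "codex still missing; reinstall with: npm install -g @openai/codex"
```

After this, `which codex` should print a path inside `$NPM_BIN`.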
@@ -111,10 +114,10 @@ Choose one of these two paths.
 Run:
 
 ```bash
- codex --login
+ codex login
 ```
 
- If your Codex CLI version does not expose `--login`, run:
+ If you prefer the interactive first-run flow, run:
 
 ```bash
 codex
@@ -130,7 +133,7 @@ ds doctor
 
 ### 2.2 Provider-backed Codex profile path
 
- If you already use a named Codex profile for MiniMax, GLM, Volcengine Ark, Alibaba Bailian, or another provider-backed path, verify that profile first in a terminal:
+ If you already use a named Codex profile for MiniMax, GLM, Volcengine Ark, Alibaba Bailian Coding Plan, or another provider-backed path, verify that profile first in a terminal:
 
 ```bash
 codex --profile m27
@@ -204,6 +207,18 @@ ds --here
 
 This is equivalent to `ds --home "$PWD/DeepScientist"`.
 
+ Important:
+ * if you start DeepScientist with `ds --here` or an explicit `--home <path>`, later management commands such as `ds --status` and `ds --stop` should use the same DeepScientist home
+ * using the same `DEEPSCIENTIST_HOME` or `DS_HOME` environment variable for those commands is also fine
+ * otherwise, the CLI may fall back to the default `~/DeepScientist`, which can make a reachable daemon look like an unverified one
+
+ For example, when using a non-default home, run:
+
+ ```bash
+ ds --status --home /path/to/DeepScientist
+ ds --stop --home /path/to/DeepScientist
+ ```
+
 If you want another port, run:
 
 ```bash
@@ -421,6 +436,12 @@ Check status:
 ds --status
 ```
 
+ If you started DeepScientist with a non-default home, specify it explicitly:
+
+ ```bash
+ ds --status --home /path/to/DeepScientist
+ ```
+
 This shows whether the local runtime is up.
 
 Stop the daemon:
@@ -429,8 +450,47 @@ Stop the daemon:
 ds --stop
 ```
 
+ If you started DeepScientist with a non-default home, specify it explicitly:
+
+ ```bash
+ ds --stop --home /path/to/DeepScientist
+ ```
+
 This stops the local DeepScientist daemon.
 
+ Uninstall code and runtime, but keep local data:
+
+ ```bash
+ ds uninstall
+ ```
+
+ If you started DeepScientist with a non-default home, specify it explicitly:
+
+ ```bash
+ ds uninstall --home /path/to/DeepScientist --yes
+ ```
+
+ This removes launcher wrappers, local runtime code, and install-local code trees, but preserves:
+
+ - `quests/`
+ - `memory/`
+ - `config/`
+ - `logs/`
+ - `plugins/`
+ - `cache/`
+
+ If you installed DeepScientist from npm and also want to remove the global npm package itself, run this after `ds uninstall`:
+
+ ```bash
+ npm uninstall -g @researai/deepscientist
+ ```
+
+ If you really want to delete local data too, remove the DeepScientist home manually after uninstall:
+
+ ```bash
+ rm -rf /path/to/DeepScientist
+ ```
+
 Run diagnostics:
 
 ```bash
@@ -465,7 +465,7 @@ claude:
 - `Test` behavior: checks whether the binary is on `PATH`.
 - Resolution order for `codex`: env override, explicit path, local `PATH`, then bundled fallback.
 - One-off note: you can temporarily override this with `ds --codex /absolute/path/to/codex`.
- - First-run note: DeepScientist does not finish Codex authentication for you. Before the first `ds`, make sure `codex --login` (or `codex`) has completed successfully.
+ - First-run note: DeepScientist does not finish Codex authentication for you. Before the first `ds`, make sure `codex login` (or just `codex`) has completed successfully.
 - Repair note: if the bundled dependency is missing after `npm install -g @researai/deepscientist`, install Codex explicitly with `npm install -g @openai/codex`.
 
 **`config_dir`**
@@ -15,7 +15,7 @@ Use `ds doctor` when DeepScientist does not start cleanly after installation.
 Default OpenAI path:
 
 ```bash
- codex --login
+ codex login
 ```
 
 Provider-backed profile path:
@@ -55,10 +55,18 @@ Use `ds doctor` when DeepScientist does not start cleanly after installation.
 - whether required config files are valid
 - whether the current release is still using `codex` as the runnable runner
 - whether the Codex CLI can be found and passes a startup probe
+ - whether a recent quest runtime failure already points to a known provider / protocol / retry problem
 - whether an optional local `pdflatex` runtime is available for paper PDF compilation
 - whether the web and TUI bundles exist
 - whether the configured web port is free or already running the correct daemon
 
+ `ds doctor` now tries to render failed checks in a more operational form:
+
+ - `Problem`: what failed
+ - `Why`: why DeepScientist believes it failed
+ - `Fix`: the concrete next steps to try
+ - `Evidence`: the quest/run/request clues that matched the diagnosis
+
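The Problem/Why/Fix/Evidence shape is easy to picture with a toy renderer. This sketch is illustrative only; the function name and sample texts are not DeepScientist internals:

```shell
# print_diagnosis: render a failed check in the Problem/Why/Fix/Evidence
# shape described above. Purely illustrative; not DeepScientist code.
print_diagnosis() {
  printf 'Problem:  %s\n' "$1"
  printf 'Why:      %s\n' "$2"
  printf 'Fix:      %s\n' "$3"
  printf 'Evidence: %s\n' "$4"
}

print_diagnosis \
  "Codex CLI failed the startup probe" \
  "codex binary not found on PATH" \
  "npm install -g @openai/codex, then rerun ds doctor" \
  "which codex printed nothing"
```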
 ## Common fixes
 
 ### Codex is missing
@@ -80,10 +88,10 @@ npm install -g @openai/codex
 Run:
 
 ```bash
- codex --login
+ codex login
 ```
 
- If your Codex CLI version does not expose `--login`, run `codex` and finish the interactive setup there.
+ If you prefer the interactive first-run flow, run `codex` and finish the setup there.
 
 Finish login once, then rerun `ds doctor`.
 
@@ -109,6 +117,7 @@ Also check:
 
 - the same shell still exports the provider API key
 - the profile points at the provider's Coding Plan endpoint, not the generic API endpoint
+ - if you are using Qwen through Alibaba Bailian, use the Bailian Coding Plan endpoint only; the generic Bailian or DashScope Qwen API is not supported here
 - `~/DeepScientist/config/runners.yaml` uses `model: inherit` if the provider expects the model to come from the profile itself
 
 MiniMax-specific note:
@@ -125,6 +134,8 @@ MiniMax-specific note:
 - DeepScientist also strips conflicting `OPENAI_*` auth variables automatically for providers that set `requires_openai_auth = false`
 - if you also want plain terminal `codex --profile <name>` to work directly, put `model_provider = "minimax"` and the matching top-level model such as `MiniMax-M2.7` or `MiniMax-M2.5` in `~/.codex/config.toml`
 - DeepScientist automatically downgrades `xhigh` to `high` when it detects a Codex CLI older than `0.63.0`
+ - if the provider returns `tool call result does not follow tool call (2013)`, treat it as a request-ordering/protocol error rather than a transient network failure
+ - if the provider returns malformed tool-call argument errors such as `invalid function arguments json string` or `failed to parse tool call arguments`, fix the tool-call serialization path before retrying
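Those two error rules amount to a tiny triage function. This is an illustrative sketch, not DeepScientist's internal retry logic; only the matched strings come from the bullets above:

```shell
# classify_provider_error: sort a provider error message into a triage bucket.
# "protocol" and "serialization" errors should not be blindly retried;
# anything else is treated as possibly transient. Illustrative only.
classify_provider_error() {
  case "$1" in
    *"tool call result does not follow tool call (2013)"*)
      echo "protocol" ;;       # request-ordering bug: fix the client first
    *"invalid function arguments json string"*|*"failed to parse tool call arguments"*)
      echo "serialization" ;;  # malformed tool-call payload: fix before retrying
    *)
      echo "transient" ;;      # default: a retry may help
  esac
}
```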
 
 ### The configured Codex model is unavailable
 
@@ -2,6 +2,8 @@
 
 DeepScientist does not implement separate provider adapters for MiniMax, GLM, Volcengine Ark, or Alibaba Bailian.
 
+ For Qwen on Alibaba Bailian, DeepScientist only supports the **Coding Plan** path. The generic Bailian or DashScope Qwen platform API is not supported here.
+
 Instead, it reuses the same Codex CLI setup that already works in your terminal.
 
 The recommended order is always:
@@ -18,7 +20,7 @@ The recommended order is always:
 Use this when your Codex CLI works through the standard OpenAI login flow.
 
 ```bash
- codex --login
+ codex login
 ds doctor
 ds
 ```
@@ -62,6 +64,7 @@ Important:
 
 - keep `model: inherit` for provider-backed Codex profiles unless you are certain the provider accepts the explicit model id you plan to send
 - DeepScientist now launches Codex from an isolated runtime home under `.ds/codex-home`, but that runtime copy inherits your configured `~/.codex` auth, config, skills, agents, and prompts first
+ - if the active provider uses `wire_api = "chat"`, DeepScientist now auto-checks that the selected Codex binary is exactly `0.57.0` during the startup probe
 
 ## Provider matrix
 
@@ -71,14 +74,14 @@ Important:
 | MiniMax | [MiniMax Codex CLI](https://platform.minimaxi.com/docs/coding-plan/codex-cli) | No | your Codex profile, for example `ds --codex-profile m27` |
 | GLM | [GLM Coding Plan: Other Tools](https://docs.bigmodel.cn/cn/coding-plan/tool/others) | No | a Codex profile that targets the GLM coding endpoint |
 | Volcengine Ark | [Ark Coding Plan Overview](https://www.volcengine.com/docs/82379/1925114?lang=zh) | No | a Codex profile that targets the Ark coding endpoint |
- | Alibaba Bailian | [Bailian Coding Plan: Other Tools](https://help.aliyun.com/zh/model-studio/other-tools-coding-plan) | No | a Codex profile that targets the Bailian coding endpoint |
+ | Alibaba Bailian | [Bailian Coding Plan: Other Tools](https://help.aliyun.com/zh/model-studio/other-tools-coding-plan) | No | a Codex profile that targets the Bailian Coding Plan endpoint; do not use the generic Bailian or DashScope Qwen API |
 
 ## OpenAI
 
 ### What to prepare
 
 - a normal Codex CLI install
- - a successful `codex --login` or `codex` interactive first-run setup
+ - a successful `codex login` or `codex` interactive first-run setup
 
 ### DeepScientist commands
 
@@ -201,6 +204,7 @@ What DeepScientist supports now:
 - if you use this profile-only MiniMax config with Codex CLI `0.57.0`, DeepScientist automatically promotes the selected profile's `model_provider` and `model` to the top level inside its probe/runtime copy of `config.toml`
 - DeepScientist forces provider-backed MiniMax runs to use `model: inherit`, so it does not accidentally override the profile with a hard-coded OpenAI model
 - when `requires_openai_auth = false`, DeepScientist strips conflicting `OPENAI_API_KEY` and `OPENAI_BASE_URL` values from the probe/runtime environment
+ - for chat-wire provider sessions such as MiniMax on Codex CLI `0.57.0`, DeepScientist now injects a compatibility guard that tells Codex to serialize MCP tool calls one at a time instead of bundling multiple tool calls into the same response
 - this means DeepScientist can start even when plain terminal `codex --profile m27` still fails on that exact profile-only shape
 
 If you want plain terminal `codex --profile <name>` to work too, use the explicit top-level compatibility form instead:
@@ -350,6 +354,11 @@ codex:
 
 Bailian documents Coding Plan as an OpenAI-compatible coding endpoint. It requires the Coding Plan-specific key and endpoint, not the generic platform endpoint.
 
+ For Qwen specifically:
+
+ - supported: Qwen through the Bailian **Coding Plan** endpoint
+ - not supported: the generic Bailian or DashScope Qwen platform API
+
 Official docs:
 
 - <https://help.aliyun.com/zh/model-studio/other-tools-coding-plan>
@@ -0,0 +1,283 @@
+ # 21 Local Model Backends Guide: vLLM, Ollama, and SGLang
+
+ This guide explains how to run DeepScientist against a local OpenAI-compatible model backend through Codex.
+
+ The key point is simple:
+
+ - current Codex CLI requires `wire_api = "responses"`
+ - a backend that only works through `/v1/chat/completions` is not enough
+ - you must verify `/v1/responses` before expecting `ds` or `ds doctor` to succeed
+
+ There is one practical fallback:
+
+ - if your backend is chat-only, you may still be able to use it through **Codex CLI `0.57.0`**
+ - that older path can still work with `wire_api = "chat"` when the provider is configured at the top level
+ - DeepScientist now checks this automatically during the Codex startup probe; if it sees `wire_api = "chat"` on any active provider config, it requires `codex-cli 0.57.0` before continuing
+
+ ## 1. What DeepScientist actually depends on
+
+ DeepScientist does not talk to vLLM, Ollama, or SGLang directly.
+
+ It talks to:
+
+ - `codex`
+ - and `codex` talks to your configured provider profile in `~/.codex/config.toml`
+
+ So the compatibility chain is:
+
+ 1. your local backend
+ 2. Codex profile
+ 3. Codex startup probe
+ 4. DeepScientist runner
+
+ If step 2 or step 3 fails, DeepScientist cannot start the Codex runner successfully.
+
+ ## 2. The current Codex rule you must know
+
+ On the current Codex CLI:
+
+ - `wire_api = "responses"` is supported
+ - `wire_api = "chat"` is rejected
+
+ In practice that means:
+
+ - `vLLM`: recommended if its OpenAI-compatible server exposes `/v1/responses`
+ - `Ollama`: only use it if your installed version exposes `/v1/responses`
+ - `SGLang`: if your deployment only supports `/v1/chat/completions`, it is not compatible with the latest Codex runner
+
+ ## 2.1 Support summary
+
+ | Backend | `/v1/chat/completions` | `/v1/responses` | Latest Codex | Codex `0.57.0` fallback |
+ |---|---|---|---|---|
+ | vLLM | yes | yes | supported | usually unnecessary |
+ | Ollama | yes | depends on version | supported only when `/v1/responses` works | possible if it is chat-only |
+ | SGLang | yes | often missing or incomplete | not supported when it is chat-only | possible fallback path |
+
+ ## 3. Test the backend first
+
+ Before touching DeepScientist, verify the backend directly.
+
+ ### Step 1: list models
+
+ ```bash
+ curl http://127.0.0.1:8004/v1/models \
+   -H "Authorization: Bearer 1234"
+ ```
+
+ You need one real model id from this output, for example:
+
+ ```text
+ /model/gpt-oss-120b
+ ```
+
+ ### Step 2: test chat completions
+
+ ```bash
+ curl http://127.0.0.1:8004/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -H "Authorization: Bearer 1234" \
+   -d '{
+     "model": "/model/gpt-oss-120b",
+     "messages": [
+       { "role": "user", "content": "Reply with exactly HELLO." }
+     ]
+   }'
+ ```
+
+ If this works, the backend is at least OpenAI-chat-compatible.
+
+ ### Step 3: test Responses API
+
+ ```bash
+ curl http://127.0.0.1:8004/v1/responses \
+   -H "Content-Type: application/json" \
+   -H "Authorization: Bearer 1234" \
+   -d '{
+     "model": "/model/gpt-oss-120b",
+     "input": "Reply with exactly HELLO."
+   }'
+ ```
+
+ This is the decisive test.
+
+ If `/v1/responses` fails, the latest Codex CLI will not work with this backend profile.
+
+ ## 4. What we actually observed on this server
+
+ We tested the local backend at `http://127.0.0.1:8004/v1`.
+
+ Observed behavior:
+
+ - `GET /v1/models` succeeded
+ - `POST /v1/chat/completions` succeeded
+ - `POST /v1/responses` returned `500 Internal Server Error`
+ - the `/v1/models` payload reported `owned_by: "sglang"`
+
+ So this specific `8004` deployment behaves like a chat-compatible SGLang-style server, not a Codex-compatible Responses backend.
+
+ That means:
+
+ - it can answer raw chat requests
+ - but it cannot currently be used by the latest Codex runner
+ - and therefore DeepScientist cannot use it through the normal Codex path
+
+ We also tested an older Codex path:
+
+ - latest Codex + `wire_api = "responses"` failed against this backend
+ - Codex `0.57.0` + top-level `model_provider` / `model` + `wire_api = "chat"` succeeded
+
+ So for this server specifically:
+
+ - **latest Codex path**: no
+ - **Codex `0.57.0` fallback**: yes
+
+ ## 5. Codex profile example for a local Responses-compatible backend
+
+ If your backend really supports `/v1/responses`, create a profile like this:
+
+ ```toml
+ [model_providers.local_vllm]
+ name = "local_vllm"
+ base_url = "http://127.0.0.1:8004/v1"
+ env_key = "LOCAL_API_KEY"
+ wire_api = "responses"
+ requires_openai_auth = false
+
+ [profiles.local_vllm]
+ model = "/model/gpt-oss-120b"
+ model_provider = "local_vllm"
+ ```
+
+ Then test Codex directly first:
+
+ ```bash
+ export LOCAL_API_KEY=1234
+ codex exec --profile local_vllm --json --cd /tmp --skip-git-repo-check - <<'EOF'
+ Reply with exactly HELLO.
+ EOF
+ ```
+
+ If this fails, do not continue to DeepScientist yet.
+
+ ## 5.1 Chat-only fallback for Codex `0.57.0`
+
+ If your backend only supports `/v1/chat/completions`, you can try this fallback path:
+
+ 1. install Codex `0.57.0`
+ 2. use `wire_api = "chat"`
+ 3. put `model_provider` and `model` at the top level
+
+ Example:
+
+ ```toml
+ model = "/model/gpt-oss-120b"
+ model_provider = "localchat"
+ approval_policy = "never"
+ sandbox_mode = "workspace-write"
+
+ [model_providers.localchat]
+ name = "localchat"
+ base_url = "http://127.0.0.1:8004/v1"
+ env_key = "LOCAL_API_KEY"
+ wire_api = "chat"
+ requires_openai_auth = false
+ ```
+
+ Then test:
+
+ ```bash
+ export LOCAL_API_KEY=1234
+ codex exec --json --cd /tmp --skip-git-repo-check - <<'EOF'
+ Reply with exactly HELLO.
+ EOF
+ ```
+
+ If this older Codex path works, DeepScientist can usually reuse it with the same runner binary and profile strategy.
+
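Since this fallback only works on Codex `0.57.0` exactly, it is worth checking the installed version before wiring up the profile. A minimal sketch, assuming `codex --version` prints a semver somewhere in its output (the function name and parsing are illustrative, not DeepScientist's probe):

```shell
# check_codex_chat_wire: succeed only when the installed Codex CLI is exactly
# 0.57.0, the version the chat-wire fallback above requires. Illustrative only;
# adjust the version parsing if your `codex --version` output differs.
check_codex_chat_wire() {
  local ver
  ver="$(codex --version 2>/dev/null | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1)"
  [ "$ver" = "0.57.0" ]
}

if check_codex_chat_wire; then
  echo "codex-cli 0.57.0 detected: chat-wire fallback is possible"
else
  echo "chat-wire fallback needs codex-cli 0.57.0 exactly"
fi
```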
+ ## 6. DeepScientist commands after Codex works
+
+ Once the direct Codex check works, run:
+
+ ```bash
+ ds doctor --codex-profile local_vllm
+ ds --codex-profile local_vllm
+ ```
+
+ `ds doctor` is the canonical command.
+
+ `ds docker` is only a legacy alias for `ds doctor`; it is not a Docker deployment command.
+
+ If you want to persist it in DeepScientist:
+
+ ```yaml
+ codex:
+   enabled: true
+   binary: codex
+   config_dir: ~/.codex
+   profile: local_vllm
+   model: inherit
+   model_reasoning_effort: high
+   approval_policy: never
+   sandbox_mode: danger-full-access
+ ```
+
+ ## 7. Backend compatibility summary
+
+ ### vLLM
+
+ Recommended.
+
+ Use it when:
+
+ - `/v1/models` works
+ - `/v1/responses` works
+ - the model id is visible and stable
+
+ If those are true, vLLM is the cleanest current local path for Codex + DeepScientist.
+
+ ### Ollama
+
+ Conditionally supported.
+
+ Use it only when:
+
+ - your Ollama version exposes `/v1/responses`
+ - your target model works through that endpoint
+
+ If Ollama only gives you chat-completions semantics, it is not enough for the latest Codex CLI, but it may still be usable through Codex `0.57.0`.
+
+ ### SGLang
+
+ Be careful.
+
+ If your SGLang deployment behaves like this:
+
+ - `/v1/chat/completions` works
+ - `/v1/responses` fails
+
+ then it is not currently compatible with the latest Codex runner.
+
+ If you must use that backend anyway, the realistic fallback is Codex `0.57.0` with `wire_api = "chat"`.
+
+ ## 8. What to do if you only have chat-completions
+
+ If your backend only supports `/v1/chat/completions`, you currently have four practical options:
+
+ 1. switch to a Responses-compatible backend such as vLLM
+ 2. upgrade to an Ollama release that really exposes `/v1/responses`
+ 3. downgrade the Codex CLI path to `0.57.0` and use `wire_api = "chat"`
+ 4. place a Responses-compatible proxy in front of the backend
+
+ Right now, this is a Codex CLI limitation, not a DeepScientist configuration mistake.
+
+ ## 9. Recommended workflow
+
+ Use this order every time:
+
+ 1. test `/v1/models`
+ 2. test `/v1/responses`
+ 3. test `codex exec --profile <name>`
+ 4. test `ds doctor --codex-profile <name>`
+ 5. only then launch `ds --codex-profile <name>`
+
+ If step 2 fails, stop there. Do not expect DeepScientist to succeed through the latest Codex path.
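Steps 1 and 2 of that workflow can be wrapped in a small preflight helper. This is a sketch reusing the placeholder base URL, token, and model id from this guide's examples; the function name is illustrative and not a DeepScientist command:

```shell
# backend_preflight: probe /v1/models and /v1/responses on an
# OpenAI-compatible backend before pointing a Codex profile at it.
# Fails fast as soon as a decisive endpoint is unusable.
backend_preflight() {
  local base="$1" token="$2"
  curl -fsS --max-time 5 "$base/v1/models" \
    -H "Authorization: Bearer $token" > /dev/null \
    || { echo "FAIL: $base/v1/models unreachable"; return 1; }
  curl -fsS --max-time 5 "$base/v1/responses" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $token" \
    -d '{"model": "/model/gpt-oss-120b", "input": "Reply with exactly HELLO."}' > /dev/null \
    || { echo "FAIL: $base/v1/responses not usable; latest Codex will not work"; return 1; }
  echo "OK: backend looks Responses-compatible"
}

# Example against the placeholder endpoint and key used in this guide:
backend_preflight "http://127.0.0.1:8004" "1234" || true
```

If the helper stops at `/v1/responses`, apply the section 8 options above before touching `ds`.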