@dobby.ai/dobby 0.3.0 → 0.4.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +51 -431
- package/dist/src/agent/event-forwarder.js +6 -6
- package/dist/src/cli/commands/connector.js +77 -0
- package/dist/src/cli/commands/cron.js +95 -36
- package/dist/src/cli/commands/start.js +52 -0
- package/dist/src/cli/program.js +13 -0
- package/dist/src/core/connector-status.js +132 -0
- package/dist/src/core/connector-supervisor.js +24 -3
- package/dist/src/core/gateway.js +79 -20
- package/package.json +1 -1
package/README.md
CHANGED
@@ -1,482 +1,102 @@
  # dobby
  
- 
- 
- Extension packages maintained in this repository:
- 
- - `@dobby.ai/connector-discord`
- - `@dobby.ai/connector-feishu`
- - `@dobby.ai/provider-pi`
- - `@dobby.ai/provider-codex-cli`
- - `@dobby.ai/provider-claude-cli`
- - `@dobby.ai/provider-claude`
- - `@dobby.ai/sandbox-core`
- 
- The docs now treat `@dobby.ai/*` as the default; the legacy `@dobby/*` packages are no longer the recommended configuration.
- 
- ## Core capabilities
- 
- - connector source -> binding -> route -> provider / sandbox
- - Discord channel / thread intake; thread messages still match bindings via the parent channel
- - Feishu long-connection message intake (self-built app)
- - Feishu outbound supports plain text and Markdown cards; messages post directly in the group by default, without a reply thread
- - conversation-level runtime reuse and serialized execution
- - extension store install, enable, list, and schema-driven configuration
- - Discord streaming replies, typing indicator, attachment download, and image input
- - cron scheduling: one-shot, fixed interval, cron expression
- - interactive initialization: `dobby init` (supports multiple provider / connector starters)
- - config inspection and schema inspect: `dobby config show|list|schema`
- - diagnostics and conservative fixes: `dobby doctor [--fix]`
- 
- ## Architecture overview
- 
- ```text
- Discord / Cron
- -> Connector
- -> Gateway
- -> Dedup / Control Commands / Binding Resolver / Route Resolver
- -> Runtime Registry
- -> Provider Runtime
- -> Sandbox Executor
- -> Event Forwarder
- -> Connector Reply
- ```
- 
- Main directories:
- 
- - `src/cli`: the CLI program and its subcommands
- - `src/core`: the gateway main flow, routing, dedup, runtime registry
- - `src/extension`: extension store, manifest parsing, extension loading and instantiation
- - `src/cron`: scheduled-job configuration, persistence, and scheduling
- - `src/sandbox`: the host executor interface and `HostExecutor`
- - `plugins/*`: locally maintained extension sources
- - `config/*.example.json`: example configurations
- 
- Note: at runtime, extensions are loaded only from `<data.rootDir>/extensions/node_modules`; there is no fallback to the `plugins/*` source directories.
- 
- ## Requirements
- 
- - Node.js `>=20`
- - npm
- - the external prerequisites of each provider / connector you use
-   - e.g. a Discord bot token
-   - the credentials required by Codex CLI, Claude CLI, or the Claude Agent SDK
-   - optionally a Docker / Boxlite runtime
- 
- ## Quickstart
- 
- 1. Install dependencies
- 
- ```bash
- npm install
- ```
- 
- 2. Build
- 
- ```bash
- npm run build
- ```
- 
- 3. Initialize a template configuration
- 
- ```bash
- npm run start -- init
- ```
- 
- `init` does the following:
- 
- - interactively select providers and connectors (both support multi-select)
- - automatically install the selected extensions into the runtime extension store
- - write a `gateway.json` template with placeholder values
- - set `routes.default.projectRoot` to the current working directory
- - generate `bindings.default` for direct messages, falling back to the default route
- - generate one default binding per selected connector, pointing at the same route
- - generate `gateway.json`
- - `provider.pi` gets a minimal inline config by default and no longer depends on `models.custom.json`
- 
- Note: `init` currently ships these built-in starter choices:
- 
- - provider: `provider.pi`, `provider.claude-cli`
- - connector: `connector.discord`, `connector.feishu`
- 
- 4. Edit `gateway.json`
- 
- Replace the `REPLACE_WITH_*` / `YOUR_*` placeholder values with your real configuration, e.g.:
- 
- - the token / appId / appSecret in `connectors.items[*]`
- - `bindings.items[*].source.id`
- - `routes.items[*].projectRoot` (if you need to override the default project root)
+ [](https://www.npmjs.com/package/@dobby.ai/dobby)
+ [](https://www.npmjs.com/package/@dobby.ai/dobby)
  
- 
+ > A Discord-first local Agent Gateway that turns chat channels and cron jobs into a unified entry point to the agents on your machine.
  
- 
- npm run start -- doctor
- ```
- 
- `doctor` checks all of the following:
- 
- - config structure / reference integrity
- - missing extension installs
- - whether `REPLACE_WITH_*` / `YOUR_*` placeholders from `init` are still unreplaced
+ `dobby` keeps the agent running on your machine, with direct access to local repositories, credentials, and toolchains. You bind a channel or group chat to a project directory, then pick the Provider, Sandbox, and tool permissions per route; the host itself stays lightweight and only handles the CLI, routing, extension loading, session reuse, and scheduling.
  
- 
+ ## What is dobby
  
- 
- 
- 
+ - Local execution; it does not ship your repositories to a remote control plane.
+ - IM entry points are unified as `binding -> route -> runtime`, switching project and Provider per channel.
+ - Providers / Connectors / Sandboxes are all extensions; the packages maintained in this repository use `@dobby.ai/*`.
+ - The same pipeline serves both chat messages and cron scheduled jobs.
  
- 
+ ## Quickstart
  
- 
- - `dobby --version` prints the current CLI version
- - when run inside the repository, the CLI automatically uses `./config/gateway.json`
- - when running `init` / `extension install` inside the repository, locally built artifacts from `plugins/*` are installed preferentially
- - the config path can also be overridden via an environment variable:
+ Requirements: Node.js `>=20`, npm, and the authentication environment for your chosen Connectors / Providers.
  
  ```bash
- 
- ```
- 
- ## Config file paths
- 
- Gateway config path priority:
- 
- 1. `DOBBY_CONFIG_PATH`
- 2. `./config/gateway.json`, when walking up from the current directory finds a dobby repository
- 3. the default `~/.dobby/gateway.json`
- 
- Cron config path priority:
- 
- 1. `--cron-config`
- 2. `DOBBY_CRON_CONFIG_PATH`
- 3. `cron.json` in the same directory as the gateway config
- 4. `<data.rootDir>/state/cron.config.json`
- 
- If the cron config file does not exist, a default file is generated at startup.
- 
- ## Runtime directories
- 
- `data.rootDir` defaults to `./data`. If the config file is the in-repo `./config/gateway.json`, it resolves relative to the repository root; otherwise it resolves relative to the config file's directory. After loading, these directories are created:
- 
- - `sessions/`
- - `attachments/`
- - `logs/`
- - `state/`
- - `extensions/`
- 
- The actual extension store path is:
- 
- ```text
- <data.rootDir>/extensions/node_modules/*
- ```
- 
- ## CLI overview
- 
- Top-level commands:
- 
- ```bash
- dobby --version
- dobby start
+ npm install -g @dobby.ai/dobby
  dobby init
- dobby doctor [--fix]
- ```
- 
- Config inspection:
- 
- ```bash
- dobby config show [section] [--json]
- dobby config list [section] [--json]
- dobby config schema list [--json]
- dobby config schema show <contributionId> [--json]
- ```
- 
- For config changes, edit `gateway.json` directly, then validate with `dobby doctor` or `dobby start`.
- 
- Extension management:
- 
- ```bash
- dobby extension install <packageSpec>
- dobby extension install <packageSpec> --enable
- dobby extension uninstall <packageName>
- dobby extension list [--json]
- ```
- 
- Scheduled jobs:
- 
- ```bash
- dobby cron add <name> --prompt <text> --connector <id> --route <id> --channel <id> [--thread <id>] [--at <iso> | --every-ms <ms> | --cron <expr>] [--tz <tz>]
- dobby cron list [--json]
- dobby cron status [jobId] [--json]
- dobby cron run <jobId>
- dobby cron update <jobId> ...
- dobby cron pause <jobId>
- dobby cron resume <jobId>
- dobby cron remove <jobId>
  ```
  
- 
+ `init` currently ships these built-in starters:
  
- 
+ - Provider: `provider.pi`, `provider.claude-cli`
+ - Connector: `connector.discord`, `connector.feishu`
  
- 
- - on PRs / pushes to `main`, runs `npm ci`, `npm run plugins:install`, `npm run check`, `npm run build`, `npm run test:cli`, `npm run plugins:check`, `npm run plugins:build`
- - `.github/workflows/release.yml`
-   - runs Release Please on pushes to `main`
-   - maintains the release PR automatically whenever there are releasable commits
-   - after the release PR is merged, publishes the corresponding npm packages and creates a separate GitHub release / tag for each package
+ Then edit `config/gateway.json` and replace at least these placeholder values:
  
- 
+ - `botToken` / the Feishu credentials
+ - channel or group chat IDs
+ - the route's `projectRoot`
+ - the Provider's model, endpoint, and authentication info
  
- 
- 2. Merge into `main`
- 3. Wait for Release Please to update or create the release PR
- 4. Review and merge the release PR
- 5. After the merge, `release.yml` runs npm trusted publishing automatically
- 
- Notes:
- 
- - Before first enabling this, configure a GitHub trusted publisher in the npm dashboard for each `@dobby.ai/*` package, pointing at this repository and `.github/workflows/release.yml`
- - Creating an `npm-publish` environment in the GitHub repository is recommended; protection rules for manual approval can be added later
- - Once the automated release flow is active, version numbers are maintained by Release Please; stop running `npm version` manually
- - Manual fallback publishing from a local machine remains available via:
+ Run a diagnostic before starting:
  
  ```bash
- 
- 
+ dobby doctor
+ dobby start
  ```
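The placeholder list above maps onto the config sections described in `config/gateway.example.json`. A trimmed, illustrative `gateway.json` fragment might look like this (all IDs, tokens, paths, and the exact binding field names here are placeholders and assumptions; treat the shipped example config as authoritative):

```json
{
  "connectors": {
    "items": [
      { "id": "discord.main", "type": "connector.discord", "botToken": "YOUR_DISCORD_BOT_TOKEN" }
    ]
  },
  "routes": {
    "default": { "projectRoot": "/home/me/projects/my-repo" },
    "items": [
      { "id": "projectA", "projectRoot": "/home/me/projects/project-a" }
    ]
  },
  "bindings": {
    "items": [
      {
        "connector": "discord.main",
        "source": { "type": "channel", "id": "1234567890" },
        "route": "projectA"
      }
    ]
  }
}
```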
  
- 
- 
- Top-level structure:
- 
- - `extensions`
- - `providers`
- - `connectors`
- - `sandboxes`
- - `routes`
- - `bindings`
- - `data`
- 
- Key semantics:
- 
- - `extensions.allowList`
-   - only declares the enabled state; it does not install anything
- - `providers.default`
-   - the default provider instance ID
- - `providers.items[*].type` / `connectors.items[*].type` / `sandboxes.items[*].type`
-   - points at a contribution; the instance config is inlined in the same object
- - `routes.default`
-   - provides the route defaults for `projectRoot`, `provider`, `sandbox`, `tools`, `mentions`
- - `routes.items[*]`
-   - a route is a reusable execution profile; it inherits the default `projectRoot` and can override `provider`, `sandbox`, `tools`, `mentions`, `systemPromptFile` as needed
- - `bindings.default`
-   - the default route fallback used when a direct message matches no explicit binding
- - `bindings.items[*]`
-   - the `(connector, source.type, source.id) -> route` entry binding
- - `sandboxes.default`
-   - defaults to `host.builtin` when unspecified
- - inbound messages with no matching binding are simply ignored; only direct messages fall back to `bindings.default`
- 
- Example configurations:
- 
- - gateway: [`config/gateway.example.json`](config/gateway.example.json)
- - cron: [`config/cron.example.json`](config/cron.example.json)
- 
- `provider.pi` now uses an inline custom provider config. The minimal common fields are:
- 
- - `model`
- - `baseUrl`
- - `apiKey`
- 
- These fields are auto-filled by default:
- 
- - `provider = "custom-openai"`
- - `api = "openai-completions"`
- - `authHeader = false`
- - `thinkingLevel = "off"`
- - `models = [{ id: model }]`
- 
- You only need to expand `models` by hand when you need multi-model metadata or want to override capability parameters.
- 
- `apiKey` accepts either a literal value or an environment variable name, resolved by `pi`'s `AuthStorage` / `ModelRegistry` according to the upstream rules.
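Combining the minimal fields described above, a `provider.pi` instance entry could be sketched like this (the `id` key and all values are illustrative assumptions; the remaining fields such as `provider` and `api` are auto-filled as listed):

```json
{
  "id": "pi.main",
  "type": "provider.pi",
  "model": "my-model",
  "baseUrl": "https://llm.example.com/v1",
  "apiKey": "MY_LLM_API_KEY"
}
```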
- 
- ## Extension packages and contributions
- 
- Contributions currently in the repository:
- 
- - `connector.discord`
- - `provider.pi`
- - `provider.codex-cli`
- - `provider.claude-cli`
- - `provider.claude`
- - `sandbox.boxlite`
- - `sandbox.docker`
- 
- `dobby init` currently only ships these built-in starter choices:
- 
- - provider: `provider.pi`, `provider.claude-cli`
- - connector: `connector.discord`, `connector.feishu`
- 
- `provider.codex-cli`, `provider.claude`, and the sandbox-related extensions need manual installation and configuration, e.g.:
+ You can also point at a config path explicitly:
  
  ```bash
- 
- npm run start -- extension install @dobby.ai/provider-claude --enable
- npm run start -- extension install @dobby.ai/sandbox-core --enable
+ DOBBY_CONFIG_PATH=./config/gateway.json dobby start
  ```
  
- 
+ If you want to sync the `skills/` maintained in this repository to your local dobby root directory, just run:
  
  ```bash
- 
- codex login status
+ ./scripts/install-project-skills.sh
  ```
  
- 
- 
- - `command` (default `codex`)
- - `commandArgs` (default `[]`)
- - `model` (optional; when unset, the default model from the Codex CLI's current profile / `~/.codex/config.toml` is used)
- - `profile` (optional; equivalent to `codex -p <profile>`)
- - `approvalPolicy` (optional; default `never`)
- - `sandboxMode` (optional; when unset, derived from the route's `tools`: `readonly -> read-only`, `full -> workspace-write`)
- - `configOverrides` (optional; an array of strings, passed through verbatim as repeated `codex -c key=value` flags)
- - `skipGitRepoCheck` (default `false`)
- 
- For example, to have Codex sessions in the gateway reuse a local profile and explicitly enable unattended execution:
- 
- ```json
- {
-   "type": "provider.codex-cli",
-   "command": "codex",
-   "profile": "background",
-   "approvalPolicy": "never",
-   "sandboxMode": "danger-full-access",
-   "configOverrides": [
-     "model_reasoning_effort = \"xhigh\""
-   ]
- }
- ```
- 
- Note: `provider.codex-cli` is currently host-only; `danger-full-access` acts directly on the host machine.
- 
- What `--enable` does:
- 
- - adds the package to `extensions.allowList`
- - generates default instance templates from the manifest contributions
- - fills in a default provider when needed
- 
- ## Scheduled jobs / Cron
- 
- Jobs support three scheduling modes:
- 
- - `--at <ISO timestamp>`
- - `--every-ms <ms>`
- - `--cron "<expr>" [--tz <timezone>]`
- 
- Example:
+ By default this syncs the current repository's `skills/*` into `~/.dobby/skills/*`; if `DOBBY_CONFIG_PATH` is set, it syncs into the `skills/` directory next to that config file. You can also pass the target root directory directly:
  
  ```bash
- 
-   --prompt "Summarize open issues in this repo" \
-   --connector discord.main \
-   --route projectA \
-   --channel 1234567890 \
-   --cron "0 9 * * 1-5" \
-   --tz "Asia/Shanghai"
+ ./scripts/install-project-skills.sh ~/.dobby
  ```
62
|
|
|
389
|
-
|
|
390
|
-
|
|
391
|
-
- `cron run <jobId>` 会额外排队一次立即执行,不会恢复 paused 状态,也不会改写原有 `nextRunAtMs`
|
|
392
|
-
- 需要已有一个正在运行的 `dobby start`
|
|
393
|
-
- 当前 scheduled run 一律按 stateless / ephemeral 执行
|
|
394
|
-
|
|
395
|
-
## Discord 连接器的当前行为
|
|
396
|
-
|
|
397
|
-
- 所有 connector 都会经过宿主侧 health supervisor 包装
|
|
398
|
-
- 统一暴露 `starting / ready / degraded / reconnecting / failed / stopped` 状态
|
|
399
|
-
- 若 connector 长时间停留在 `starting`、`degraded`、`reconnecting` 或 `failed`,宿主会 stop 并重建实例
|
|
400
|
-
- guild channel 仍按显式 binding 匹配
|
|
401
|
-
- DM 可通过 `bindings.default` 回落到默认 route
|
|
402
|
-
- 线程消息使用父频道 ID 做 binding 查找
|
|
403
|
-
- 会自动下载附件到本地
|
|
404
|
-
- 图片会作为 image input 传给 provider
|
|
405
|
-
- 非图片附件会把路径注入 prompt
|
|
406
|
-
- 内置 reconnect watchdog
|
|
407
|
-
- `reconnectStaleMs` 默认 `60000`
|
|
408
|
-
- `reconnectCheckIntervalMs` 默认 `10000`
|
|
409
|
-
|
|
410
|
-
## 会话控制命令
|
|
411
|
-
|
|
412
|
-
在 Discord 频道内可用:
|
|
413
|
-
|
|
414
|
-
- `stop`
|
|
415
|
-
- `/stop`
|
|
416
|
-
- `/cancel`
|
|
417
|
-
- `/new`
|
|
418
|
-
- `/reset`
|
|
419
|
-
|
|
420
|
-
当前语义:
|
|
63
|
+
对应的 `provider.pi.agentDir` 建议指向 dobby 根目录本身,例如 `~/.dobby`。
|
|
421
64
|
|
|
422
|
-
|
|
423
|
-
- `/new` / `/reset`:重置当前会话,并在 provider 支持时归档旧 session
|
|
65
|
+
## What you can plug in
|
|
424
66
|
|
|
425
|
-
|
|
67
|
+
- Entrypoints: `connector.discord`、`connector.feishu`、cron
|
|
68
|
+
- Providers: `provider.pi`、`provider.codex-cli`、`provider.claude-cli`、`provider.claude`
|
|
69
|
+
- Sandboxes: `host.builtin`、`sandbox.boxlite`、`sandbox.docker`
|
|
426
70
|
|
|
427
|
-
|
|
71
|
+
`provider.codex-cli`、`provider.claude` 和 sandbox 扩展默认不在 `init` starter 里,需要手工安装 / 启用:
|
|
428
72
|
|
|
429
73
|
```bash
|
|
430
|
-
|
|
431
|
-
|
|
432
|
-
|
|
433
|
-
npm run extensions:install:local
|
|
74
|
+
dobby extension install @dobby.ai/provider-codex-cli --enable
|
|
75
|
+
dobby extension install @dobby.ai/provider-claude --enable
|
|
76
|
+
dobby extension install @dobby.ai/sandbox-core --enable
|
|
434
77
|
```
|
|
435
78
|
|
|
436
|
-
|
|
79
|
+
## Docs
|
|
437
80
|
|
|
438
|
-
|
|
439
|
-
|
|
440
|
-
|
|
441
|
-
|
|
442
|
-
|
|
443
|
-
|
|
444
|
-
- `plugins/*` 是扩展源码,不是运行时加载入口
|
|
445
|
-
- 本地扩展安装到 extension store 后,才会被宿主识别
|
|
446
|
-
- `@dobby.ai/plugin-sdk` 在插件里按 `peerDependencies` 暴露,开发期通过 `file:../plugin-sdk` 提供
|
|
81
|
+
- 配置示例:[config/gateway.example.json](config/gateway.example.json)
|
|
82
|
+
- Cron 示例:[config/cron.example.json](config/cron.example.json)
|
|
83
|
+
- 运行与排障:[docs/RUNBOOK.md](docs/RUNBOOK.md)
|
|
84
|
+
- 架构与教程:[docs/tutorials/README.md](docs/tutorials/README.md)
|
|
85
|
+
- 扩展系统:[docs/EXTENSION_SYSTEM_ARCHITECTURE.md](docs/EXTENSION_SYSTEM_ARCHITECTURE.md)
|
|
86
|
+
- Cron 设计:[docs/CRON_SCHEDULER_DESIGN.md](docs/CRON_SCHEDULER_DESIGN.md)
|
|
447
87
|
|
|
448
|
-
##
|
|
88
|
+
## Development
|
|
449
89
|
|
|
450
90
|
最小校验:
|
|
451
91
|
|
|
452
92
|
```bash
|
|
453
|
-
npm run check
|
|
454
|
-
npm run build
|
|
455
|
-
npm run test:cli
|
|
93
|
+
npm run check && npm run build && npm run test:cli
|
|
456
94
|
```
|
|
457
95
|
|
|
458
|
-
|
|
96
|
+
如果你是在仓库里直接运行源码:
|
|
459
97
|
|
|
460
98
|
```bash
|
|
461
|
-
npm
|
|
462
|
-
npm run
|
|
99
|
+
npm install
|
|
100
|
+
npm run build
|
|
101
|
+
npm run start -- init
|
|
463
102
|
```
|
|
464
|
-
|
|
465
|
-
当前测试现状:
|
|
466
|
-
|
|
467
|
-
- 已有 CLI / core 的 focused tests
|
|
468
|
-
- 暂无完整的 e2e 自动化
|
|
469
|
-
- 仍建议做一次手工 Discord 冒烟
|
|
470
|
-
|
|
471
|
-
## 本地运行小提示
|
|
472
|
-
|
|
473
|
-
- `npm run dev:local` 与 `npm run start:local` 会尝试读取 `.env`
|
|
474
|
-
- 普通 `npm run start -- ...` 不会自动载入 `.env`
|
|
475
|
-
- `dobby init` 生成的是模板配置;运行前先替换 `gateway.json` 中的 placeholder
|
|
476
|
-
|
|
477
|
-
## 相关文档
|
|
478
|
-
|
|
479
|
-
- 扩展系统:[`docs/EXTENSION_SYSTEM_ARCHITECTURE.md`](docs/EXTENSION_SYSTEM_ARCHITECTURE.md)
|
|
480
|
-
- cron 设计:[`docs/CRON_SCHEDULER_DESIGN.md`](docs/CRON_SCHEDULER_DESIGN.md)
|
|
481
|
-
- 运行与排障:[`docs/RUNBOOK.md`](docs/RUNBOOK.md)
|
|
482
|
-
- Teamwork handoff:[`docs/TEAMWORK_HANDOFF_DESIGN.md`](docs/TEAMWORK_HANDOFF_DESIGN.md)
|
|
package/dist/src/agent/event-forwarder.js CHANGED

@@ -443,12 +443,12 @@ export class EventForwarder {
      }
      async finalizeEdit() {
          await this.flushNow();
-         if (this.responseText.trim().length === 0) {
-             await this.sendEditPrimary("(completed with no text response)");
-             return;
-         }
-         const chunks = splitForMaxLength(this.responseText, this.maxTextLength);
          try {
+             if (this.responseText.trim().length === 0) {
+                 await this.sendEditPrimary("(completed with no text response)");
+                 return;
+             }
+             const chunks = splitForMaxLength(this.responseText, this.maxTextLength);
              await this.sendEditPrimary(chunks[0] ?? "");
              for (const chunk of chunks.slice(1)) {
                  await this.sendCreate(chunk);
@@ -460,7 +460,7 @@ export class EventForwarder {
                  connectorId: this.inbound.connectorId,
                  chatId: this.inbound.chatId,
                  targetMessageId: this.rootMessageId,
-             }, "Failed to send
+             }, "Failed to send final response");
          }
      }
      async finalizeFinalOnly() {
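`splitForMaxLength` is an internal helper not shown in this diff; a minimal stand-in with the same apparent contract (break the response text into chunks no longer than the connector's message length limit) could look like:

```javascript
// Hypothetical stand-in for the internal splitForMaxLength helper:
// split `text` into chunks of at most `maxLen` characters, always
// returning at least one (possibly empty) chunk.
function splitForMaxLength(text, maxLen) {
    const chunks = [];
    for (let i = 0; i < text.length; i += maxLen) {
        chunks.push(text.slice(i, i + maxLen));
    }
    return chunks.length > 0 ? chunks : [""];
}

const chunks = splitForMaxLength("a".repeat(4500), 2000);
console.log(chunks.length); // 3 chunks: 2000 + 2000 + 500 characters
```

This matches how `finalizeEdit` uses it above: the first chunk edits the primary message, the rest are sent as follow-up messages.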
package/dist/src/cli/commands/connector.js ADDED

@@ -0,0 +1,77 @@
+ import { join } from "node:path";
+ import { connectorStatusSnapshotExists, connectorStatusSnapshotPath, isConnectorStatusSnapshotStale, readConnectorStatusSnapshot, } from "../../core/connector-status.js";
+ import { requireRawConfig, resolveConfigPath, resolveDataRootDir } from "../shared/config-io.js";
+ function formatTimestamp(timestampMs) {
+     return new Date(timestampMs).toISOString();
+ }
+ function pad(value, width) {
+     return value.padEnd(width);
+ }
+ function renderTable(items) {
+     const rows = items.map((item) => ({
+         connectorId: item.connectorId,
+         platform: item.platform,
+         availability: item.availability,
+         health: item.health.status,
+         restarts: String(item.health.restartCount ?? 0),
+         updated: formatTimestamp(item.health.updatedAtMs),
+     }));
+     const widths = {
+         connectorId: Math.max("CONNECTOR".length, ...rows.map((row) => row.connectorId.length)),
+         platform: Math.max("PLATFORM".length, ...rows.map((row) => row.platform.length)),
+         availability: Math.max("AVAILABILITY".length, ...rows.map((row) => row.availability.length)),
+         health: Math.max("HEALTH".length, ...rows.map((row) => row.health.length)),
+         restarts: Math.max("RESTARTS".length, ...rows.map((row) => row.restarts.length)),
+         updated: Math.max("UPDATED".length, ...rows.map((row) => row.updated.length)),
+     };
+     const lines = [
+         [
+             pad("CONNECTOR", widths.connectorId),
+             pad("PLATFORM", widths.platform),
+             pad("AVAILABILITY", widths.availability),
+             pad("HEALTH", widths.health),
+             pad("RESTARTS", widths.restarts),
+             pad("UPDATED", widths.updated),
+         ].join(" "),
+     ];
+     for (const row of rows) {
+         lines.push([
+             pad(row.connectorId, widths.connectorId),
+             pad(row.platform, widths.platform),
+             pad(row.availability, widths.availability),
+             pad(row.health, widths.health),
+             pad(row.restarts, widths.restarts),
+             pad(row.updated, widths.updated),
+         ].join(" "));
+     }
+     return lines;
+ }
+ export async function runConnectorStatusCommand(options) {
+     const configPath = resolveConfigPath();
+     const rawConfig = await requireRawConfig(configPath);
+     const statusPath = connectorStatusSnapshotPath(join(resolveDataRootDir(configPath, rawConfig), "state"));
+     if (!(await connectorStatusSnapshotExists(statusPath))) {
+         throw new Error(`Connector status snapshot '${statusPath}' does not exist. Start 'dobby start' first.`);
+     }
+     const snapshot = await readConnectorStatusSnapshot(statusPath);
+     const items = options.connectorId
+         ? snapshot.items.filter((item) => item.connectorId === options.connectorId)
+         : snapshot.items;
+     if (options.connectorId && items.length === 0) {
+         throw new Error(`Connector '${options.connectorId}' was not found in '${statusPath}'.`);
+     }
+     if (options.json) {
+         console.log(JSON.stringify({ ...snapshot, items }));
+         return;
+     }
+     if (isConnectorStatusSnapshotStale(snapshot)) {
+         console.log("Warning: connector status snapshot is stale; the gateway may not be running.");
+     }
+     if (items.length === 0) {
+         console.log("(empty)");
+         return;
+     }
+     for (const line of renderTable(items)) {
+         console.log(line);
+     }
+ }
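The table layout in `renderTable` above is plain `padEnd` alignment: each column is made as wide as its longest cell or header. The idea in isolation (a simplified sketch, not the shipped code):

```javascript
// Simplified sketch of the padEnd-based column alignment used by renderTable:
// a column is as wide as the longest of its header and values.
function alignColumn(header, values) {
    const width = Math.max(header.length, ...values.map((v) => v.length));
    return [header, ...values].map((v) => v.padEnd(width));
}

const cells = alignColumn("HEALTH", ["ready", "reconnecting"]);
console.log(cells.map((c) => `|${c}|`).join("\n"));
```

`renderTable` just does this per column and joins the padded cells of each row with spaces.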
package/dist/src/cli/commands/cron.js CHANGED

@@ -4,6 +4,46 @@ import { computeInitialNextRunAtMs, describeSchedule } from "../../cron/schedule
  import { CronStore } from "../../cron/store.js";
  import { resolveConfigPath } from "../shared/config-io.js";
  import { createLogger } from "../shared/runtime.js";
+ function printMarkdown(lines) {
+     for (const line of lines) {
+         console.log(line);
+     }
+ }
+ function formatCode(value) {
+     return `\`${value}\``;
+ }
+ function formatTimestamp(timestampMs) {
+     return timestampMs !== undefined ? formatCode(new Date(timestampMs).toISOString()) : "-";
+ }
+ function formatDelivery(job) {
+     const segments = [
+         job.delivery.connectorId,
+         job.delivery.routeId,
+         job.delivery.channelId,
+         ...(job.delivery.threadId ? [job.delivery.threadId] : []),
+     ];
+     return formatCode(segments.join("/"));
+ }
+ function buildJobMarkdown(job, headingLevel = 3) {
+     const heading = `${"#".repeat(headingLevel)} ${formatCode(job.id)}`;
+     const lines = [
+         heading,
+         `- name: ${job.name}`,
+         `- state: ${job.enabled ? "enabled" : "paused"}`,
+         `- schedule: ${formatCode(describeSchedule(job.schedule))}`,
+         `- next run: ${formatTimestamp(job.state.nextRunAtMs)}`,
+         `- last run: ${formatTimestamp(job.state.lastRunAtMs)}`,
+         `- last status: ${job.state.lastStatus ?? "-"}`,
+         `- delivery: ${formatDelivery(job)}`,
+     ];
+     if (job.state.manualRunRequestedAtMs !== undefined) {
+         lines.push(`- manual run queued at: ${formatTimestamp(job.state.manualRunRequestedAtMs)}`);
+     }
+     if (job.state.lastError) {
+         lines.push(`- last error: ${job.state.lastError}`);
+     }
+     return lines;
+ }
  function slugify(value) {
      const normalized = value
          .trim()
@@ -97,10 +137,14 @@ export async function runCronAddCommand(options) {
          },
      };
      await context.store.upsertJob(job);
- 
- 
- 
- 
+     printMarkdown([
+         "## Cron Job Added",
+         `- id: ${formatCode(job.id)}`,
+         `- name: ${job.name}`,
+         `- schedule: ${formatCode(describeSchedule(job.schedule))}`,
+         `- delivery: ${formatDelivery(job)}`,
+         `- cron config: ${formatCode(context.cronConfigPath)}`,
+     ]);
  }
  export async function runCronListCommand(options) {
      const context = await loadCronContext(options.cronConfigPath ? { cronConfigPath: options.cronConfigPath } : undefined);
@@ -110,22 +154,25 @@ export async function runCronListCommand(options) {
          return;
      }
      if (jobs.length === 0) {
- 
+         printMarkdown([
+             "## Cron Jobs",
+             "",
+             "_No cron jobs configured._",
+             "",
+             `- cron config: ${formatCode(context.cronConfigPath)}`,
+         ]);
          return;
      }
- 
+     const lines = [
+         "## Cron Jobs",
+         "",
+         `- cron config: ${formatCode(context.cronConfigPath)}`,
+         `- total jobs: ${jobs.length}`,
+     ];
      for (const job of jobs) {
- 
-         const last = job.state.lastRunAtMs ? new Date(job.state.lastRunAtMs).toISOString() : "-";
-         const schedule = describeSchedule(job.schedule);
-         console.log(`- ${job.id} [${job.enabled ? "enabled" : "paused"}] ${job.name}`);
-         console.log(`  schedule=${schedule}`);
-         console.log(`  next=${next} last=${last} status=${job.state.lastStatus ?? "-"}`);
-         if (job.state.manualRunRequestedAtMs !== undefined) {
-             console.log(`  manualRun=${new Date(job.state.manualRunRequestedAtMs).toISOString()}`);
-         }
-         console.log(`  delivery=${job.delivery.connectorId}/${job.delivery.routeId}/${job.delivery.channelId}`);
+         lines.push("", ...buildJobMarkdown(job));
      }
+     printMarkdown(lines);
  }
  export async function runCronStatusCommand(options) {
      const context = await loadCronContext(options.cronConfigPath ? { cronConfigPath: options.cronConfigPath } : undefined);
@@ -142,19 +189,12 @@ export async function runCronStatusCommand(options) {
          return;
      }
      if (target) {
- 
- 
- 
- 
- 
- 
-         console.log(`- lastStatus: ${target.state.lastStatus ?? "-"}`);
-         if (target.state.manualRunRequestedAtMs !== undefined) {
-             console.log(`- manualRunQueuedAt: ${new Date(target.state.manualRunRequestedAtMs).toISOString()}`);
-         }
-         if (target.state.lastError) {
-             console.log(`- lastError: ${target.state.lastError}`);
-         }
+         printMarkdown([
+             "## Cron Job Status",
+             ...buildJobMarkdown(target, 3),
+             "",
+             `- cron config: ${formatCode(context.cronConfigPath)}`,
+         ]);
          return;
      }
      await runCronListCommand({
@@ -173,9 +213,14 @@ export async function runCronRunCommand(options) {
              manualRunRequestedAtMs: current.state.manualRunRequestedAtMs ?? now,
          },
      }));
- 
- 
- 
+     printMarkdown([
+         "## Cron Job Run Queued",
+         `- id: ${formatCode(options.jobId)}`,
+         "- action: queued one immediate execution",
+         "- enabled state: unchanged",
+         "- schedule: unchanged",
+         `- note: Ensure ${formatCode("dobby start")} is running so the queued execution can be consumed.`,
+     ]);
  }
  export async function runCronUpdateCommand(options) {
      const context = await loadCronContext(options.cronConfigPath ? { cronConfigPath: options.cronConfigPath } : undefined);
@@ -247,7 +292,10 @@ export async function runCronUpdateCommand(options) {
          };
          return nextJob;
      });
- 
+     printMarkdown([
+         "## Cron Job Updated",
+         `- id: ${formatCode(options.jobId)}`,
+     ]);
  }
  export async function runCronRemoveCommand(options) {
      const context = await loadCronContext(options.cronConfigPath ? { cronConfigPath: options.cronConfigPath } : undefined);
@@ -255,7 +303,10 @@ export async function runCronRemoveCommand(options) {
      if (!removed) {
          throw new Error(`Cron job '${options.jobId}' does not exist`);
      }
- 
+     printMarkdown([
+         "## Cron Job Removed",
+         `- id: ${formatCode(options.jobId)}`,
+     ]);
  }
  export async function runCronPauseCommand(options) {
      const context = await loadCronContext(options.cronConfigPath ? { cronConfigPath: options.cronConfigPath } : undefined);
@@ -265,7 +316,11 @@ export async function runCronPauseCommand(options) {
          enabled: false,
          updatedAtMs: now,
      }));
- 
+     printMarkdown([
+         "## Cron Job Paused",
+         `- id: ${formatCode(options.jobId)}`,
+         "- state: paused",
+     ]);
  }
  export async function runCronResumeCommand(options) {
      const context = await loadCronContext(options.cronConfigPath ? { cronConfigPath: options.cronConfigPath } : undefined);
@@ -288,5 +343,9 @@ export async function runCronResumeCommand(options) {
          state: nextState,
      };
  });
- 
+     printMarkdown([
+         "## Cron Job Resumed",
+         `- id: ${formatCode(options.jobId)}`,
+         "- state: enabled",
+     ]);
  }
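With these helpers the cron subcommands now print small markdown blocks instead of `key=value` lines. A standalone sketch of the output shape, with the internal `describeSchedule` helper omitted and a hand-written job object (both are assumptions for illustration):

```javascript
// Illustrative rerun of the markdown-style cron output using local copies
// of formatCode / formatTimestamp and a hand-written job object.
const formatCode = (value) => `\`${value}\``;
const formatTimestamp = (ms) =>
    ms !== undefined ? formatCode(new Date(ms).toISOString()) : "-";

const job = {
    id: "daily-summary",
    name: "Daily summary",
    enabled: true,
    state: { nextRunAtMs: 1700000000000, lastRunAtMs: undefined, lastStatus: undefined },
};
const lines = [
    `### ${formatCode(job.id)}`,
    `- name: ${job.name}`,
    `- state: ${job.enabled ? "enabled" : "paused"}`,
    `- next run: ${formatTimestamp(job.state.nextRunAtMs)}`,
    `- last run: ${formatTimestamp(job.state.lastRunAtMs)}`,
    `- last status: ${job.state.lastStatus ?? "-"}`,
];
console.log(lines.join("\n"));
```

Undefined timestamps render as `-`, matching `formatTimestamp` in the diff above.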
package/dist/src/cli/commands/start.js
CHANGED
@@ -2,6 +2,7 @@ import { dirname, join } from "node:path";
 import { loadCronConfig } from "../../cron/config.js";
 import { CronService } from "../../cron/service.js";
 import { CronStore } from "../../cron/store.js";
+import { connectorStatusSnapshotPath, DEFAULT_CONNECTOR_STATUS_PUBLISH_INTERVAL_MS, DEFAULT_CONNECTOR_STATUS_STALE_AFTER_MS, writeConnectorStatusSnapshot, } from "../../core/connector-status.js";
 import { DedupStore } from "../../core/dedup-store.js";
 import { Gateway } from "../../core/gateway.js";
 import { BindingResolver, loadGatewayConfig, RouteResolver } from "../../core/routing.js";
@@ -90,6 +91,8 @@ function selectSandboxInstances(config) {
 export async function runStartCommand() {
     const configPath = resolveConfigPath();
     const config = await loadGatewayConfig(configPath);
+    const gatewayStartedAtMs = Date.now();
+    const connectorStatusPath = connectorStatusSnapshotPath(config.data.stateDir);
     await ensureDataDirs(config.data.rootDir);
     const logger = createLogger();
     const loader = new ExtensionLoader(logger, {
@@ -151,18 +154,67 @@ export async function runStartCommand() {
         gateway,
         logger,
     });
+    const publishConnectorStatuses = async () => {
+        await writeConnectorStatusSnapshot(connectorStatusPath, {
+            schemaVersion: 1,
+            generatedAtMs: Date.now(),
+            staleAfterMs: DEFAULT_CONNECTOR_STATUS_STALE_AFTER_MS,
+            gateway: {
+                pid: process.pid,
+                startedAtMs: gatewayStartedAtMs,
+            },
+            items: gateway.listConnectorStatuses(),
+        });
+    };
+    let connectorStatusTimer = null;
+    const startConnectorStatusPublisher = async () => {
+        try {
+            await publishConnectorStatuses();
+        }
+        catch (error) {
+            logger.warn({ err: error, connectorStatusPath }, "Failed to write initial connector status snapshot");
+        }
+        connectorStatusTimer = setInterval(() => {
+            void publishConnectorStatuses().catch((error) => {
+                logger.warn({ err: error, connectorStatusPath }, "Failed to refresh connector status snapshot");
+            });
+        }, DEFAULT_CONNECTOR_STATUS_PUBLISH_INTERVAL_MS);
+    };
+    const stopConnectorStatusPublisher = async () => {
+        if (connectorStatusTimer) {
+            clearInterval(connectorStatusTimer);
+            connectorStatusTimer = null;
+        }
+        try {
+            await publishConnectorStatuses();
+        }
+        catch (error) {
+            logger.warn({ err: error, connectorStatusPath }, "Failed to write final connector status snapshot");
+        }
+    };
     await gateway.start();
     await cronService.start();
+    await startConnectorStatusPublisher();
     logger.info({
         configPath,
         cronConfigPath: loadedCronConfig.configPath,
         cronConfigSource: loadedCronConfig.source,
         cronEnabled: loadedCronConfig.config.enabled,
     }, "Gateway started");
+    let shuttingDown = false;
     const shutdown = async (signal) => {
+        if (shuttingDown) {
+            return;
+        }
+        shuttingDown = true;
         logger.info({ signal }, "Shutting down gateway");
+        if (connectorStatusTimer) {
+            clearInterval(connectorStatusTimer);
+            connectorStatusTimer = null;
+        }
         await cronService.stop();
         await gateway.stop();
+        await stopConnectorStatusPublisher();
         await hostExecutor.close();
         await closeProviderInstances(providers, logger);
         await closeSandboxInstances(sandboxes, logger);
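The publisher added to start.js follows a simple lifecycle: write one snapshot at startup, refresh it on an interval, and write a final snapshot during shutdown. A minimal self-contained sketch of that pattern (names and structure here are illustrative, not the package's API):

```javascript
// Minimal sketch of the lifecycle used by the connector status publisher:
// one snapshot at startup, periodic refreshes, and a final snapshot on stop.
// `publish` stands in for writeConnectorStatusSnapshot; a counter replaces
// the real file write.
function createPublisher(publish, intervalMs) {
  let timer = null;
  return {
    async start() {
      await publish(); // initial snapshot, written before the interval starts
      timer = setInterval(() => {
        void publish().catch(() => { /* refresh errors are logged, not fatal */ });
      }, intervalMs);
    },
    async stop() {
      if (timer) {
        clearInterval(timer);
        timer = null;
      }
      await publish(); // final snapshot reflecting the stopped state
    },
  };
}

// Usage: with a long interval, only the initial and final writes happen.
let writes = 0;
const publisher = createPublisher(async () => { writes += 1; }, 60_000);
await publisher.start();
await publisher.stop();
console.log(writes); // → 2
```

In the real command the interval is DEFAULT_CONNECTOR_STATUS_PUBLISH_INTERVAL_MS (5 s), well under the 15 s staleness window, so readers can distinguish a live gateway from a crashed one.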
package/dist/src/cli/program.js
CHANGED
@@ -2,6 +2,7 @@ import { existsSync, readFileSync } from "node:fs";
 import { fileURLToPath } from "node:url";
 import { Command } from "commander";
 import { runConfigListCommand, runConfigSchemaListCommand, runConfigSchemaShowCommand, runConfigShowCommand, } from "./commands/config.js";
+import { runConnectorStatusCommand } from "./commands/connector.js";
 import { runCronAddCommand, runCronListCommand, runCronPauseCommand, runCronRemoveCommand, runCronResumeCommand, runCronRunCommand, runCronStatusCommand, runCronUpdateCommand, } from "./commands/cron.js";
 import { runDoctorCommand } from "./commands/doctor.js";
 import { runExtensionInstallCommand, runExtensionListCommand, runExtensionUninstallCommand, } from "./commands/extension.js";
@@ -127,6 +128,18 @@ export function buildProgram() {
             fix: Boolean(opts.fix),
         });
     });
+    const connectorCommand = program.command("connector").description("Inspect runtime connector status");
+    connectorCommand
+        .command("status")
+        .description("Show status for all connectors or one connector")
+        .argument("[connectorId]", "Connector instance ID")
+        .option("--json", "Output JSON", false)
+        .action(async (connectorId, opts) => {
+        await runConnectorStatusCommand({
+            ...(typeof connectorId === "string" ? { connectorId } : {}),
+            json: Boolean(opts.json),
+        });
+    });
     const cronCommand = program.command("cron").description("Manage scheduled cron jobs");
     cronCommand
         .command("add")
package/dist/src/core/connector-status.js
ADDED
@@ -0,0 +1,132 @@
+import { access, mkdir, readFile, rename, writeFile } from "node:fs/promises";
+import { dirname, join, resolve } from "node:path";
+export const CONNECTOR_STATUS_SNAPSHOT_FILENAME = "connectors-status.json";
+export const DEFAULT_CONNECTOR_STATUS_PUBLISH_INTERVAL_MS = 5_000;
+export const DEFAULT_CONNECTOR_STATUS_STALE_AFTER_MS = 15_000;
+function isRecord(value) {
+    return Boolean(value) && typeof value === "object" && !Array.isArray(value);
+}
+function createFallbackHealth(detail) {
+    const now = Date.now();
+    return {
+        status: "stopped",
+        detail,
+        statusSinceMs: now,
+        updatedAtMs: now,
+    };
+}
+export function availabilityFromHealthStatus(status) {
+    switch (status) {
+        case "ready":
+            return "online";
+        case "degraded":
+            return "degraded";
+        case "reconnecting":
+            return "reconnecting";
+        case "starting":
+        case "failed":
+        case "stopped":
+        default:
+            return "offline";
+    }
+}
+export function connectorStatusSnapshotPath(stateDir) {
+    return join(resolve(stateDir), CONNECTOR_STATUS_SNAPSHOT_FILENAME);
+}
+export function statusItemFromConnector(connector) {
+    const health = connector.getHealth?.() ?? createFallbackHealth("Connector health is not available");
+    const availability = availabilityFromHealthStatus(health.status);
+    return {
+        connectorId: connector.id,
+        platform: connector.platform,
+        connectorName: connector.name,
+        availability,
+        online: availability === "online",
+        health,
+    };
+}
+async function writeAtomic(filePath, content) {
+    const absolutePath = resolve(filePath);
+    await mkdir(dirname(absolutePath), { recursive: true });
+    const tempPath = `${absolutePath}.tmp-${Date.now()}-${Math.random().toString(16).slice(2)}`;
+    await writeFile(tempPath, content, "utf-8");
+    await rename(tempPath, absolutePath);
+}
+export async function writeConnectorStatusSnapshot(filePath, snapshot) {
+    await writeAtomic(filePath, `${JSON.stringify(snapshot, null, 2)}\n`);
+}
+export async function connectorStatusSnapshotExists(filePath) {
+    try {
+        await access(resolve(filePath));
+        return true;
+    }
+    catch {
+        return false;
+    }
+}
+function parseHealth(value) {
+    if (!isRecord(value)) {
+        return null;
+    }
+    if (typeof value.status !== "string" || typeof value.statusSinceMs !== "number" || typeof value.updatedAtMs !== "number") {
+        return null;
+    }
+    return value;
+}
+function parseStatusItem(value) {
+    if (!isRecord(value)) {
+        return null;
+    }
+    if (typeof value.connectorId !== "string"
+        || typeof value.platform !== "string"
+        || typeof value.connectorName !== "string"
+        || typeof value.availability !== "string"
+        || typeof value.online !== "boolean") {
+        return null;
+    }
+    const health = parseHealth(value.health);
+    if (!health) {
+        return null;
+    }
+    return {
+        connectorId: value.connectorId,
+        platform: value.platform,
+        connectorName: value.connectorName,
+        availability: value.availability,
+        online: value.online,
+        health,
+    };
+}
+export async function readConnectorStatusSnapshot(filePath) {
+    const raw = await readFile(resolve(filePath), "utf-8");
+    const parsed = JSON.parse(raw);
+    if (!isRecord(parsed) || parsed.schemaVersion !== 1 || typeof parsed.generatedAtMs !== "number" || typeof parsed.staleAfterMs !== "number") {
+        throw new Error(`Connector status snapshot '${resolve(filePath)}' has invalid metadata`);
+    }
+    if (!isRecord(parsed.gateway) || typeof parsed.gateway.pid !== "number" || typeof parsed.gateway.startedAtMs !== "number") {
+        throw new Error(`Connector status snapshot '${resolve(filePath)}' has invalid gateway metadata`);
+    }
+    if (!Array.isArray(parsed.items)) {
+        throw new Error(`Connector status snapshot '${resolve(filePath)}' must contain an items array`);
+    }
+    const items = parsed.items.map((item) => {
+        const normalized = parseStatusItem(item);
+        if (!normalized) {
+            throw new Error(`Connector status snapshot '${resolve(filePath)}' contains an invalid connector entry`);
+        }
+        return normalized;
+    });
+    return {
+        schemaVersion: 1,
+        generatedAtMs: parsed.generatedAtMs,
+        staleAfterMs: parsed.staleAfterMs,
+        gateway: {
+            pid: parsed.gateway.pid,
+            startedAtMs: parsed.gateway.startedAtMs,
+        },
+        items,
+    };
+}
+export function isConnectorStatusSnapshotStale(snapshot, now = Date.now()) {
+    return now - snapshot.generatedAtMs > snapshot.staleAfterMs;
+}
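Two small pieces of logic in the new connector-status module are easy to check in isolation: the health-to-availability mapping and the snapshot staleness test. The sketch below mirrors those two functions as shipped in the diff, so they can be exercised standalone:

```javascript
// Mirrors availabilityFromHealthStatus from connector-status.js: only
// "ready" counts as online; unknown states are treated as offline.
function availabilityFromHealthStatus(status) {
  switch (status) {
    case "ready": return "online";
    case "degraded": return "degraded";
    case "reconnecting": return "reconnecting";
    default: return "offline"; // starting, failed, stopped, anything else
  }
}

// Mirrors isConnectorStatusSnapshotStale: a snapshot is stale once its
// age exceeds the staleAfterMs it was written with (strictly greater).
function isConnectorStatusSnapshotStale(snapshot, now = Date.now()) {
  return now - snapshot.generatedAtMs > snapshot.staleAfterMs;
}

console.log(availabilityFromHealthStatus("ready"));    // → "online"
console.log(availabilityFromHealthStatus("starting")); // → "offline"
const snap = { generatedAtMs: 0, staleAfterMs: 15_000 };
console.log(isConnectorStatusSnapshotStale(snap, 15_001)); // → true
console.log(isConnectorStatusSnapshotStale(snap, 15_000)); // → false
```

With the defaults above (publish every 5 s, stale after 15 s), the gateway can miss two publish ticks before readers of the snapshot treat it as stale.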
package/dist/src/core/connector-supervisor.js
CHANGED
@@ -217,13 +217,34 @@ export class SupervisedConnector {
                 return;
             }
             this.noteInbound();
-
+            try {
+                await this.ctx.emitInbound(message);
+            }
+            catch (error) {
+                this.logger.error({
+                    err: error,
+                    connectorId: this.id,
+                    messageId: message.messageId,
+                    sourceType: message.source.type,
+                    sourceId: message.source.id,
+                }, "Connector inbound handler failed");
+            }
         },
         emitControl: async (event) => {
             if (generation !== this.generation || !this.ctx) {
                 return;
             }
-
+            try {
+                await this.ctx.emitControl(event);
+            }
+            catch (error) {
+                this.logger.error({
+                    err: error,
+                    connectorId: this.id,
+                    chatId: event.chatId,
+                    threadId: event.threadId ?? null,
+                }, "Connector control handler failed");
+            }
         },
     };
 }
@@ -267,7 +288,7 @@ export class SupervisedConnector {
         restartCount: this.restartCount,
     };
     this.health = merged;
-    if (statusChanged || merged.detail !== previous.detail) {
+    if (statusChanged || merged.lastError !== previous.lastError || (merged.status !== "ready" && merged.detail !== previous.detail)) {
         this.logHealthTransition(previous, merged);
     }
 }
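The callbacks patched above are guarded by a generation counter: a context created before a connector restart captures the generation current at creation time, so its callbacks become no-ops once the supervisor bumps the counter. A minimal sketch of that guard (class and member names are illustrative, not the package's API):

```javascript
// Generation guard: callbacks captured before a restart see a stale
// generation number and return early instead of acting on behalf of a
// connector instance that no longer exists.
class Supervisor {
  generation = 0;
  received = [];
  makeContext() {
    const generation = this.generation; // captured at context creation time
    return {
      emit: (message) => {
        if (generation !== this.generation) {
          return; // stale callback from a previous connector instance
        }
        this.received.push(message);
      },
    };
  }
  restart() {
    this.generation += 1; // invalidates every previously created context
  }
}

const supervisor = new Supervisor();
const oldCtx = supervisor.makeContext();
oldCtx.emit("before restart");
supervisor.restart();
oldCtx.emit("after restart (dropped)");
const newCtx = supervisor.makeContext();
newCtx.emit("from new instance");
console.log(supervisor.received); // → ["before restart", "from new instance"]
```

The new try/catch in the diff layers on top of this guard: even a live-generation callback that throws is logged rather than propagated back into the connector's event source.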
package/dist/src/core/gateway.js
CHANGED
@@ -1,5 +1,6 @@
 import { readFile } from "node:fs/promises";
 import { EventForwarder } from "../agent/event-forwarder.js";
+import { statusItemFromConnector } from "./connector-status.js";
 import { parseControlCommand } from "./control-command.js";
 import { createTypingKeepAliveController } from "./typing-controller.js";
 function isImageAttachment(attachment) {
@@ -15,6 +16,7 @@ export class Gateway {
     options;
     connectorsById = new Map();
     started = false;
+    stopping = false;
     constructor(options) {
         this.options = options;
         for (const connector of options.connectors) {
@@ -24,6 +26,7 @@ export class Gateway {
     async start() {
         if (this.started)
             return;
+        this.stopping = false;
         await this.options.dedupStore.load();
         this.options.dedupStore.startAutoFlush();
         const startedConnectors = [];
@@ -59,15 +62,47 @@ export class Gateway {
     async stop() {
         if (!this.started)
             return;
+        this.stopping = true;
+        let firstError;
+        try {
+            await this.options.runtimeRegistry.closeAll();
+        }
+        catch (error) {
+            firstError = error;
+            this.options.logger.warn({ err: error }, "Failed to close active runtimes during shutdown");
+        }
         for (const connector of this.options.connectors) {
-
+            try {
+                await connector.stop();
+            }
+            catch (error) {
+                firstError ??= error;
+                this.options.logger.warn({ err: error, connectorId: connector.id }, "Failed to stop connector during shutdown");
+            }
         }
         this.options.dedupStore.stopAutoFlush();
-
-
+        try {
+            await this.options.dedupStore.flush();
+        }
+        catch (error) {
+            firstError ??= error;
+            this.options.logger.warn({ err: error }, "Failed to flush dedup store during shutdown");
+        }
         this.started = false;
+        this.stopping = false;
+        if (firstError) {
+            throw firstError;
+        }
+    }
+    listConnectorStatuses() {
+        return this.options.connectors
+            .map((connector) => statusItemFromConnector(connector))
+            .sort((a, b) => a.connectorId.localeCompare(b.connectorId));
     }
     async handleScheduled(request) {
+        if (this.stopping) {
+            throw new Error("Gateway is stopping");
+        }
        const connector = this.connectorsById.get(request.connectorId);
        if (!connector) {
            throw new Error(`No connector found for scheduled run '${request.runId}' (${request.connectorId})`);
@@ -131,6 +166,13 @@ export class Gateway {
        });
    }
    async handleInbound(message) {
+        if (this.stopping) {
+            this.options.logger.debug({
+                connectorId: message.connectorId,
+                messageId: message.messageId,
+            }, "Ignoring inbound message while gateway is stopping");
+            return;
+        }
        await this.handleMessage(message, {
            origin: "connector",
            useDedup: true,
@@ -319,23 +361,33 @@ export class Gateway {
        this.options.logger.error({ err: error, routeId: route.routeId }, "Failed to process inbound message");
        const rootMessageId = forwarder.primaryMessageId();
        const canEditExisting = connector.capabilities.updateStrategy === "edit" && rootMessageId !== null;
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+        try {
+            await connector.send(canEditExisting
+                ? {
+                    ...this.outboundBaseFromInbound(message),
+                    mode: "update",
+                    targetMessageId: rootMessageId,
+                    text: `Error: ${error instanceof Error ? error.message : String(error)}`,
+                }
+                : {
+                    ...this.outboundBaseFromInbound(message),
+                    mode: "create",
+                    ...(rootMessageId
+                        ? { replyToMessageId: rootMessageId }
+                        : options.includeReplyTo
+                            ? { replyToMessageId: message.messageId }
+                            : {}),
+                    text: `Error: ${error instanceof Error ? error.message : String(error)}`,
+                });
+        }
+        catch (sendError) {
+            this.options.logger.warn({
+                err: sendError,
+                connectorId: message.connectorId,
+                chatId: message.chatId,
+                rootMessageId,
+            }, "Failed to send error reply");
+        }
    }
    finally {
        unsubscribe?.();
@@ -436,6 +488,13 @@ export class Gateway {
        await this.sendCommandReply(connector, message, "_Started a new session._");
    }
    async handleControl(event) {
+        if (this.stopping) {
+            this.options.logger.debug({
+                connectorId: event.connectorId,
+                chatId: event.chatId,
+            }, "Ignoring control event while gateway is stopping");
+            return;
+        }
        const convKey = `${event.connectorId}:${event.platform}:${event.accountId}:${event.chatId}:${event.threadId ?? "root"}`;
        const connector = this.connectorsById.get(event.connectorId);
        const cancelled = await this.options.runtimeRegistry.cancel(convKey);
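The rewritten Gateway.stop() applies a "best effort, remember the first error" shutdown: every step runs even if an earlier one fails, and only the first failure is rethrown at the end. A sketch of that pattern with illustrative step names:

```javascript
// Best-effort shutdown: attempt every step, record the first error,
// and rethrow it only after all steps have had a chance to run.
async function shutdownAll(steps) {
  let firstError;
  for (const step of steps) {
    try {
      await step();
    }
    catch (error) {
      firstError ??= error; // keep only the first failure
    }
  }
  if (firstError) {
    throw firstError;
  }
}

// Usage: the failing middle step does not prevent the later steps.
const ran = [];
try {
  await shutdownAll([
    async () => { ran.push("runtimes"); },
    async () => { ran.push("connectors"); throw new Error("connector stop failed"); },
    async () => { ran.push("dedup"); },
  ]);
}
catch (error) {
  ran.push(`rethrown: ${error.message}`);
}
console.log(ran);
// → ["runtimes", "connectors", "dedup", "rethrown: connector stop failed"]
```

This keeps shutdown deterministic (every resource gets a close attempt) while still surfacing the failure to the caller, which is exactly what the firstError bookkeeping in the diff achieves.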