@geekbeer/minion 3.17.0 → 3.23.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/core/db.js +24 -1
- package/core/lib/dag-step-poller.js +22 -13
- package/core/lib/llm-checker.js +4 -7
- package/core/lib/template-expander.js +21 -18
- package/core/llm-dispatch/mcp-server.js +185 -0
- package/core/llm-dispatch/session-pool.js +97 -0
- package/core/llm-plugins/claude/index.js +151 -0
- package/core/llm-plugins/claude/stream.js +166 -0
- package/core/llm-plugins/codex/index.js +161 -0
- package/core/llm-plugins/gemini/index.js +104 -0
- package/core/llm-plugins/lib/active.js +23 -0
- package/core/llm-plugins/lib/mcp-registration.js +132 -0
- package/core/llm-plugins/lib/skill-dirs.js +67 -0
- package/core/llm-plugins/lib/spawn-helper.js +88 -0
- package/core/llm-plugins/registry.js +168 -0
- package/core/llm-plugins/types.js +85 -0
- package/core/routes/llm.js +89 -0
- package/core/routes/skills.js +112 -57
- package/core/routes/todos.js +4 -3
- package/core/stores/todo-store.js +65 -5
- package/docs/api-reference.md +470 -0
- package/docs/task-guides.md +220 -0
- package/linux/bin/hq +168 -15
- package/linux/minion-cli.sh +14 -0
- package/linux/routes/chat.js +149 -2
- package/linux/routine-runner.js +22 -7
- package/linux/server.js +2 -0
- package/linux/workflow-runner.js +26 -8
- package/package.json +1 -1
- package/rules/core.md +25 -7
- package/win/bin/hq.ps1 +155 -27
- package/win/routes/chat.js +142 -2
- package/win/routine-runner.js +20 -6
- package/win/server.js +2 -0
- package/win/workflow-runner.js +20 -7
package/docs/task-guides.md
CHANGED
@@ -144,6 +144,226 @@ curl -s -X POST "http://localhost:8080/api/workflows/push/<name>" \
   -H "Authorization: Bearer $API_TOKEN"
 ```
 
+> **Note**: `/api/workflows/push|fetch` is only for legacy workflows in the linear pipeline format (`pipeline_skill_names`). DAG workflows (node/edge format) are created and edited in the DAG editor on the HQ dashboard. See the next section.
+
+---
+
+## DAG Workflows (node/edge format)
+
+A DAG workflow is the newer workflow style that expresses dependencies between skills as a directed acyclic graph. It supports parallel expansion via fan-out, aggregation via join, branching via conditional, LLM data transformation via transform, and gating via review.
+
+**DAG workflows can be edited in the GUI via the HQ dashboard's DAG editor (project screen → "DAG (beta)" tab)**, and **minions can edit them as JSON** (PM role only). Minions also serve as the execution runtime: the `dag-step-poller` daemon automatically processes pending nodes.
+
+### Editing DAG workflows from a minion (PM only)
+
+A minion can edit a DAG workflow's graph JSON directly via the `hq` CLI.
+
+#### 1. Create
+
+```bash
+# Prepare body.json
+cat > /tmp/dag-create.json <<'EOF'
+{
+  "project_id": "<project-uuid>",
+  "name": "my-dag",
+  "content": "Description of this DAG",
+  "change_summary": "initial draft",
+  "graph": {
+    "nodes": [
+      { "id": "start", "type": "start", "label": "Start", "position": { "x": 0, "y": 0 } },
+      { "id": "end", "type": "end", "label": "End", "position": { "x": 300, "y": 0 } }
+    ],
+    "edges": [
+      { "id": "e1", "source": "start", "target": "end" }
+    ]
+  }
+}
+EOF
+
+# Create (saved as a draft; it does not become v1 yet)
+hq create dag-workflow /tmp/dag-create.json
+```
+
+- `name` must match `/^[a-z0-9-]+$/`
+- `graph` is optional; if omitted, the workflow is created empty and filled in later via put
+- Note the `id` in the response; it is needed for the subsequent put / publish calls
+
+#### 2. Update the draft
+
+```bash
+# Prepare body.json (all fields are optional)
+cat > /tmp/dag-update.json <<'EOF'
+{
+  "graph": {
+    "nodes": [
+      { "id": "start", "type": "start", "label": "Start" },
+      { "id": "fetch", "type": "skill", "label": "Fetch",
+        "skill_version_id": "<skill-version-uuid>", "assigned_role": "engineer" },
+      { "id": "end", "type": "end", "label": "End" }
+    ],
+    "edges": [
+      { "id": "e1", "source": "start", "target": "fetch" },
+      { "id": "e2", "source": "fetch", "target": "end" }
+    ]
+  },
+  "content": "Description updated",
+  "change_summary": "add fetch skill node"
+}
+EOF
+
+hq put dag-workflow <dag-workflow-id> /tmp/dag-update.json
+```
+
+- Draft saves run no semantic validation (structural checks only), so an unfinished graph can still be saved.
+- The `hq` CLI validates JSON syntax locally before sending; if the file is broken it stops with `Error: invalid JSON syntax in ...` without calling the API.
+
+#### 3. Publish (with validation)
+
+```bash
+hq publish dag-workflow <dag-workflow-id>
+```
+
+- Runs full validation of the current draft via `validateDagGraph` (duplicate node IDs, edge reference integrity, cycles, required fields, etc.)
+- On success, a new version is appended to `dag_workflow_versions` and `current_version_id` is updated
+- On failure, a 400 with `{ error, details: [...] }` is returned; inspect the error details, fix the draft, and retry
+- Once published, the workflow becomes a target for execution by `dag-step-poller`
+
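The publish-time graph validation described above includes cycle detection over the nodes/edges. As an aside, that check can be sketched with Kahn's algorithm; this is an illustrative sketch, not the package's actual `validateDagGraph` implementation, and `hasCycle` is a hypothetical name.

```javascript
// Cycle check over a { nodes, edges } graph via Kahn's algorithm:
// repeatedly remove zero-in-degree nodes; leftovers imply a cycle.
function hasCycle(graph) {
  const indeg = new Map(graph.nodes.map(n => [n.id, 0]))
  for (const e of graph.edges) indeg.set(e.target, (indeg.get(e.target) || 0) + 1)
  const queue = [...indeg].filter(([, d]) => d === 0).map(([id]) => id)
  let visited = 0
  while (queue.length) {
    const id = queue.pop()
    visited++
    for (const e of graph.edges) {
      if (e.source !== id) continue
      indeg.set(e.target, indeg.get(e.target) - 1)
      if (indeg.get(e.target) === 0) queue.push(e.target)
    }
  }
  // If some nodes were never reduced to in-degree 0, a cycle exists.
  return visited < graph.nodes.length
}
```

A graph such as `start → end` passes; adding an edge `end → start` would be rejected at publish time.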
+#### 4. Fetch-and-edit round trip for an existing workflow
+
+```bash
+# Download the current state
+hq fetch dag-workflow <dag-workflow-id> > /tmp/current.json
+
+# Use jq to pull out just the graph and build the update body
+jq '{ graph: (.draft_graph // .current_version.graph), content: (.draft_content // .current_version.content), change_summary: "…" }' \
+  /tmp/current.json > /tmp/dag-update.json
+
+# Edit (modify the graph with an editor or an AI)
+# ...
+
+# Update → publish
+hq put dag-workflow <dag-workflow-id> /tmp/dag-update.json
+hq publish dag-workflow <dag-workflow-id>
+```
+
+#### Permissions and caveats
+
+- Writes (create / put / publish) are **PM role only**; engineer / accountant roles get a 403. Reads (fetch) are allowed for all members.
+- Draft saves are lightweight (structural checks only), which makes them well suited to building up the graph incrementally.
+- Semantic errors are detected all at once at publish time, so use draft saves for work in progress and publish for the final result.
+
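The "edit" step of the round trip above can also be done programmatically. A minimal sketch, assuming the graph shape from the create/update examples; `insertNodeBetween` is a hypothetical helper, not part of the package:

```javascript
// Splice a new node between two directly connected nodes:
// reroute the existing edge to the new node and add a follow-on edge.
function insertNodeBetween(graph, sourceId, targetId, newNode) {
  const edge = graph.edges.find(e => e.source === sourceId && e.target === targetId)
  if (!edge) throw new Error(`no edge ${sourceId} -> ${targetId}`)
  graph.nodes.push(newNode)
  edge.target = newNode.id
  graph.edges.push({ id: `e-${newNode.id}`, source: newNode.id, target: targetId })
  return graph
}
```

The result is then written back with `hq put` and validated with `hq publish`.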
+### Execution flow (runtime side, for reference)
+
+```
+HQ side:
+  User starts a DAG execution
+  → a dag_executions record is created
+  → the initial nodes (downstream of start) are generated with status=pending
+
+Minion side (dag-step-poller, every 30 seconds):
+  1. GET /api/dag/minion/pending-nodes
+     → fetch pending nodes that match this minion's role and whose dependencies are resolved
+  2. POST /api/dag/minion/claim-node
+     → claim the node with optimistic locking (a 409 means another minion is already on it)
+  3. Run the skill (skill node) or the transformation (transform node)
+  4. POST /api/dag/minion/node-complete
+     → report output_data and output_summary
+
+HQ side (cascade engine):
+  - completed → downstream nodes are generated as pending (once dependencies resolve)
+  - fan_out node → expand the template N times, attaching a scope_path to each instance
+  - join node → wait for all fan-out instances to finish, then aggregate
+  - conditional node → evaluate the condition to choose the branch
+  - review node → stop at review_pending, resume on review approval
+```
+
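The selection rule in step 1 of the minion-side loop above (pending, role match, all upstream dependencies completed) can be sketched as a pure function. The shapes and the helper name are illustrative assumptions, not the server's actual code:

```javascript
// Pick the nodes a minion with the given role could claim: status must be
// pending, the role must match, and every incoming edge's source node must
// already be completed.
function selectClaimable(nodes, edges, role) {
  const statusById = new Map(nodes.map(n => [n.id, n.status]))
  return nodes.filter(n =>
    n.status === 'pending' &&
    n.assigned_role === role &&
    edges
      .filter(e => e.target === n.id)
      .every(e => statusById.get(e.source) === 'completed')
  )
}
```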
+### Skill output_data conventions
+
+To pass structured data to downstream nodes, a skill used in a DAG skill node should include an "## Output Data" section with a JSON code block in its body. The minion's `dag-node-executor` extracts this block and attaches it as `output_data`.
+
+Example (tail of a skill execution result):
+
+````markdown
+## Execution Summary
+
+Retrieved 3 search results.
+
+## Output Data
+
+```json
+{
+  "items": [
+    { "id": "a1", "title": "Item A" },
+    { "id": "a2", "title": "Item B" },
+    { "id": "a3", "title": "Item C" }
+  ],
+  "count": 3
+}
+```
+````
+
+- If the "## Output Data" section is missing or the JSON fails to parse, it falls back to `{ _raw: "<full output>" }`
+- On failure (`status: failed`), `output_data` is treated as an empty object `{}`
+
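The extraction-with-fallback behavior described above can be sketched roughly as follows. This is an illustrative approximation; the real `dag-node-executor` may differ in details, and `extractOutputData` is a hypothetical name:

```javascript
// Pull the JSON block under "## Output Data" out of a skill's markdown
// output; fall back to { _raw } when the section or valid JSON is absent.
function extractOutputData(markdown) {
  const section = markdown.split(/^## Output Data\s*$/m)[1]
  if (section === undefined) return { _raw: markdown }
  const fence = section.match(/```json\s*\n([\s\S]*?)\n```/)
  if (!fence) return { _raw: markdown }
  try {
    return JSON.parse(fence[1])
  } catch {
    return { _raw: markdown }
  }
}
```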
+### Fan-out / Join behavior
+
+A fan-out node extracts an array from its input and expands the template sub-graph in parallel, once per array element.
+
+```
+Input: input_data = { "items": [A, B, C] }
+fan_out_source = ".items"
+  ↓
+3 instances are generated:
+  scope_path="<fan_out_id>:0", input=A, the corresponding template nodes appear as pending
+  scope_path="<fan_out_id>:1", input=B, ...
+  scope_path="<fan_out_id>:2", input=C, ...
+  ↓
+After each instance's template run completes, the matching join node aggregates:
+  join_mode=all:      completes when all succeed
+  join_mode=any:      completes when any one succeeds
+  join_mode=majority: completes when a majority succeed
+  on_failure=collect:  collect results including failures
+  on_failure=ignore:   exclude failures
+  on_failure=fail_all: a single failure fails the join too
+  ↓
+The aggregated result is passed downstream as the join's output_data
+```
+
+From the minion's point of view, skill/transform nodes inside a fan-out also come back from `pending-nodes` as usual (the only difference is a non-empty `scope_path`).
+
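One plausible reading of the join_mode / on_failure rules above, as a sketch; this is not the server's cascade engine, the exact interaction of the two settings is an assumption, and `aggregateJoin` is a hypothetical name:

```javascript
// Aggregate fan-out instance results per join_mode and on_failure.
function aggregateJoin(results, joinMode = 'all', onFailure = 'collect') {
  const ok = results.filter(r => r.status === 'completed')
  const failed = results.filter(r => r.status === 'failed')
  // fail_all: any failure fails the join outright.
  if (onFailure === 'fail_all' && failed.length > 0) {
    return { status: 'failed', outputs: [] }
  }
  const succeeded =
    joinMode === 'all' ? failed.length === 0 :
    joinMode === 'any' ? ok.length > 0 :
    /* majority */ ok.length > results.length / 2
  // collect keeps every result; ignore drops the failed ones.
  const outputs = (onFailure === 'ignore' ? ok : results).map(r => r.output_data)
  return { status: succeeded ? 'completed' : 'failed', outputs }
}
```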
+### Transform nodes
+
+A transform node is a lightweight node that uses an LLM to convert input_data into output_data. Write the transformation instruction in natural language in `transform_instruction`. On the minion side, a temporary skill whose body is the `transform_instruction` is assembled and executed.
+
+```
+transform_instruction: "Remove the entry whose title is 'Item B' from the items array and return the rest"
+input_data:  { "items": [{ "title": "Item A" }, { "title": "Item B" }, { "title": "Item C" }] }
+  ↓ (LLM)
+output_data: { "items": [{ "title": "Item A" }, { "title": "Item C" }] }
+```
+
+### Review nodes and revisions
+
+A review node is a review gate: the downstream cascade stops at `review_status=review_pending`. The reviewer then either:
+- sets `approved`, and execution proceeds downstream along `approved`-type edges
+- sets `revision_requested`, and execution returns to the rework target along `revision`-type edges
+
+Rework for legacy linear pipelines is handled by the `revision-watcher` daemon plus `/api/minion/pending-revisions` and `/api/minion/revision-reset`. DAG rework is handled automatically by the server-side cascade.
+
+### Debugging
+
+```bash
+# Check which pending DAG nodes this minion can pick up
+curl -s "$HQ_URL/api/dag/minion/pending-nodes" \
+  -H "Authorization: Bearer $API_TOKEN" | jq
+
+# DAG execution details (UI-oriented but usable for debugging; requires session auth, so normally viewed from the HQ dashboard)
+# There is currently no way for a minion to fetch execution details directly; work from what pending-nodes shows
+
+# Check the dag-step-poller logs
+tail -f ~/.minion/logs/agent.log | grep '\[DAG'
+```
+
 ---
 
 ## Working with project context
package/linux/bin/hq
CHANGED
@@ -10,10 +10,15 @@
 # API_TOKEN - Minion API token for authentication
 #
 # Usage:
-#   hq fetch skill <name>
-#   hq fetch workflow <name>
-#   hq fetch project <id>
-#   hq fetch project-context <id>
+#   hq fetch skill <name>                 - Get skill details (content, description, files)
+#   hq fetch workflow <name>              - Get workflow details (pipeline, cron, etc.)
+#   hq fetch project <id>                 - Get project info (name, description, role)
+#   hq fetch project-context <id>         - Get project context (shared Markdown document)
+#   hq fetch dag-workflow <id>            - Get DAG workflow details (graph, version)
+#   hq fetch dag-execution <id>           - Get DAG execution details (nodes, status)
+#   hq create dag-workflow <body.json>    - Create a DAG workflow (PM only). Body: {project_id, name, graph?, ...}
+#   hq put dag-workflow <id> <body.json>  - Update DAG workflow draft (PM only). Body: {graph?, content?, ...}
+#   hq publish dag-workflow <id>          - Publish the draft as a new version (PM only, validated)
 
 set -euo pipefail
 
@@ -39,12 +44,36 @@ format_json() {
   fi
 }
 
+# Validate that a file contains syntactically valid JSON. Exits with error on failure.
+validate_json_file() {
+  local file="$1"
+  if [ ! -f "$file" ]; then
+    echo "Error: file not found: $file" >&2
+    exit 1
+  fi
+  if command -v jq &>/dev/null; then
+    if ! jq empty "$file" 2>/dev/null; then
+      echo "Error: invalid JSON syntax in $file" >&2
+      jq empty "$file" || true
+      exit 1
+    fi
+  elif command -v python3 &>/dev/null; then
+    if ! python3 -c "import json,sys; json.load(open(sys.argv[1]))" "$file" 2>/dev/null; then
+      echo "Error: invalid JSON syntax in $file" >&2
+      python3 -c "import json,sys; json.load(open(sys.argv[1]))" "$file" || true
+      exit 1
+    fi
+  else
+    echo "Error: neither jq nor python3 is available to validate JSON" >&2
+    exit 1
+  fi
+}
+
 fetch_resource() {
   local url="$1"
   local response
   local http_code
 
-  # Fetch with HTTP status code
   response=$(curl -s -w "\n%{http_code}" -H "Authorization: Bearer $API_TOKEN" "$url")
   http_code=$(echo "$response" | tail -1)
   body=$(echo "$response" | sed '$d')
@@ -58,6 +87,68 @@ fetch_resource() {
   fi
 }
 
+# Send a JSON request body from a file. Method is POST or PUT.
+send_json_request() {
+  local method="$1"
+  local url="$2"
+  local file="$3"
+  local response
+  local http_code
+
+  response=$(curl -s -w "\n%{http_code}" -X "$method" \
+    -H "Authorization: Bearer $API_TOKEN" \
+    -H "Content-Type: application/json" \
+    --data-binary "@$file" \
+    "$url")
+  http_code=$(echo "$response" | tail -1)
+  body=$(echo "$response" | sed '$d')
+
+  if [ "$http_code" -ge 200 ] && [ "$http_code" -lt 300 ]; then
+    echo "$body" | format_json
+  else
+    echo "Error: HQ API returned HTTP $http_code" >&2
+    echo "$body" >&2
+    exit 1
+  fi
+}
+
+# Send a body-less POST (e.g., publish).
+send_empty_post() {
+  local url="$1"
+  local response
+  local http_code
+
+  response=$(curl -s -w "\n%{http_code}" -X POST \
+    -H "Authorization: Bearer $API_TOKEN" \
+    -H "Content-Length: 0" \
+    "$url")
+  http_code=$(echo "$response" | tail -1)
+  body=$(echo "$response" | sed '$d')
+
+  if [ "$http_code" -ge 200 ] && [ "$http_code" -lt 300 ]; then
+    echo "$body" | format_json
+  else
+    echo "Error: HQ API returned HTTP $http_code" >&2
+    echo "$body" >&2
+    exit 1
+  fi
+}
+
+print_usage() {
+  echo "HQ API helper for minion chat" >&2
+  echo "" >&2
+  echo "Usage:" >&2
+  echo "  hq fetch skill <name>                 - Get skill details" >&2
+  echo "  hq fetch workflow <name>              - Get workflow details" >&2
+  echo "  hq fetch project <id>                 - Get project info" >&2
+  echo "  hq fetch project-context <id>         - Get project context" >&2
+  echo "  hq fetch dag-workflow <id>            - Get DAG workflow details" >&2
+  echo "  hq fetch dag-execution <id>           - Get DAG execution details" >&2
+  echo "  hq create dag-workflow <body.json>    - Create a DAG workflow (PM only)" >&2
+  echo "  hq put dag-workflow <id> <body.json>  - Update DAG workflow draft (PM only)" >&2
+  echo "  hq publish dag-workflow <id>          - Publish the draft as a new version (PM only)" >&2
+}
+
 # Main command dispatch
 case "${1:-}" in
   fetch)
@@ -65,7 +156,7 @@ case "${1:-}" in
     identifier="${3:-}"
 
     if [ -z "$resource" ] || [ -z "$identifier" ]; then
-      echo "Usage: hq fetch {skill|workflow|project|project-context} <identifier>" >&2
+      echo "Usage: hq fetch {skill|workflow|project|project-context|dag-workflow|dag-execution} <identifier>" >&2
       exit 1
     fi
 
@@ -77,7 +168,6 @@ case "${1:-}" in
         fetch_resource "$BASE_URL/workflows/$identifier"
         ;;
       project)
-        # Fetch all projects and filter by ID
         response=$(curl -s -H "Authorization: Bearer $API_TOKEN" "$BASE_URL/me/projects")
         if command -v jq &>/dev/null; then
           echo "$response" | jq --arg id "$identifier" '.projects[] | select(.id == $id)'
@@ -88,21 +178,84 @@ case "${1:-}" in
       project-context)
        fetch_resource "$BASE_URL/me/project/$identifier/context"
        ;;
+      dag-workflow)
+        fetch_resource "$BASE_URL/dag-workflows/$identifier"
+        ;;
+      dag-execution)
+        fetch_resource "$BASE_URL/dag-executions/$identifier"
+        ;;
       *)
        echo "Unknown resource: $resource" >&2
-        echo "Usage: hq fetch {skill|workflow|project|project-context} <identifier>" >&2
+        echo "Usage: hq fetch {skill|workflow|project|project-context|dag-workflow|dag-execution} <identifier>" >&2
+        exit 1
+        ;;
+    esac
+    ;;
+
+  create)
+    resource="${2:-}"
+    case "$resource" in
+      dag-workflow)
+        body_file="${3:-}"
+        if [ -z "$body_file" ]; then
+          echo "Usage: hq create dag-workflow <body.json>" >&2
+          echo "  body.json must contain at least { project_id, name } and optionally { graph, content, change_summary }" >&2
+          exit 1
+        fi
+        validate_json_file "$body_file"
+        send_json_request POST "$BASE_URL/dag-workflows" "$body_file"
+        ;;
+      *)
+        echo "Unknown create resource: $resource" >&2
+        echo "Usage: hq create dag-workflow <body.json>" >&2
        exit 1
        ;;
    esac
    ;;
+
+  put)
+    resource="${2:-}"
+    case "$resource" in
+      dag-workflow)
+        id="${3:-}"
+        body_file="${4:-}"
+        if [ -z "$id" ] || [ -z "$body_file" ]; then
+          echo "Usage: hq put dag-workflow <id> <body.json>" >&2
+          echo "  body.json may contain { graph, content, change_summary, name, is_active, maturity }" >&2
+          exit 1
+        fi
+        validate_json_file "$body_file"
+        send_json_request PUT "$BASE_URL/dag-workflows/$id" "$body_file"
+        ;;
+      *)
+        echo "Unknown put resource: $resource" >&2
+        echo "Usage: hq put dag-workflow <id> <body.json>" >&2
+        exit 1
+        ;;
+    esac
+    ;;
+
+  publish)
+    resource="${2:-}"
+    case "$resource" in
+      dag-workflow)
+        id="${3:-}"
+        if [ -z "$id" ]; then
+          echo "Usage: hq publish dag-workflow <id>" >&2
+          exit 1
+        fi
+        send_empty_post "$BASE_URL/dag-workflows/$id/publish"
+        ;;
+      *)
+        echo "Unknown publish resource: $resource" >&2
+        echo "Usage: hq publish dag-workflow <id>" >&2
+        exit 1
+        ;;
+    esac
+    ;;
+
   *)
-
-    echo "" >&2
-    echo "Usage:" >&2
-    echo "  hq fetch skill <name>           - Get skill details" >&2
-    echo "  hq fetch workflow <name>        - Get workflow details" >&2
-    echo "  hq fetch project <id>           - Get project info" >&2
-    echo "  hq fetch project-context <id>   - Get project context" >&2
+    print_usage
    exit 1
    ;;
 esac
package/linux/minion-cli.sh
CHANGED
@@ -497,10 +497,20 @@ NVNCEOF
     supervisord)
       # Build environment line from .env values
       # Include HOME and DISPLAY since supervisord does not set them when switching user
+      #
+      # NOTE: Runtime-mutable keys (LLM_COMMAND, REFLECTION_TIME) are intentionally
+      # excluded. supervisord's environment= parser has known quirks with nested
+      # quotes (e.g. LLM_COMMAND="claude -p '{prompt}'" gets the trailing quote
+      # stripped), and values baked into the supervisord conf become stale whenever
+      # the config API updates .env at runtime. These keys are loaded exclusively
+      # by core/config.js loadEnvFile() from the .env file at process startup.
       local ENV_LINE="environment="
       local ENV_PAIRS=("HOME=\"${TARGET_HOME}\"" "DISPLAY=\":99\"")
       while IFS='=' read -r key value; do
         [[ -z "$key" || "$key" == \#* ]] && continue
+        case "$key" in
+          LLM_COMMAND|REFLECTION_TIME) continue ;;
+        esac
         ENV_PAIRS+=("${key}=\"${value}\"")
       done < /opt/minion-agent/.env
 
@@ -1028,8 +1038,12 @@ CFEOF
     SVC_HOME=$(getent passwd "$SVC_USER" | cut -d: -f6 || echo "$HOME")
   fi
   local ENV_PAIRS=("HOME=\"${SVC_HOME}\"" "DISPLAY=\":99\"")
+  # Exclude runtime-mutable keys from supervisord env (see setup path comment)
   while IFS='=' read -r key value; do
     [[ -z "$key" || "$key" == \#* ]] && continue
+    case "$key" in
+      LLM_COMMAND|REFLECTION_TIME) continue ;;
+    esac
     ENV_PAIRS+=("${key}=\"${value}\"")
   done < /opt/minion-agent/.env
   ENV_LINE+="$(IFS=,; echo "${ENV_PAIRS[*]}")"
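The filtering loop added above can be sketched in isolation. This is a simplified, POSIX-ish sketch of the same idea (the real script uses bash arrays and `[[ ... ]]`); `build_env_line` and the hard-coded HOME are illustrative:

```shell
# Read KEY=VALUE pairs from a .env file, skipping blanks, comments, and the
# runtime-mutable keys, and emit a supervisord-style environment= line.
build_env_line() {
  env_file="$1"
  line='environment=HOME="/home/minion",DISPLAY=":99"'
  while IFS='=' read -r key value; do
    case "$key" in
      ''|\#*|LLM_COMMAND|REFLECTION_TIME) continue ;;
    esac
    line="${line},${key}=\"${value}\""
  done < "$env_file"
  echo "$line"
}
```

Excluding LLM_COMMAND avoids both the supervisord nested-quote quirk and a stale copy of a value the config API can change at runtime.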
package/linux/routes/chat.js
CHANGED
@@ -22,8 +22,10 @@ const path = require('path')
 const { verifyToken } = require('../../core/lib/auth')
 const { config } = require('../../core/config')
 const chatStore = require('../../core/stores/chat-store')
+const todoStore = require('../../core/stores/todo-store')
 const { runEndOfDay } = require('../../core/lib/end-of-day')
 const { DATA_DIR } = require('../../core/lib/platform')
+const { getActivePrimary } = require('../../core/llm-plugins/lib/active')
 
 /** @type {import('child_process').ChildProcess | null} */
 let activeChatChild = null
@@ -254,6 +256,33 @@ ${indexed}`
 async function buildContextPrefix(message, context, sessionId) {
   const parts = []
 
+  // Re-inject unfinished todos tied to this session. This is how we survive
+  // context compaction: even if Claude forgot the plan, the outstanding
+  // todos are re-shown on the next turn. Todos past MAX_INJECTION_COUNT are
+  // skipped by the store to prevent infinite loops.
+  if (sessionId) {
+    parts.push(
+      `[Current chat session ID] ${sessionId}`,
+      'When creating a new todo, include this value as `session_id` (this makes it eligible for automatic re-display across compaction).',
+      ''
+    )
+    const activeTodos = todoStore.listActiveForSession(sessionId)
+    if (activeTodos.length > 0) {
+      parts.push(
+        '[Unfinished todos (originating from this session)]',
+        'The following todos are still unfinished. Before starting, check whether each one is already done;',
+        'if done, update it to status=done via `PUT /api/todos/:id`, otherwise continue working on it.',
+        ''
+      )
+      for (const t of activeTodos) {
+        const desc = t.description ? ` — ${t.description}` : ''
+        parts.push(`- [${t.id}] (${t.status}/${t.priority}) ${t.title}${desc}`)
+      }
+      parts.push('')
+      todoStore.markInjected(activeTodos.map(t => t.id))
+    }
+  }
+
   // Tell the LLM how to access memory and daily logs via API
   if (!sessionId) {
     const port = require('../../core/config').config.AGENT_PORT
@@ -336,6 +365,45 @@ async function buildContextPrefix(message, context, sessionId) {
         )
       }
       break
+    case 'dag-workflow':
+      if (context.projectId && context.dagWorkflowId) {
+        parts.push(
+          `The user is viewing the editor/details of DAG workflow (ID: ${context.dagWorkflowId}) on the HQ dashboard.`,
+          `DAG workflows express dependencies between skills in node/edge form and support fan-out / join / conditional / transform / review.`,
+          `To fetch the DAG workflow, run:`,
+          `  hq fetch dag-workflow ${context.dagWorkflowId}`,
+          `Project context:`,
+          `  hq fetch project-context ${context.projectId}`,
+          `With the PM role, you can edit the graph JSON directly:`,
+          `  hq put dag-workflow ${context.dagWorkflowId} <body.json>   # update the draft (structural checks only)`,
+          `  hq publish dag-workflow ${context.dagWorkflowId}           # publish the draft as a new version (full validation)`,
+          `To create a new one: hq create dag-workflow <body.json>`,
+          `For details on DAG structure (nodes/edges/node types/scope_path, etc.) and the execution flow, see the "DAG Workflows" sections of ~/.minion/docs/api-reference.md and ~/.minion/docs/task-guides.md.`,
+          `Answer based on what you fetch.`
+        )
+      }
+      break
+    case 'dag-execution':
+      if (context.projectId && context.dagExecutionId) {
+        parts.push(
+          `The user is viewing the details of DAG execution (ID: ${context.dagExecutionId}) on the HQ dashboard.`,
+          `To fetch the execution details (graph_snapshot plus the state of each node_executions entry), run:`,
+          `  hq fetch dag-execution ${context.dagExecutionId}`
+        )
+        if (context.dagWorkflowId) {
+          parts.push(
+            `The corresponding DAG workflow definition:`,
+            `  hq fetch dag-workflow ${context.dagWorkflowId}`
+          )
+        }
+        parts.push(
+          `Project context:`,
+          `  hq fetch project-context ${context.projectId}`,
+          `For the meaning of node states (pending/waiting/running/completed/failed/skipped, review_status, scope_path, etc.), see the "DAG Workflows" section of ~/.minion/docs/api-reference.md.`,
+          `Answer based on what you fetch.`
+        )
+      }
+      break
     }
   }
 
@@ -362,7 +430,76 @@ function getLlmBinary() {
  * Tracks block types to correctly forward tool_use vs text events
  * and counts turns for session management.
  */
-function streamLlmResponse(res, prompt, sessionId, workspaceId, originalMessage) {
+async function streamLlmResponse(res, prompt, sessionId, workspaceId, originalMessage) {
+  // Plugin system path: Primary is set → delegate to plugin
+  const primary = getActivePrimary()
+  if (primary) {
+    return streamViaPlugin(primary, res, prompt, sessionId, workspaceId, originalMessage)
+  }
+  return streamViaLegacyLlmCommand(res, prompt, sessionId, workspaceId, originalMessage)
+}
+
+async function streamViaPlugin(plugin, res, prompt, sessionId, workspaceId, originalMessage) {
+  const input = { prompt }
+  const activeRef = { current: null }
+  activeChatChild = { kill: () => activeRef.current?.kill?.('SIGTERM') }
+
+  let fullResponse = ''
+  let resolvedSessionId = sessionId || null
+  let turnCount = 0
+
+  const emit = event => {
+    if (event.type === 'session') {
+      resolvedSessionId = event.sessionId
+    } else if (event.type === 'delta') {
+      fullResponse += event.content
+      res.write(`data: ${JSON.stringify({ type: 'delta', content: event.content })}\n\n`)
+    } else if (event.type === 'text') {
+      fullResponse += event.content
+      res.write(`data: ${JSON.stringify({ type: 'text', content: event.content })}\n\n`)
+      turnCount++
+    } else {
+      res.write(`data: ${JSON.stringify(event)}\n\n`)
+    }
+  }
+
+  res.on('close', () => { activeRef.current?.kill?.('SIGTERM') })
+
+  try {
+    let output
+    if (plugin.capabilities.streaming && typeof plugin.stream === 'function') {
+      output = await plugin.stream(input, emit, { resumeSessionId: sessionId, activeChildRef: activeRef })
+    } else {
+      output = await plugin.invoke(input)
+      if (output.text) {
+        fullResponse = output.text
+        res.write(`data: ${JSON.stringify({ type: 'text', content: output.text })}\n\n`)
+        turnCount = 1
+      }
+      if (output.error) {
+        res.write(`data: ${JSON.stringify({ type: 'error', error: output.error.message })}\n\n`)
+      }
+    }
+    resolvedSessionId = output?.metadata?.sessionId || resolvedSessionId
+  } finally {
+    activeChatChild = null
+  }
+
+  if (resolvedSessionId) {
+    if (!sessionId) {
+      await chatStore.addMessage(resolvedSessionId, { role: 'user', content: originalMessage || prompt }, undefined, workspaceId)
+    }
+    if (fullResponse) {
+      await chatStore.addMessage(resolvedSessionId, { role: 'assistant', content: fullResponse }, turnCount, workspaceId)
+    }
+  }
+
+  const session = await chatStore.load(workspaceId)
+  const totalTurnCount = session?.turn_count || turnCount
+  res.write(`data: ${JSON.stringify({ type: 'done', session_id: resolvedSessionId, turn_count: totalTurnCount })}\n\n`)
+}
+
+function streamViaLegacyLlmCommand(res, prompt, sessionId, workspaceId, originalMessage) {
   return new Promise((resolve, reject) => {
     const binaryName = getLlmBinary()
     if (!binaryName) {
@@ -587,7 +724,17 @@ function streamLlmResponse(res, prompt, sessionId, workspaceId, originalMessage)
  * @param {string} prompt
  * @returns {Promise<string>} The result text
  */
-function runQuickLlmCall(prompt) {
+async function runQuickLlmCall(prompt) {
+  const primary = getActivePrimary()
+  if (primary) {
+    const output = await primary.invoke({ prompt, model: primary.name === 'claude' ? 'haiku' : undefined, timeoutMs: 30000 })
+    if (output.error) throw new Error(output.error.message)
+    return output.text || ''
+  }
+  return runQuickLlmCallLegacy(prompt)
+}
+
+function runQuickLlmCallLegacy(prompt) {
  return new Promise((resolve, reject) => {
    const binaryName = getLlmBinary()
    if (!binaryName) {
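The todo re-injection comment in chat.js above says the store skips todos that have already been injected MAX_INJECTION_COUNT times. A sketch of what such a guard might look like; the field names, the limit, and passing todos in as an argument are all assumptions, not todo-store.js's actual code:

```javascript
// Active todos for a session, excluding ones already re-shown too often,
// so a never-finished todo cannot be re-injected forever.
const MAX_INJECTION_COUNT = 5 // assumed limit
function listActiveForSession(todos, sessionId) {
  return todos.filter(t =>
    t.session_id === sessionId &&
    t.status !== 'done' &&
    (t.injection_count || 0) < MAX_INJECTION_COUNT
  )
}
```

Paired with `markInjected` incrementing the count after each display, this bounds how many turns a stale todo can occupy in the context prefix.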