@exaudeus/workrail 0.12.0 → 0.13.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/application/services/enhanced-loop-validator.js +3 -3
- package/dist/application/services/step-output-decoder.d.ts +6 -0
- package/dist/application/services/step-output-decoder.js +49 -0
- package/dist/application/services/validation-engine.d.ts +9 -0
- package/dist/application/services/validation-engine.js +142 -18
- package/dist/application/services/workflow-interpreter.d.ts +1 -1
- package/dist/application/services/workflow-interpreter.js +147 -81
- package/dist/application/services/workflow-service.d.ts +2 -0
- package/dist/application/services/workflow-service.js +3 -3
- package/dist/application/use-cases/validate-step-output.d.ts +2 -0
- package/dist/di/container.js +19 -3
- package/dist/di/tokens.d.ts +3 -0
- package/dist/di/tokens.js +3 -0
- package/dist/domain/execution/state.d.ts +6 -6
- package/dist/domain/workflow-id-policy.d.ts +17 -0
- package/dist/domain/workflow-id-policy.js +57 -0
- package/dist/infrastructure/storage/enhanced-multi-source-workflow-storage.js +33 -6
- package/dist/infrastructure/storage/file-workflow-storage.js +3 -1
- package/dist/infrastructure/storage/schema-validating-workflow-storage.js +13 -8
- package/dist/manifest.json +261 -109
- package/dist/mcp/handlers/v2-execution.js +62 -79
- package/dist/mcp/handlers/v2-workflow.js +14 -9
- package/dist/mcp/server.js +53 -48
- package/dist/mcp/tool-descriptions.js +15 -15
- package/dist/mcp/tools.js +14 -14
- package/dist/mcp/types/tool-description-types.d.ts +1 -1
- package/dist/mcp/types/tool-description-types.js +5 -5
- package/dist/mcp/types.d.ts +2 -0
- package/dist/v2/durable-core/constants.d.ts +15 -0
- package/dist/v2/durable-core/constants.js +18 -0
- package/dist/v2/durable-core/domain/ack-advance-append-plan.d.ts +32 -0
- package/dist/v2/durable-core/domain/ack-advance-append-plan.js +95 -0
- package/dist/v2/durable-core/domain/loop-runtime.d.ts +50 -0
- package/dist/v2/durable-core/domain/loop-runtime.js +95 -0
- package/dist/v2/durable-core/domain/notes-markdown.d.ts +4 -0
- package/dist/v2/durable-core/domain/notes-markdown.js +46 -0
- package/dist/v2/durable-core/domain/outputs.d.ts +12 -0
- package/dist/v2/durable-core/domain/outputs.js +18 -0
- package/dist/v2/durable-core/schemas/execution-snapshot/execution-snapshot.v1.d.ts +113 -113
- package/dist/v2/durable-core/schemas/execution-snapshot/execution-snapshot.v1.js +11 -10
- package/dist/v2/durable-core/schemas/export-bundle/index.d.ts +7129 -0
- package/dist/v2/durable-core/schemas/export-bundle/index.js +82 -0
- package/dist/v2/durable-core/schemas/lib/decision-trace-ref.d.ts +80 -0
- package/dist/v2/durable-core/schemas/lib/decision-trace-ref.js +38 -0
- package/dist/v2/durable-core/schemas/lib/dedupe-key.d.ts +8 -0
- package/dist/v2/durable-core/schemas/lib/dedupe-key.js +28 -0
- package/dist/v2/durable-core/schemas/lib/utf8-bounded-string.d.ts +6 -0
- package/dist/v2/durable-core/schemas/lib/utf8-bounded-string.js +12 -0
- package/dist/v2/durable-core/schemas/session/events.d.ts +158 -12
- package/dist/v2/durable-core/schemas/session/events.js +47 -20
- package/dist/v2/durable-core/schemas/session/manifest.d.ts +1 -1
- package/dist/v2/durable-core/schemas/session/manifest.js +6 -1
- package/dist/v2/durable-core/schemas/session/preferences.d.ts +5 -0
- package/dist/v2/durable-core/schemas/session/preferences.js +6 -0
- package/dist/v2/durable-core/schemas/session/session-health.d.ts +3 -0
- package/dist/v2/durable-core/tokens/index.d.ts +0 -1
- package/dist/v2/durable-core/tokens/index.js +1 -4
- package/dist/v2/durable-core/tokens/token-codec.d.ts +3 -2
- package/dist/v2/durable-core/tokens/token-codec.js +12 -6
- package/dist/v2/durable-core/tokens/token-signer.d.ts +3 -2
- package/dist/v2/durable-core/tokens/token-signer.js +8 -9
- package/dist/v2/infra/local/base64url/index.d.ts +5 -0
- package/dist/v2/infra/local/base64url/index.js +48 -0
- package/dist/v2/infra/local/fs/index.js +8 -4
- package/dist/v2/infra/local/keyring/index.d.ts +5 -1
- package/dist/v2/infra/local/keyring/index.js +41 -32
- package/dist/v2/infra/local/pinned-workflow-store/index.d.ts +3 -1
- package/dist/v2/infra/local/pinned-workflow-store/index.js +50 -62
- package/dist/v2/infra/local/random-entropy/index.d.ts +4 -0
- package/dist/v2/infra/local/random-entropy/index.js +10 -0
- package/dist/v2/infra/local/session-lock/index.d.ts +3 -1
- package/dist/v2/infra/local/session-lock/index.js +4 -3
- package/dist/v2/infra/local/session-store/index.js +26 -4
- package/dist/v2/infra/local/snapshot-store/index.js +20 -25
- package/dist/v2/infra/local/time-clock/index.d.ts +5 -0
- package/dist/v2/infra/local/time-clock/index.js +12 -0
- package/dist/v2/infra/local/utf8/index.d.ts +5 -0
- package/dist/v2/infra/local/utf8/index.js +12 -0
- package/dist/v2/ports/base64url.port.d.ts +12 -0
- package/dist/v2/ports/base64url.port.js +2 -0
- package/dist/v2/ports/random-entropy.port.d.ts +3 -0
- package/dist/v2/ports/random-entropy.port.js +2 -0
- package/dist/v2/ports/time-clock.port.d.ts +4 -0
- package/dist/v2/ports/time-clock.port.js +2 -0
- package/dist/v2/ports/utf8.port.d.ts +3 -0
- package/dist/v2/ports/utf8.port.js +2 -0
- package/dist/v2/projections/node-outputs.js +28 -11
- package/dist/v2/projections/preferences.d.ts +1 -2
- package/dist/v2/projections/preferences.js +11 -4
- package/dist/v2/projections/run-dag.js +40 -28
- package/dist/v2/projections/run-status-signals.d.ts +1 -2
- package/dist/v2/usecases/execution-session-gate.js +33 -34
- package/package.json +3 -1
- package/spec/workflow.schema.json +2 -2
- package/workflows/coding-task-workflow-agentic.json +213 -50
- package/workflows/relocation-workflow-us.json +430 -0
- package/dist/v2/durable-core/tokens/base64url.d.ts +0 -7
- package/dist/v2/durable-core/tokens/base64url.js +0 -16
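The file list shows a recurring theme of this release: new platform ports (`base64url`, `utf8`, `time-clock`, `random-entropy`) with local Node adapters, wired through DI into the token code. A hedged sketch of that ports-and-adapters split follows; the names and shapes below are assumptions for illustration, not the package's actual API.

```javascript
// Hedged sketch of a platform-neutral base64url "port" with a local Node
// adapter, the pattern the new v2/ports and v2/infra/local files suggest.
// Names here are illustrative assumptions, not the package's real exports.
const nodeBase64UrlAdapter = {
  encode(bytes) {
    // Node's Buffer supports the unpadded 'base64url' encoding directly.
    return Buffer.from(bytes).toString('base64url');
  },
  decode(text) {
    return new Uint8Array(Buffer.from(text, 'base64url'));
  },
};

// A token signer written against the port never touches Buffer itself,
// which is what lets the durable core stay runtime-agnostic.
function signTokenSketch(prefix, payloadBytes, signatureHex, base64url) {
  return `${prefix}${base64url.encode(payloadBytes)}.${signatureHex}`;
}

const payload = new TextEncoder().encode('{"tokenKind":"state"}');
console.log(signTokenSketch('st.v1.', payload, 'deadbeef', nodeBase64UrlAdapter));
```

This matches the shape of the handler changes below, where `base64url` is destructured from `ctx.v2` and passed into `parseTokenV1`, `verifyTokenSignatureV1`, and `signTokenV1` instead of being imported locally.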
package/dist/mcp/handlers/v2-execution.js
CHANGED

@@ -49,6 +49,10 @@ const neverthrow_1 = require("neverthrow");
 const v1_to_v2_shim_js_1 = require("../../v2/read-only/v1-to-v2-shim.js");
 const hashing_js_1 = require("../../v2/durable-core/canonical/hashing.js");
 const jcs_js_1 = require("../../v2/durable-core/canonical/jcs.js");
+const constants_js_1 = require("../../v2/durable-core/constants.js");
+const notes_markdown_js_1 = require("../../v2/durable-core/domain/notes-markdown.js");
+const outputs_js_1 = require("../../v2/durable-core/domain/outputs.js");
+const ack_advance_append_plan_js_1 = require("../../v2/durable-core/domain/ack-advance-append-plan.js");
 const workflow_source_js_1 = require("../../types/workflow-source.js");
 const workflow_definition_js_1 = require("../../types/workflow-definition.js");
 const workflow_compiler_js_1 = require("../../application/services/workflow-compiler.js");
@@ -57,10 +61,10 @@ const v2_execution_helpers_js_1 = require("./v2-execution-helpers.js");
 function normalizeTokenErrorMessage(message) {
 return message.split(os.homedir()).join('~');
 }
-const MAX_CONTEXT_BYTES_V2 =
+const MAX_CONTEXT_BYTES_V2 = constants_js_1.MAX_CONTEXT_BYTES;
 function validateJsonValueOrIssue(value, path, depth, seen) {
-if (depth >
-return { kind: 'too_deep', path, maxDepth:
+if (depth > constants_js_1.MAX_CONTEXT_DEPTH)
+return { kind: 'too_deep', path, maxDepth: constants_js_1.MAX_CONTEXT_DEPTH };
 if (value === null)
 return null;
 const t = typeof value;
@@ -210,6 +214,7 @@ function isInternalError(e) {
 kind === 'workflow_hash_mismatch' ||
 kind === 'missing_snapshot' ||
 kind === 'no_pending_step' ||
+kind === 'invariant_violation' ||
 kind === 'advance_apply_failed' ||
 kind === 'advance_next_failed');
 }
@@ -222,6 +227,8 @@ function mapInternalErrorToToolError(e) {
 case 'missing_snapshot':
 case 'no_pending_step':
 return internalError('Incomplete execution state.', 'Retry; if this persists, treat as invariant violation.');
+case 'invariant_violation':
+return internalError(normalizeTokenErrorMessage(e.message), 'Treat as invariant violation.');
 case 'advance_apply_failed':
 case 'advance_next_failed':
 return internalError(normalizeTokenErrorMessage(e.message), 'Retry; if this persists, treat as invariant violation.');
@@ -231,12 +238,13 @@ function mapInternalErrorToToolError(e) {
 }
 }
 function replayFromRecordedAdvance(args) {
-const { recordedEvent, truth, sessionId, runId, nodeId, workflowHash, attemptId, inputStateToken, inputAckToken, pinnedWorkflow, snapshotStore, keyring, hmac, } = args;
+const { recordedEvent, truth, sessionId, runId, nodeId, workflowHash, attemptId, inputStateToken, inputAckToken, pinnedWorkflow, snapshotStore, keyring, hmac, base64url, } = args;
 const checkpointTokenRes = signTokenOrErr({
 unsignedPrefix: 'chk.v1.',
 payload: { tokenVersion: 1, tokenKind: 'checkpoint', sessionId, runId, nodeId, attemptId },
 keyring,
 hmac,
+base64url,
 });
 if (checkpointTokenRes.isErr()) {
 return (0, neverthrow_1.errAsync)({ kind: 'token_signing_failed', cause: checkpointTokenRes.error });
@@ -293,6 +301,7 @@ function replayFromRecordedAdvance(args) {
 payload: { tokenVersion: 1, tokenKind: 'ack', sessionId, runId, nodeId: toNodeIdBranded, attemptId: nextAttemptId },
 keyring,
 hmac,
+base64url,
 });
 if (nextAckTokenRes.isErr()) {
 return (0, neverthrow_1.errAsync)({ kind: 'token_signing_failed', cause: nextAckTokenRes.error });
@@ -302,6 +311,7 @@ function replayFromRecordedAdvance(args) {
 payload: { tokenVersion: 1, tokenKind: 'checkpoint', sessionId, runId, nodeId: toNodeIdBranded, attemptId: nextAttemptId },
 keyring,
 hmac,
+base64url,
 });
 if (nextCheckpointTokenRes.isErr()) {
 return (0, neverthrow_1.errAsync)({ kind: 'token_signing_failed', cause: nextCheckpointTokenRes.error });
@@ -311,6 +321,7 @@ function replayFromRecordedAdvance(args) {
 payload: { tokenVersion: 1, tokenKind: 'state', sessionId, runId, nodeId: toNodeIdBranded, workflowHash },
 keyring,
 hmac,
+base64url,
 });
 if (nextStateTokenRes.isErr()) {
 return (0, neverthrow_1.errAsync)({ kind: 'token_signing_failed', cause: nextStateTokenRes.error });
@@ -381,74 +392,42 @@ function advanceAndRecord(args) {
 const evtEdgeCreated = `evt_${(0, crypto_1.randomUUID)()}`;
 const hasChildren = truth.events.some((e) => e.kind === 'edge_created' && e.data.fromNodeId === String(nodeId));
 const causeKind = hasChildren ? 'non_tip_advance' : 'intentional_fork';
-const baseEvents = [
-{
-v: 1,
-eventId: evtAdvanceRecorded,
-eventIndex: nextEventIndex,
-sessionId,
-kind: 'advance_recorded',
-dedupeKey,
-scope: { runId, nodeId },
-data: {
-attemptId,
-intent: 'ack_pending',
-outcome: { kind: 'advanced', toNodeId },
-},
-},
-{
-v: 1,
-eventId: evtNodeCreated,
-eventIndex: nextEventIndex + 1,
-sessionId,
-kind: 'node_created',
-dedupeKey: `node_created:${sessionId}:${runId}:${toNodeId}`,
-scope: { runId, nodeId: toNodeId },
-data: { nodeKind: 'step', parentNodeId: String(nodeId), workflowHash, snapshotRef: newSnapshotRef },
-},
-{
-v: 1,
-eventId: evtEdgeCreated,
-eventIndex: nextEventIndex + 2,
-sessionId,
-kind: 'edge_created',
-dedupeKey: `edge_created:${sessionId}:${runId}:${String(nodeId)}->${toNodeId}:acked_step`,
-scope: { runId },
-data: {
-edgeKind: 'acked_step',
-fromNodeId: String(nodeId),
-toNodeId,
-cause: { kind: causeKind, eventId: evtAdvanceRecorded },
-},
-},
-];
 const outputId = (0, index_js_1.asOutputId)(`out_recap_${String(attemptId)}`);
-const
+const outputsToAppend = inputOutput?.notesMarkdown
 ? [
-...baseEvents,
 {
-
-
-
-
-
-dedupeKey: `node_output_appended:${sessionId}:${outputId}`,
-scope: { runId, nodeId },
-data: {
-outputId: `out_recap_${attemptId}`,
-outputChannel: 'recap',
-payload: {
-payloadKind: 'notes',
-notesMarkdown: inputOutput.notesMarkdown.slice(0, 4096),
-},
+outputId: String(outputId),
+outputChannel: 'recap',
+payload: {
+payloadKind: 'notes',
+notesMarkdown: (0, notes_markdown_js_1.toNotesMarkdownV1)(inputOutput.notesMarkdown),
 },
 },
 ]
-:
-
-
-
+: [];
+const normalizedOutputs = (0, outputs_js_1.normalizeOutputsForAppend)(outputsToAppend);
+const outputEventIds = normalizedOutputs.map(() => `evt_${(0, crypto_1.randomUUID)()}`);
+const planRes = (0, ack_advance_append_plan_js_1.buildAckAdvanceAppendPlanV1)({
+sessionId: String(sessionId),
+runId: String(runId),
+fromNodeId: String(nodeId),
+workflowHash,
+attemptId: String(attemptId),
+nextEventIndex,
+toNodeId,
+snapshotRef: newSnapshotRef,
+causeKind,
+minted: {
+advanceRecordedEventId: evtAdvanceRecorded,
+nodeCreatedEventId: evtNodeCreated,
+edgeCreatedEventId: evtEdgeCreated,
+outputEventIds,
+},
+outputsToAppend,
 });
+if (planRes.isErr())
+return errAsync({ kind: 'invariant_violation', message: planRes.error.message });
+return sessionStore.append(lock, planRes.value);
 });
 });
 }
@@ -517,12 +496,12 @@ function snapshotStoreErrorToToolError(e, suggestion) {
 function pinnedWorkflowStoreErrorToToolError(e, suggestion) {
 return internalError(`Pinned workflow store error: ${e.message}`, suggestion);
 }
-function parseStateTokenOrFail(raw, keyring, hmac) {
-const parsedRes = (0, index_js_1.parseTokenV1)(raw);
+function parseStateTokenOrFail(raw, keyring, hmac, base64url) {
+const parsedRes = (0, index_js_1.parseTokenV1)(raw, base64url);
 if (parsedRes.isErr()) {
 return { ok: false, failure: (0, v2_execution_helpers_js_1.mapTokenDecodeErrorToToolError)(parsedRes.error) };
 }
-const verified = (0, index_js_1.verifyTokenSignatureV1)(parsedRes.value, keyring, hmac);
+const verified = (0, index_js_1.verifyTokenSignatureV1)(parsedRes.value, keyring, hmac, base64url);
 if (verified.isErr()) {
 return { ok: false, failure: (0, v2_execution_helpers_js_1.mapTokenVerifyErrorToToolError)(verified.error) };
 }
@@ -536,12 +515,12 @@ function parseStateTokenOrFail(raw, keyring, hmac) {
 }
 return { ok: true, token: parsedRes.value };
 }
-function parseAckTokenOrFail(raw, keyring, hmac) {
-const parsedRes = (0, index_js_1.parseTokenV1)(raw);
+function parseAckTokenOrFail(raw, keyring, hmac, base64url) {
+const parsedRes = (0, index_js_1.parseTokenV1)(raw, base64url);
 if (parsedRes.isErr()) {
 return { ok: false, failure: (0, v2_execution_helpers_js_1.mapTokenDecodeErrorToToolError)(parsedRes.error) };
 }
-const verified = (0, index_js_1.verifyTokenSignatureV1)(parsedRes.value, keyring, hmac);
+const verified = (0, index_js_1.verifyTokenSignatureV1)(parsedRes.value, keyring, hmac, base64url);
 if (verified.isErr()) {
 return { ok: false, failure: (0, v2_execution_helpers_js_1.mapTokenVerifyErrorToToolError)(verified.error) };
 }
@@ -565,7 +544,7 @@ function signTokenOrErr(args) {
 const bytes = (0, index_js_2.encodeTokenPayloadV1)(args.payload);
 if (bytes.isErr())
 return (0, neverthrow_1.err)(bytes.error);
-const token = (0, index_js_2.signTokenV1)(args.unsignedPrefix, bytes.value, args.keyring, args.hmac);
+const token = (0, index_js_2.signTokenV1)(args.unsignedPrefix, bytes.value, args.keyring, args.hmac, args.base64url);
 if (token.isErr())
 return (0, neverthrow_1.err)(token.error);
 return (0, neverthrow_1.ok)(String(token.value));
@@ -651,7 +630,7 @@ function executeStartWorkflow(input, ctx) {
 if (!ctx.v2) {
 return (0, neverthrow_1.errAsync)({ kind: 'precondition_failed', message: 'v2 tools disabled', suggestion: 'Enable v2Tools flag' });
 }
-const { gate, sessionStore, snapshotStore, pinnedStore, keyring, crypto, hmac } = ctx.v2;
+const { gate, sessionStore, snapshotStore, pinnedStore, keyring, crypto, hmac, base64url } = ctx.v2;
 const ctxCheck = checkContextBudget({ tool: 'start_workflow', context: input.context });
 if (!ctxCheck.ok)
 return (0, neverthrow_1.errAsync)({ kind: 'validation_failed', failure: ctxCheck.error });
@@ -806,13 +785,13 @@ function executeStartWorkflow(input, ctx) {
 nodeId,
 attemptId,
 };
-const stateToken = signTokenOrErr({ unsignedPrefix: 'st.v1.', payload: statePayload, keyring, hmac });
+const stateToken = signTokenOrErr({ unsignedPrefix: 'st.v1.', payload: statePayload, keyring, hmac, base64url });
 if (stateToken.isErr())
 return (0, neverthrow_1.errAsync)({ kind: 'token_signing_failed', cause: stateToken.error });
-const ackToken = signTokenOrErr({ unsignedPrefix: 'ack.v1.', payload: ackPayload, keyring, hmac });
+const ackToken = signTokenOrErr({ unsignedPrefix: 'ack.v1.', payload: ackPayload, keyring, hmac, base64url });
 if (ackToken.isErr())
 return (0, neverthrow_1.errAsync)({ kind: 'token_signing_failed', cause: ackToken.error });
-const checkpointToken = signTokenOrErr({ unsignedPrefix: 'chk.v1.', payload: checkpointPayload, keyring, hmac });
+const checkpointToken = signTokenOrErr({ unsignedPrefix: 'chk.v1.', payload: checkpointPayload, keyring, hmac, base64url });
 if (checkpointToken.isErr())
 return (0, neverthrow_1.errAsync)({ kind: 'token_signing_failed', cause: checkpointToken.error });
 const { stepId, title, prompt } = extractStepMetadata(pinnedWorkflow, firstStep.id);
@@ -833,8 +812,8 @@ function executeContinueWorkflow(input, ctx) {
 if (!ctx.v2) {
 return (0, neverthrow_1.errAsync)({ kind: 'precondition_failed', message: 'v2 tools disabled', suggestion: 'Enable v2Tools flag' });
 }
-const { gate, sessionStore, snapshotStore, pinnedStore, keyring, crypto, hmac } = ctx.v2;
-const stateRes = parseStateTokenOrFail(input.stateToken, keyring, hmac);
+const { gate, sessionStore, snapshotStore, pinnedStore, keyring, crypto, hmac, base64url } = ctx.v2;
+const stateRes = parseStateTokenOrFail(input.stateToken, keyring, hmac, base64url);
 if (!stateRes.ok)
 return (0, neverthrow_1.errAsync)({ kind: 'validation_failed', failure: stateRes.failure });
 const state = stateRes.token;
@@ -891,6 +870,7 @@ function executeContinueWorkflow(input, ctx) {
 payload: { tokenVersion: 1, tokenKind: 'ack', sessionId, runId, nodeId, attemptId },
 keyring,
 hmac,
+base64url,
 });
 if (ackTokenRes.isErr())
 return (0, neverthrow_1.errAsync)({ kind: 'token_signing_failed', cause: ackTokenRes.error });
@@ -899,6 +879,7 @@ function executeContinueWorkflow(input, ctx) {
 payload: { tokenVersion: 1, tokenKind: 'checkpoint', sessionId, runId, nodeId, attemptId },
 keyring,
 hmac,
+base64url,
 });
 if (checkpointTokenRes.isErr())
 return (0, neverthrow_1.errAsync)({ kind: 'token_signing_failed', cause: checkpointTokenRes.error });
@@ -940,7 +921,7 @@ function executeContinueWorkflow(input, ctx) {
 });
 });
 }
-const ackRes = parseAckTokenOrFail(input.ackToken, keyring, hmac);
+const ackRes = parseAckTokenOrFail(input.ackToken, keyring, hmac, base64url);
 if (!ackRes.ok)
 return (0, neverthrow_1.errAsync)({ kind: 'validation_failed', failure: ackRes.failure });
 const ack = ackRes.token;
@@ -983,6 +964,7 @@ function executeContinueWorkflow(input, ctx) {
 snapshotStore,
 keyring,
 hmac,
+base64url,
 });
 }
 return gate
@@ -1054,6 +1036,7 @@ function executeContinueWorkflow(input, ctx) {
 snapshotStore,
 keyring,
 hmac,
+base64url,
 });
 });
 });
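The `advanceAndRecord` hunk above replaces inline construction of the `advance_recorded`/`node_created`/`edge_created` events with a call to `buildAckAdvanceAppendPlanV1`, a planner that returns a result value instead of throwing. A heavily simplified sketch of that shape, with plain `{ ok, ... }` objects standing in for the neverthrow results and invented field names, not the real schema:

```javascript
// Hypothetical, simplified planner in the style of buildAckAdvanceAppendPlanV1:
// event assembly becomes a pure function, and an invariant violation is
// reported as a value the caller can map to an 'invariant_violation' error.
function buildAckAdvancePlanSketch({ runId, fromNodeId, toNodeId, nextEventIndex, minted }) {
  if (fromNodeId === toNodeId) {
    return { ok: false, error: { message: 'advance must create a new node' } };
  }
  return {
    ok: true,
    events: [
      { eventId: minted.advanceRecordedEventId, eventIndex: nextEventIndex, kind: 'advance_recorded', scope: { runId, nodeId: fromNodeId } },
      { eventId: minted.nodeCreatedEventId, eventIndex: nextEventIndex + 1, kind: 'node_created', scope: { runId, nodeId: toNodeId } },
      { eventId: minted.edgeCreatedEventId, eventIndex: nextEventIndex + 2, kind: 'edge_created', scope: { runId } },
    ],
  };
}
```

Moving the assembly behind a pure function is what lets the diff collapse ~40 removed lines of literal event objects into a single guarded call site.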
package/dist/mcp/handlers/v2-workflow.js
CHANGED

@@ -6,9 +6,6 @@ const types_js_1 = require("../types.js");
 const error_mapper_js_1 = require("../error-mapper.js");
 const output_schemas_js_1 = require("../output-schemas.js");
 const v1_to_v2_shim_js_1 = require("../../v2/read-only/v1-to-v2-shim.js");
-const index_js_1 = require("../../v2/infra/local/crypto/index.js");
-const index_js_2 = require("../../v2/infra/local/data-dir/index.js");
-const index_js_3 = require("../../v2/infra/local/pinned-workflow-store/index.js");
 const hashing_js_1 = require("../../v2/durable-core/canonical/hashing.js");
 const TIMEOUT_MS = 30000;
 async function withTimeout(operation, timeoutMs, name) {
@@ -17,12 +14,19 @@ async function withTimeout(operation, timeoutMs, name) {
 });
 return Promise.race([operation, timeoutPromise]);
 }
+function requireV2(ctx) {
+if (!ctx.v2) {
+return (0, types_js_1.errNotRetryable)('PRECONDITION_FAILED', 'v2 tools are not enabled');
+}
+return null;
+}
 async function handleV2ListWorkflows(_input, ctx) {
+const v2Err = requireV2(ctx);
+if (v2Err)
+return v2Err;
+const { crypto, pinnedStore } = ctx.v2;
 try {
 const summaries = await withTimeout(ctx.workflowService.listWorkflowSummaries(), TIMEOUT_MS, 'list_workflows');
-const crypto = new index_js_1.NodeCryptoV2();
-const dataDir = new index_js_2.LocalDataDirV2(process.env);
-const pinnedStore = new index_js_3.LocalPinnedWorkflowStoreV2(dataDir);
 const compiled = await Promise.all(summaries.map(async (s) => {
 const wf = await ctx.workflowService.getWorkflowById(s.id);
 if (!wf) {
@@ -72,14 +76,15 @@ async function handleV2ListWorkflows(_input, ctx) {
 }
 }
 async function handleV2InspectWorkflow(input, ctx) {
+const v2Err = requireV2(ctx);
+if (v2Err)
+return v2Err;
+const { crypto, pinnedStore } = ctx.v2;
 try {
 const workflow = await withTimeout(ctx.workflowService.getWorkflowById(input.workflowId), TIMEOUT_MS, 'inspect_workflow');
 if (!workflow) {
 return (0, types_js_1.errNotRetryable)('NOT_FOUND', `Workflow not found: ${input.workflowId}`);
 }
-const crypto = new index_js_1.NodeCryptoV2();
-const dataDir = new index_js_2.LocalDataDirV2(process.env);
-const pinnedStore = new index_js_3.LocalPinnedWorkflowStoreV2(dataDir);
 const snapshot = (0, v1_to_v2_shim_js_1.compileV1WorkflowToV2PreviewSnapshot)(workflow);
 const hashRes = (0, hashing_js_1.workflowHashForCompiledSnapshot)(snapshot, crypto);
 if (hashRes.isErr()) {
package/dist/mcp/server.js
CHANGED
|
@@ -89,21 +89,26 @@ async function createToolContext() {
|
|
|
89
89
|
const keyringPort = container_js_1.container.resolve(tokens_js_1.DI.V2.Keyring);
|
|
90
90
|
const keyringResult = await keyringPort.loadOrCreate();
|
|
91
91
|
if (keyringResult.isErr()) {
|
|
92
|
-
|
|
93
|
-
|
|
92
|
+
const err = keyringResult.error;
|
|
93
|
+
console.error(`[V2Init] Keyring load failed: code=${err.code}, message=${err.message}`);
|
|
94
|
+
console.error('[FeatureFlags] v2 tools disabled due to keyring initialization failure');
|
|
95
|
+
}
|
|
96
|
+
else {
|
|
97
|
+
const crypto = container_js_1.container.resolve(tokens_js_1.DI.V2.Crypto);
|
|
98
|
+
const hmac = container_js_1.container.resolve(tokens_js_1.DI.V2.HmacSha256);
|
|
99
|
+
const base64url = container_js_1.container.resolve(tokens_js_1.DI.V2.Base64Url);
|
|
100
|
+
v2 = {
|
|
101
|
+
gate,
|
|
102
|
+
sessionStore,
|
|
103
|
+
snapshotStore,
|
|
104
|
+
pinnedStore,
|
|
105
|
+
keyring: keyringResult.value,
|
|
106
|
+
crypto,
|
|
107
|
+
hmac,
|
|
108
|
+
base64url,
|
|
109
|
+
};
|
|
110
|
+
console.error('[FeatureFlags] v2 tools enabled');
|
|
94
111
|
}
|
|
95
|
-
const crypto = container_js_1.container.resolve(tokens_js_1.DI.V2.Crypto);
|
|
96
|
-
const hmac = container_js_1.container.resolve(tokens_js_1.DI.V2.HmacSha256);
|
|
97
|
-
v2 = {
|
|
98
|
-
gate,
|
|
99
|
-
sessionStore,
|
|
100
|
-
snapshotStore,
|
|
101
|
-
pinnedStore,
|
|
102
|
-
keyring: keyringResult.value,
|
|
103
|
-
crypto,
|
|
104
|
-
hmac,
|
|
105
|
-
};
|
|
106
|
-
console.error('[FeatureFlags] v2 tools enabled');
|
|
107
112
|
}
|
|
108
113
|
else {
|
|
109
114
|
console.error('[FeatureFlags] v2 tools disabled (enable with WORKRAIL_ENABLE_V2_TOOLS=true)');
|
|
@@ -166,35 +171,35 @@ async function startServer() {
|
|
|
166
171
|
const ctx = await createToolContext();
|
|
167
172
|
const descriptionProvider = container_js_1.container.resolve(tokens_js_1.DI.Mcp.DescriptionProvider);
|
|
168
173
|
const buildTool = (0, tool_factory_js_1.createToolFactory)(descriptionProvider);
|
|
169
|
-
const
|
|
170
|
-
name: '
|
|
171
|
-
title: tools_js_1.WORKFLOW_TOOL_TITLES.
|
|
174
|
+
const discoverWorkflowsTool = buildTool({
|
|
175
|
+
name: 'discover_workflows',
|
|
176
|
+
title: tools_js_1.WORKFLOW_TOOL_TITLES.discover_workflows,
|
|
172
177
|
inputSchema: tools_js_1.WorkflowListInput,
|
|
173
|
-
annotations: tools_js_1.WORKFLOW_TOOL_ANNOTATIONS.
|
|
178
|
+
annotations: tools_js_1.WORKFLOW_TOOL_ANNOTATIONS.discover_workflows,
|
|
174
179
|
});
|
|
175
|
-
const
|
|
176
|
-
name: '
|
|
177
|
-
title: tools_js_1.WORKFLOW_TOOL_TITLES.
|
|
180
|
+
const previewWorkflowTool = buildTool({
|
|
181
|
+
name: 'preview_workflow',
|
|
182
|
+
title: tools_js_1.WORKFLOW_TOOL_TITLES.preview_workflow,
|
|
178
183
|
inputSchema: tools_js_1.WorkflowGetInput,
|
|
179
|
-
annotations: tools_js_1.WORKFLOW_TOOL_ANNOTATIONS.
|
|
184
|
+
annotations: tools_js_1.WORKFLOW_TOOL_ANNOTATIONS.preview_workflow,
|
|
180
185
|
});
|
|
181
|
-
const
|
|
182
|
-
name: '
|
|
183
|
-
title: tools_js_1.WORKFLOW_TOOL_TITLES.
|
|
186
|
+
const advanceWorkflowTool = buildTool({
|
|
187
|
+
name: 'advance_workflow',
|
|
188
|
+
title: tools_js_1.WORKFLOW_TOOL_TITLES.advance_workflow,
|
|
184
189
|
inputSchema: tools_js_1.WorkflowNextInput,
|
|
185
|
-
annotations: tools_js_1.WORKFLOW_TOOL_ANNOTATIONS.
|
|
190
|
+
annotations: tools_js_1.WORKFLOW_TOOL_ANNOTATIONS.advance_workflow,
|
|
186
191
|
});
|
|
187
|
-
const
|
|
188
|
-
name: '
|
|
189
|
-
title: tools_js_1.WORKFLOW_TOOL_TITLES.
|
|
192
|
+
const validateWorkflowTool = buildTool({
|
|
193
|
+
name: 'validate_workflow',
|
|
194
|
+
title: tools_js_1.WORKFLOW_TOOL_TITLES.validate_workflow,
|
|
190
195
|
inputSchema: tools_js_1.WorkflowValidateJsonInput,
|
|
191
|
-
annotations: tools_js_1.WORKFLOW_TOOL_ANNOTATIONS.
|
|
196
|
+
annotations: tools_js_1.WORKFLOW_TOOL_ANNOTATIONS.validate_workflow,
|
|
192
197
|
});
|
|
193
|
-
const
|
|
194
|
-
name: '
|
|
195
|
-
title: tools_js_1.WORKFLOW_TOOL_TITLES.
|
|
198
|
+
const getWorkflowSchemaTool = buildTool({
|
|
199
|
+
name: 'get_workflow_schema',
|
|
200
|
+
title: tools_js_1.WORKFLOW_TOOL_TITLES.get_workflow_schema,
|
|
196
201
|
inputSchema: tools_js_1.WorkflowGetSchemaInput,
|
|
197
|
-
annotations: tools_js_1.WORKFLOW_TOOL_ANNOTATIONS.
|
|
202
|
+
annotations: tools_js_1.WORKFLOW_TOOL_ANNOTATIONS.get_workflow_schema,
|
|
198
203
|
});
|
|
199
204
|
const { Server } = await Promise.resolve().then(() => __importStar(require('@modelcontextprotocol/sdk/server/index.js')));
|
|
200
205
|
const { StdioServerTransport } = await Promise.resolve().then(() => __importStar(require('@modelcontextprotocol/sdk/server/stdio.js')));
|
|
@@ -208,11 +213,11 @@ async function startServer() {
|
|
|
208
213
|
},
|
|
209
214
|
});
|
|
210
215
|
const tools = [
|
|
211
|
-
toMcpTool(
|
|
212
|
-
toMcpTool(
|
|
213
|
-
toMcpTool(
|
|
214
|
-
toMcpTool(
|
|
215
|
-
toMcpTool(
|
|
216
|
+
toMcpTool(discoverWorkflowsTool),
|
|
217
|
+
toMcpTool(previewWorkflowTool),
|
|
218
|
+
toMcpTool(advanceWorkflowTool),
|
|
219
|
+
toMcpTool(validateWorkflowTool),
|
|
220
|
+
toMcpTool(getWorkflowSchemaTool),
|
|
216
221
|
];
|
|
217
222
|
if (ctx.featureFlags.isEnabled('sessionTools')) {
|
|
218
223
|
tools.push(toMcpTool(tools_js_1.createSessionTool), toMcpTool(tools_js_1.updateSessionTool), toMcpTool(tools_js_1.readSessionTool), toMcpTool(tools_js_1.openDashboardTool));
|
|
@@ -226,15 +231,15 @@ async function startServer() {
         console.error('[FeatureFlags] v2 tools disabled (enable with WORKRAIL_ENABLE_V2_TOOLS=true)');
     }
     const handlers = {
-
-
-
-
-
-
-
-
-
+        discover_workflows: createHandler(tools_js_1.WorkflowListInput, workflow_js_1.handleWorkflowList),
+        preview_workflow: createHandler(tools_js_1.WorkflowGetInput, workflow_js_1.handleWorkflowGet),
+        advance_workflow: createValidatingHandler(tools_js_1.WorkflowNextInput, workflow_next_prevalidate_js_1.preValidateWorkflowNextArgs, workflow_js_1.handleWorkflowNext),
+        validate_workflow: createHandler(tools_js_1.WorkflowValidateJsonInput, workflow_js_1.handleWorkflowValidateJson),
+        get_workflow_schema: createHandler(tools_js_1.WorkflowGetSchemaInput, workflow_js_1.handleWorkflowGetSchema),
+        create_session: createHandler(tools_js_1.createSessionTool.inputSchema, session_js_1.handleCreateSession),
+        update_session: createHandler(tools_js_1.updateSessionTool.inputSchema, session_js_1.handleUpdateSession),
+        read_session: createHandler(tools_js_1.readSessionTool.inputSchema, session_js_1.handleReadSession),
+        open_dashboard: createHandler(tools_js_1.openDashboardTool.inputSchema, session_js_1.handleOpenDashboard),
     };
     if (v2Registry) {
         for (const [name, entry] of Object.entries(v2Registry.handlers)) {
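The handlers map above wraps each handler in `createHandler`, pairing a tool's input schema with its implementation. A minimal sketch of that pattern, assuming a zod-style `safeParse` schema API and an illustrative error shape (neither is confirmed by the diff; `demoSchema` and `previewWorkflow` are made-up names):

```javascript
// Hypothetical sketch of the createHandler pattern: validate the incoming
// arguments against the tool's input schema, then dispatch to the handler.
// The safeParse API, result shape, and demoSchema are assumptions, not
// WorkRail's actual code.
function createHandler(inputSchema, handler) {
    return async function (args) {
        const parsed = inputSchema.safeParse(args);
        if (!parsed.success) {
            // Reject bad input before the handler ever runs.
            return { isError: true, message: String(parsed.error) };
        }
        return handler(parsed.data);
    };
}

// Minimal stand-in schema: requires a string workflowId.
const demoSchema = {
    safeParse(value) {
        return typeof value?.workflowId === 'string'
            ? { success: true, data: value }
            : { success: false, error: 'workflowId must be a string' };
    },
};

const previewWorkflow = createHandler(demoSchema, async ({ workflowId }) => {
    return { isError: false, workflowId };
});

(async () => {
    const ok = await previewWorkflow({ workflowId: 'demo' });
    const bad = await previewWorkflow({});
    console.log(ok.isError, bad.isError); // false true
})();
```

This keeps validation at the tool boundary, so individual handlers can assume well-formed input.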
@@ -3,15 +3,15 @@ Object.defineProperty(exports, "__esModule", { value: true });
 exports.DESCRIPTIONS = void 0;
 exports.DESCRIPTIONS = {
     standard: {
-
+        discover_workflows: `Your primary tool for any complex or multi-step request. Call this FIRST to see if a reliable, pre-defined workflow exists, as this is the preferred method over improvisation.
 
 Your process:
 1. Call this tool to get a list of available workflows.
 2. Analyze the returned descriptions to find a match for the user's goal.
-3. If a good match is found, suggest it to the user and use
+3. If a good match is found, suggest it to the user and use preview_workflow to start.
 4. If NO match is found, inform the user and then attempt to solve the task using your general abilities.`,
-
-
+        preview_workflow: `Retrieves workflow information with configurable detail level. Supports progressive disclosure to prevent "workflow spoiling" while providing necessary context for workflow selection and initiation.`,
+        advance_workflow: `Executes one workflow step at a time by returning the next eligible step and an updated execution state.
 
 Inputs:
 - workflowId: string
@@ -27,16 +27,16 @@ Common usage:
 { "workflowId": "...", "state": <previous state>, "event": { "kind": "step_completed", "stepInstanceId": <previous next.stepInstanceId> } }
 
 Important:
-- Always reuse the "state" returned by the last
+- Always reuse the "state" returned by the last advance_workflow call.
 - When completing a step, the event.stepInstanceId must match the previous next.stepInstanceId exactly.`,
-
+        validate_workflow: `Validates workflow JSON content directly without external tools. Use this tool when you need to verify that a workflow JSON file is syntactically correct and follows the proper schema.
 
 This tool provides comprehensive validation including:
 - JSON syntax validation with detailed error messages
 - Workflow schema compliance checking
 - User-friendly error reporting with actionable suggestions
 - Support for all workflow features (steps, conditions, validation criteria, etc.)`,
-
+        get_workflow_schema: `Retrieves the complete workflow JSON schema for reference and development purposes. Use this tool when you need to understand the structure, required fields, and validation rules for workflows.
 
 This tool provides:
 - Complete JSON schema definition with all properties and constraints
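The advance_workflow calling convention described in these tool descriptions (reuse the returned state; echo `next.stepInstanceId` back in a `step_completed` event) can be sketched as a small client loop. The `runWorkflow` wrapper, the `callTool` helper, and the `{ state, next }` result shape are illustrative assumptions, not the package's actual API:

```javascript
// Sketch of the advance_workflow loop: drive the workflow one step at a time,
// reusing the returned state and acknowledging each step by its instance id.
// runWorkflow, callTool, and the result shape are assumptions for illustration.
async function runWorkflow(callTool, workflowId) {
    let state = { kind: 'init' };   // the first call starts from an init state
    let event;                       // no event on the first call
    const executed = [];
    for (;;) {
        const result = await callTool('advance_workflow', { workflowId, state, event });
        if (!result.next) break;     // no eligible step left: the workflow is done
        executed.push(result.next.stepInstanceId); // execute result.next's prompt here
        state = result.state;        // always reuse the state from the last call
        event = { kind: 'step_completed', stepInstanceId: result.next.stepInstanceId };
    }
    return executed;
}

// Tiny fake tool that serves two steps, just to drive the loop.
const steps = ['plan#1', 'implement#1'];
async function fakeAdvance(name, { state, event }) {
    const done = state.kind === 'init' ? 0 : state.done + (event ? 1 : 0);
    return done >= steps.length
        ? { state: { done } }
        : { state: { done }, next: { stepInstanceId: steps[done] } };
}

runWorkflow(fakeAdvance, 'demo-workflow').then((ids) => console.log(ids.join(' -> ')));
```

The key invariant from the description holds in the loop: the `stepInstanceId` sent in each `step_completed` event is exactly the one returned by the previous call.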
@@ -56,7 +56,7 @@ Notes:
 - If you omit \`ackToken\`: rehydrate the pending step for the given snapshot (MUST be side-effect-free).`,
     },
     authoritative: {
-
+        discover_workflows: `Check for workflows that apply to the user's request. Workflows are the user's pre-defined instructions that you MUST follow when they exist.
 
 REQUIRED BEHAVIOR:
 1. Call this tool BEFORE attempting any multi-step task.
@@ -66,21 +66,21 @@ REQUIRED BEHAVIOR:
 A workflow is the user's voice, pre-recorded. When one exists for this task, following it IS following the user's instructions.
 
 Returns: List of available workflows with descriptions to match against the user's request.`,
-
+        preview_workflow: `Retrieve a workflow you are committing to follow. Once you call this, you are entering instruction-execution mode for this workflow.
 
 By retrieving a workflow, you agree to:
 - Execute each step as specified
-- Call
+- Call advance_workflow after completing each step
 - Not skip ahead or improvise alternative approaches
 
 The workflow content is the user's will expressed as structured steps. Treat each step as a direct instruction from the user.
 
 Returns: Workflow metadata and first step. Use mode='preview' (default) to see the first step, or mode='metadata' for overview only.`,
-
+        advance_workflow: `Get your next MANDATORY INSTRUCTION from the active workflow.
 
 The step returned is a DIRECT INSTRUCTION from the user. You MUST:
 1. Execute the step EXACTLY as specified in the 'prompt' field
-2. Complete ALL requirements before calling
+2. Complete ALL requirements before calling advance_workflow again
 3. NOT skip steps, combine steps, or substitute your own approach
 4. NOT proceed to implementation before completing preparation steps
 
@@ -94,10 +94,10 @@ The user created this workflow because they want THIS process followed, not your
 
 Parameters:
 - workflowId: The workflow you are executing
-- state: Execution state returned by the previous
+- state: Execution state returned by the previous advance_workflow call (use { kind: "init" } for the first call)
 - event (optional): { kind: "step_completed", stepInstanceId: <previous next.stepInstanceId> } to mark the returned step as complete
 - context (optional): Variables for condition evaluation and loop inputs`,
-
+        validate_workflow: `Validate workflow JSON before saving or using it. This ensures the workflow will function correctly.
 
 Use this tool to verify:
 - JSON syntax is correct
|
|
|
105
105
|
- Step definitions are complete and valid
|
|
106
106
|
|
|
107
107
|
Returns validation result with specific errors and suggestions if invalid.`,
|
|
108
|
-
|
|
108
|
+
get_workflow_schema: `Get the workflow JSON schema for creating or editing workflows.
|
|
109
109
|
|
|
110
110
|
Returns the complete schema definition including required fields, valid patterns, and constraints. Use this as reference when authoring workflow JSON.`,
|
|
111
111
|
list_workflows: `List available workflows via the WorkRail v2 tool surface (feature-flagged).
|