moflo 4.9.21 → 4.9.23
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude/agents/analysis/analyze-code-quality.md +0 -121
- package/.claude/agents/analysis/code-analyzer.md +5 -26
- package/.claude/agents/architecture/system-design/arch-system-design.md +0 -119
- package/.claude/agents/base-template-generator.md +0 -1
- package/.claude/agents/core/coder.md +0 -22
- package/.claude/agents/core/planner.md +0 -16
- package/.claude/agents/core/researcher.md +0 -16
- package/.claude/agents/core/reviewer.md +0 -17
- package/.claude/agents/core/tester.md +0 -19
- package/.claude/agents/custom/test-long-runner.md +0 -2
- package/.claude/agents/development/dev-backend-api.md +0 -167
- package/.claude/agents/development/dev-database.md +43 -0
- package/.claude/agents/development/dev-frontend.md +42 -0
- package/.claude/agents/devops/ci-cd/ops-cicd-github.md +0 -112
- package/.claude/agents/documentation/api-docs/docs-api-openapi.md +0 -111
- package/.claude/agents/security/security-auditor.md +45 -0
- package/.claude/guidance/shipped/moflo-cli-reference.md +19 -16
- package/.claude/guidance/shipped/moflo-core-guidance.md +0 -2
- package/.claude/guidance/shipped/moflo-guidance-rules.md +5 -5
- package/.claude/guidance/shipped/moflo-spell-runner.md +1 -0
- package/.claude/guidance/shipped/moflo-spell-scheduling.md +225 -0
- package/.claude/guidance/shipped/moflo-spell-troubleshooting.md +1 -0
- package/.claude/helpers/gate.cjs +70 -3
- package/.claude/skills/fl/execution-modes.md +38 -15
- package/.claude/skills/fl/phases.md +67 -0
- package/.claude/skills/spell-schedule/SKILL.md +18 -5
- package/README.md +1 -1
- package/bin/gate.cjs +70 -3
- package/bin/index-guidance.mjs +32 -6
- package/bin/lib/retired-files.mjs +146 -0
- package/bin/session-start-launcher.mjs +116 -8
- package/dist/src/cli/appliance/rvfa-builder.js +1 -1
- package/dist/src/cli/commands/agent.js +3 -9
- package/dist/src/cli/commands/daemon.js +13 -17
- package/dist/src/cli/commands/hooks.js +4 -9
- package/dist/src/cli/commands/index.js +2 -0
- package/dist/src/cli/commands/retire.js +111 -0
- package/dist/src/cli/commands/spell-schedule.js +237 -49
- package/dist/src/cli/hooks/reasoningbank/index.js +7 -7
- package/dist/src/cli/init/executor.js +26 -54
- package/dist/src/cli/init/helpers-generator.js +66 -3
- package/dist/src/cli/init/settings-generator.js +17 -6
- package/dist/src/cli/mcp-tools/agent-tools.js +9 -27
- package/dist/src/cli/mcp-tools/hooks-tools.js +23 -21
- package/dist/src/cli/mcp-tools/memory-tools.js +16 -5
- package/dist/src/cli/memory/bridge-embedder.js +26 -6
- package/dist/src/cli/memory/bridge-entries.js +33 -15
- package/dist/src/cli/memory/controllers/semantic-router.js +18 -12
- package/dist/src/cli/memory/sona-optimizer.js +6 -6
- package/dist/src/cli/neural/domain/services/learning-service.js +3 -3
- package/dist/src/cli/services/agent-router.js +2 -5
- package/dist/src/cli/services/daemon-autostart-lifecycle.js +62 -0
- package/dist/src/cli/services/daemon-dashboard.js +187 -18
- package/dist/src/cli/services/daemon-readiness.js +19 -31
- package/dist/src/cli/services/ephemeral-namespace-purge.js +61 -33
- package/dist/src/cli/services/headless-worker-executor.js +7 -94
- package/dist/src/cli/services/hook-block-hash.js +4 -0
- package/dist/src/cli/services/worker-daemon.js +40 -66
- package/dist/src/cli/shared/events/example-usage.js +6 -6
- package/dist/src/cli/shared/hooks/task-hooks.js +8 -8
- package/dist/src/cli/spells/core/runner.js +12 -0
- package/dist/src/cli/spells/scheduler/scheduler.js +24 -9
- package/dist/src/cli/spells/schema/validator.js +2 -1
- package/dist/src/cli/spells/schema/validators/top-level.js +18 -0
- package/dist/src/cli/version.js +1 -1
- package/package.json +5 -2
- package/retired-files.json +1989 -0
- package/src/cli/data/model-registry.json +2 -2
- package/.claude/agents/consensus/byzantine-coordinator.md +0 -63
- package/.claude/agents/consensus/crdt-synchronizer.md +0 -997
- package/.claude/agents/consensus/gossip-coordinator.md +0 -63
- package/.claude/agents/consensus/performance-benchmarker.md +0 -851
- package/.claude/agents/consensus/quorum-manager.md +0 -823
- package/.claude/agents/consensus/raft-manager.md +0 -63
- package/.claude/agents/consensus/security-manager.md +0 -622
- package/.claude/agents/data/ml/data-ml-model.md +0 -193
- package/.claude/agents/github/code-review-swarm.md +0 -538
- package/.claude/agents/github/github-modes.md +0 -172
- package/.claude/agents/github/issue-tracker.md +0 -311
- package/.claude/agents/github/multi-repo-swarm.md +0 -551
- package/.claude/agents/github/pr-manager.md +0 -183
- package/.claude/agents/github/project-board-sync.md +0 -508
- package/.claude/agents/github/release-manager.md +0 -360
- package/.claude/agents/github/release-swarm.md +0 -580
- package/.claude/agents/github/repo-architect.md +0 -391
- package/.claude/agents/github/swarm-issue.md +0 -566
- package/.claude/agents/github/swarm-pr.md +0 -414
- package/.claude/agents/github/sync-coordinator.md +0 -426
- package/.claude/agents/github/workflow-automation.md +0 -606
- package/.claude/agents/goal/code-goal-planner.md +0 -440
- package/.claude/agents/goal/goal-planner.md +0 -168
- package/.claude/agents/hive-mind/collective-intelligence-coordinator.md +0 -127
- package/.claude/agents/hive-mind/queen-coordinator.md +0 -198
- package/.claude/agents/hive-mind/scout-explorer.md +0 -233
- package/.claude/agents/hive-mind/swarm-memory-manager.md +0 -184
- package/.claude/agents/hive-mind/worker-specialist.md +0 -208
- package/.claude/agents/neural/safla-neural.md +0 -73
- package/.claude/agents/optimization/benchmark-suite.md +0 -665
- package/.claude/agents/optimization/load-balancer.md +0 -431
- package/.claude/agents/optimization/performance-monitor.md +0 -672
- package/.claude/agents/optimization/resource-allocator.md +0 -674
- package/.claude/agents/optimization/topology-optimizer.md +0 -808
- package/.claude/agents/reasoning/goal-planner.md +0 -67
- package/.claude/agents/sona/sona-learning-optimizer.md +0 -74
- package/.claude/agents/sparc/architecture.md +0 -472
- package/.claude/agents/sparc/pseudocode.md +0 -318
- package/.claude/agents/sparc/refinement.md +0 -525
- package/.claude/agents/sparc/specification.md +0 -276
- package/.claude/agents/specialized/mobile/spec-mobile-react-native.md +0 -225
- package/.claude/agents/swarm/adaptive-coordinator.md +0 -391
- package/.claude/agents/swarm/hierarchical-coordinator.md +0 -321
- package/.claude/agents/swarm/mesh-coordinator.md +0 -383
- package/.claude/agents/testing/production-validator.md +0 -395
- package/.claude/agents/testing/tdd-london-swarm.md +0 -244
- package/.claude/agents/v3/adr-architect.md +0 -184
- package/.claude/agents/v3/aidefence-guardian.md +0 -277
- package/.claude/agents/v3/claims-authorizer.md +0 -208
- package/.claude/agents/v3/collective-intelligence-coordinator.md +0 -988
- package/.claude/agents/v3/ddd-domain-expert.md +0 -220
- package/.claude/agents/v3/injection-analyst.md +0 -232
- package/.claude/agents/v3/memory-specialist.md +0 -987
- package/.claude/agents/v3/performance-engineer.md +0 -1225
- package/.claude/agents/v3/pii-detector.md +0 -146
- package/.claude/agents/v3/reasoningbank-learner.md +0 -213
- package/.claude/agents/v3/security-architect-aidefence.md +0 -405
- package/.claude/agents/v3/security-architect.md +0 -865
- package/.claude/agents/v3/security-auditor.md +0 -771
- package/.claude/agents/v3/sparc-orchestrator.md +0 -182
- package/.claude/agents/v3/swarm-memory-manager.md +0 -142
- package/.claude/agents/v3/v3-integration-architect.md +0 -205
- package/.claude/commands/claude-flow-help.md +0 -103
- package/.claude/commands/claude-flow-memory.md +0 -107
- package/.claude/commands/claude-flow-swarm.md +0 -205
- package/.claude/commands/flo-simplify.md +0 -101
- package/.claude/commands/github/README.md +0 -11
- package/.claude/commands/github/code-review-swarm.md +0 -514
- package/.claude/commands/github/code-review.md +0 -25
- package/.claude/commands/github/github-modes.md +0 -146
- package/.claude/commands/github/github-swarm.md +0 -113
- package/.claude/commands/github/issue-tracker.md +0 -284
- package/.claude/commands/github/issue-triage.md +0 -25
- package/.claude/commands/github/multi-repo-swarm.md +0 -519
- package/.claude/commands/github/pr-enhance.md +0 -26
- package/.claude/commands/github/pr-manager.md +0 -164
- package/.claude/commands/github/project-board-sync.md +0 -471
- package/.claude/commands/github/release-manager.md +0 -332
- package/.claude/commands/github/release-swarm.md +0 -544
- package/.claude/commands/github/repo-analyze.md +0 -25
- package/.claude/commands/github/repo-architect.md +0 -361
- package/.claude/commands/github/swarm-issue.md +0 -482
- package/.claude/commands/github/swarm-pr.md +0 -285
- package/.claude/commands/github/sync-coordinator.md +0 -294
- package/.claude/commands/github/workflow-automation.md +0 -442
- package/.claude/commands/hooks/README.md +0 -11
- package/.claude/commands/hooks/overview.md +0 -58
- package/.claude/commands/hooks/post-edit.md +0 -117
- package/.claude/commands/hooks/post-task.md +0 -112
- package/.claude/commands/hooks/pre-edit.md +0 -113
- package/.claude/commands/hooks/pre-task.md +0 -111
- package/.claude/commands/hooks/session-end.md +0 -118
- package/.claude/commands/hooks/setup.md +0 -103
- package/.claude/commands/sparc/analyzer.md +0 -42
- package/.claude/commands/sparc/architect.md +0 -43
- package/.claude/commands/sparc/ask.md +0 -86
- package/.claude/commands/sparc/batch-executor.md +0 -44
- package/.claude/commands/sparc/code.md +0 -78
- package/.claude/commands/sparc/coder.md +0 -44
- package/.claude/commands/sparc/debug.md +0 -72
- package/.claude/commands/sparc/debugger.md +0 -44
- package/.claude/commands/sparc/designer.md +0 -43
- package/.claude/commands/sparc/devops.md +0 -98
- package/.claude/commands/sparc/docs-writer.md +0 -69
- package/.claude/commands/sparc/documenter.md +0 -44
- package/.claude/commands/sparc/innovator.md +0 -44
- package/.claude/commands/sparc/integration.md +0 -72
- package/.claude/commands/sparc/mcp.md +0 -106
- package/.claude/commands/sparc/memory-manager.md +0 -44
- package/.claude/commands/sparc/optimizer.md +0 -44
- package/.claude/commands/sparc/orchestrator.md +0 -116
- package/.claude/commands/sparc/post-deployment-monitoring-mode.md +0 -72
- package/.claude/commands/sparc/refinement-optimization-mode.md +0 -72
- package/.claude/commands/sparc/researcher.md +0 -44
- package/.claude/commands/sparc/reviewer.md +0 -44
- package/.claude/commands/sparc/security-review.md +0 -69
- package/.claude/commands/sparc/sparc-modes.md +0 -139
- package/.claude/commands/sparc/sparc.md +0 -99
- package/.claude/commands/sparc/spec-pseudocode.md +0 -69
- package/.claude/commands/sparc/spell-manager.md +0 -44
- package/.claude/commands/sparc/supabase-admin.md +0 -337
- package/.claude/commands/sparc/swarm-coordinator.md +0 -44
- package/.claude/commands/sparc/tdd.md +0 -44
- package/.claude/commands/sparc/tester.md +0 -44
- package/.claude/commands/sparc/tutorial.md +0 -68
- package/.claude/commands/sparc.md +0 -151
package/.claude/guidance/shipped/moflo-spell-scheduling.md
ADDED

@@ -0,0 +1,225 @@
+# Spell Scheduling — Cron, Daemon, Catch-Up, and Failure Modes
+
+**Purpose:** Reference for spell scheduling — definition syntax, CLI subcommands, daemon lifecycle, catch-up window semantics, and the failure modes you will hit. For the user-driven `/spell-schedule` walkthrough, see `.claude/skills/spell-schedule/SKILL.md`. For the engine itself (steps, args, runner), see `.claude/guidance/moflo-spell-engine.md`.
+
+---
+
+## When to Schedule a Spell
+
+**Use scheduling when the spell must fire on a clock without a human present.** Don't reach for it for one-shot runs you can `flo spell cast` yourself.
+
+| Trigger | Use |
+|---------|-----|
+| "Run X every weekday at 9am" | Schedule with `--cron` |
+| "Run X every 6 hours" | Schedule with `--interval` |
+| "Run X once at <future time>" | Schedule with `--at` |
+| "Run X right now" | `flo spell cast -n X` — not a schedule |
+| "Run X when file Y changes" | Hook or worker — not a schedule |
+
+The scheduler is poll-based (1-minute floor). It is not a real-time trigger system.
+
+---
+
+## CLI Subcommands
+
+**All scheduling lives under `flo spell schedule`.** No external cron, no second runtime.
+
+| Subcommand | Purpose |
+|------------|---------|
+| `create -n <spell> --cron\|--interval\|--at <value>` | Create an ad-hoc schedule |
+| `list` (alias `ls`) | List all schedules (definitions only — does NOT prove they fired) |
+| `executions [--schedule <id>] [--limit N]` (aliases `exec`, `history`) | Read the `schedule-executions` audit trail — the only way to confirm a schedule actually ran |
+| `cancel <schedule-id>` | Disable a schedule (soft delete — the record stays, with `enabled: false`) |
+
+**`list` shows what should fire. `executions` shows what did fire.** When verifying a new schedule, read `executions` — `list` only proves the record was written.
+
+---
+
+## Definition-Embedded Schedules
+
+**A spell can declare its schedule inline in YAML.** The daemon registers it on every start.
+
+```yaml
+name: nightly-audit
+schedule:
+  cron: "0 2 * * *"   # UTC, 5-field cron (minute hour day-of-month month day-of-week)
+  enabled: true       # default true; set false to keep the def without scheduling
+  mofloLevel: hooks   # optional cap (narrows the scheduler-level cap, never widens)
+steps:
+  - id: audit
+    type: bash
+    config:
+      command: ./scripts/audit.sh
+```
+
+**Exactly one of `cron`, `interval`, or `at` must be set.** Validation rejects `cron: "invalid"`, `interval: "10w"` (only `s`/`m`/`h`/`d` units), or non-ISO datetimes — the spell fails to load before the scheduler ever sees it.
+
+| Field | Type | Notes |
+|-------|------|-------|
+| `cron` | string | 5-field, UTC, no seconds field |
+| `interval` | string | `<n>(s\|m\|h\|d)` — `s` is allowed but ignored below 60s (poll floor) |
+| `at` | string | ISO 8601 datetime, must be in the future at load time |
+| `enabled` | bool | Defaults `true`; gate without deleting the record |
+| `mofloLevel` | enum | `read` < `hooks` < `swarm`; per-schedule cap — narrows only |
+
+Definition-embedded schedules get IDs of the form `sched-def-<spell-name>` (one per spell). Ad-hoc schedules get `sched-adhoc-<timestamp>-<rand>` (one per `flo spell schedule create`).
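The `interval` grammar described above is small enough to sketch. This is a minimal illustration of the stated rules (units `s`/`m`/`h`/`d`, rejection of anything else, and the 60-second poll floor), not the moflo source; the function names `parseIntervalMs` and `effectiveCadenceMs` are invented for the example.

```javascript
// Hypothetical sketch of the documented interval rules, not moflo's actual parser.
// Accepts "<n>(s|m|h|d)", rejects other units (e.g. "10w"), and clamps the
// effective cadence to the 60s poll floor described above.
const UNIT_MS = { s: 1000, m: 60000, h: 3600000, d: 86400000 };

function parseIntervalMs(spec) {
  const m = /^(\d+)([smhd])$/.exec(spec);
  if (!m) throw new Error(`invalid interval: ${spec}`); // "10w" fails to load
  return Number(m[1]) * UNIT_MS[m[2]];
}

function effectiveCadenceMs(spec, pollIntervalMs = 60000) {
  // "s" is allowed but ignored below 60s: the poll loop is the floor.
  return Math.max(parseIntervalMs(spec), pollIntervalMs);
}
```

Under this reading, `--interval 10s` is accepted at load time but cannot fire more often than the poll loop ticks.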
+
+---
+
+## Configuration in `moflo.yaml`
+
+**The scheduler is on by default.** Disable it without affecting other daemon workers via `scheduler.enabled: false`.
+
+```yaml
+scheduler:
+  enabled: true            # set false to disable scheduled spells
+  pollIntervalMs: 60000    # how often the scheduler checks for due spells
+  maxConcurrent: 2         # max concurrent scheduled spell executions
+  catchUpWindowMs: 3600000 # max age (ms) of a missed run that should still fire
+```
+
+All four fields are optional. **Non-positive values are rejected at load and replaced with the defaults** — `pollIntervalMs: 0` won't silently break the poll loop.
+
+Practical floors:
+- `pollIntervalMs` is the granularity floor. Sub-minute schedules don't fire faster.
+- `maxConcurrent: 1` serializes everything — useful when scheduled spells share a write lock.
+- `catchUpWindowMs: 0` disables catch-up entirely; missed runs are skipped on restart.
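The load-time guard described above (invalid values replaced with the defaults) can be sketched as follows. This is illustrative only: `normalizeSchedulerConfig` and the defaults object are invented names, and the special treatment of `catchUpWindowMs: 0` (meaningful, per the floors list) is an assumption drawn from the surrounding text.

```javascript
// Illustrative only. Mirrors the documented rule that bad numeric values are
// rejected at load and silently replaced with the defaults.
const SCHEDULER_DEFAULTS = {
  enabled: true,
  pollIntervalMs: 60000,
  maxConcurrent: 2,
  catchUpWindowMs: 3600000,
};

function normalizeSchedulerConfig(userConfig = {}) {
  const out = { ...SCHEDULER_DEFAULTS, ...userConfig };
  for (const key of ['pollIntervalMs', 'maxConcurrent']) {
    // pollIntervalMs: 0 must not stall the poll loop; fall back to the default.
    if (typeof out[key] !== 'number' || out[key] <= 0) out[key] = SCHEDULER_DEFAULTS[key];
  }
  // Assumption: catchUpWindowMs: 0 is meaningful (catch-up disabled, per the
  // floors list above), so only negatives and non-numbers reset to the default.
  if (typeof out.catchUpWindowMs !== 'number' || out.catchUpWindowMs < 0) {
    out.catchUpWindowMs = SCHEDULER_DEFAULTS.catchUpWindowMs;
  }
  return out;
}
```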
+
+---
+
+## Storage Namespaces
+
+**Two memory namespaces back the scheduler.** Both are project-scoped — schedules and history don't leak across projects.
+
+| Namespace | Contents | Written by | Read by |
+|-----------|----------|------------|---------|
+| `scheduled-spells` | `SpellSchedule` records (id, spellName, timing, nextRunAt, enabled, args) | `flo spell schedule create`, definition load | Scheduler poll loop, `flo spell schedule list` |
+| `schedule-executions` | `ScheduleExecution` audit records (startedAt, completedAt, success, error, duration, manualRun) | Scheduler at execute-start and execute-end | The Arcane Console, `flo spell schedule executions` |
+
+When debugging "did my schedule fire?", read `schedule-executions` directly via `mcp__moflo__memory_list namespace=schedule-executions` if the CLI is unavailable.
+
+---
+
+## Catch-Up Window Semantics
+
+**On daemon startup, schedules whose `nextRunAt` is in the past are evaluated against `catchUpWindowMs`.** This is the single most common source of "why didn't my run fire?" confusion.
+
+| Lag (now − nextRunAt) | Behavior | Event emitted |
+|-----------------------|----------|---------------|
+| ≤ `pollIntervalMs` | Treated as routine cron drift; fires on next poll | `schedule:due` only |
+| > `pollIntervalMs` and ≤ `catchUpWindowMs` | Fires on next poll as a caught-up run | `schedule:catchup` then `schedule:due` |
+| > `catchUpWindowMs` | Skipped; `nextRunAt` advances past the missed slot | `schedule:skipped` |
+
+This prevents a daemon that was offline for days from firing dozens of stale schedules at once.
+
+**One-time `at:` schedules past their trigger get auto-disabled rather than rescheduled.** Re-enabling returns `null` because there's no future run to compute.
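The startup decision table above can be restated as one small function. This is a sketch of the stated semantics only; the name `classifyMissedRun` and the string return values are invented for illustration.

```javascript
// Hypothetical restatement of the catch-up table, not the scheduler source.
// lagMs = now - nextRunAt (positive means the run is overdue).
function classifyMissedRun(lagMs, pollIntervalMs, catchUpWindowMs) {
  if (lagMs <= pollIntervalMs) return 'due';      // routine cron drift; fires on next poll
  if (lagMs <= catchUpWindowMs) return 'catchup'; // fires as a caught-up run
  return 'skipped';                               // too stale; nextRunAt advances instead
}
```

With the default config (60s poll, 1h window), a run missed by 10 minutes is caught up; a run missed by 2 hours is skipped.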
+
+---
+
+## Concurrency and Overlap Rules
+
+**`maxConcurrent` (default 2) caps total in-flight scheduled spells.** Same-schedule overlap is never allowed regardless of `maxConcurrent`.
+
+| Situation | Outcome |
+|-----------|---------|
+| Same schedule's prior run still in flight when next fire is due | New fire skipped (`schedule:skipped`); regular cadence continues |
+| `maxConcurrent` saturated by other schedules | Due fire waits until next poll — nothing is queued |
+| Manual run via `runScheduleNow` (dashboard "Run now") | Runs outside the poll loop; respects per-schedule overlap; does NOT advance `nextRunAt` |
+
+There is no internal queue. A fire that didn't get a slot just shows up again on the next tick if it's still due.
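The two refusal conditions above (same-schedule overlap, saturated `maxConcurrent`) compose in a fixed order. A sketch under the stated rules, with `dispatchDecision` and the `inFlight` set invented for illustration:

```javascript
// Illustrative only. Returns what the poll loop should do with a due fire.
// 'skip' = same-schedule overlap (emits schedule:skipped); 'wait' = no free
// slot, re-evaluated on the next tick since nothing is queued; 'run' = dispatch.
function dispatchDecision(scheduleId, inFlight, maxConcurrent) {
  if (inFlight.has(scheduleId)) return 'skip'; // overlap never allowed
  if (inFlight.size >= maxConcurrent) return 'wait';
  return 'run';
}
```

Note the ordering: an overlapping fire is a skip (audited) even when slots are free, while a slot-starved fire is silently retried on the next poll.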
+
+---
+
+## `mofloLevel` Composition for Scheduled Runs
+
+**Three caps compose for every scheduled cast; the most restrictive wins.** Per-schedule caps can never widen the scheduler-level cap.
+
+```
+effectiveLevel = min(
+  daemon.defaultMofloLevel,   // moflo.yaml scheduler-level cap
+  spell.mofloLevel,           // spell definition cap
+  schedule.mofloLevel         // per-schedule cap
+)
+```
+
+Where `min` follows the level lattice `read < hooks < swarm`. A spell that needs `swarm` cannot run if any cap above it is `hooks` or `read` — it fails the capability gate at execute time, not at schedule create time.
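The `min` in the pseudocode above operates on the level lattice, not on numbers. A sketch of that composition (the rank table is implied by `read < hooks < swarm`; `minLevel` and the null-handling for unset caps are invented for the example):

```javascript
// Hypothetical illustration of the cap composition, not the moflo implementation.
const LEVEL_RANK = { read: 0, hooks: 1, swarm: 2 };

function minLevel(...levels) {
  // Assumption: an unset cap does not restrict; drop it before taking the minimum.
  const present = levels.filter((l) => l != null);
  return present.reduce((a, b) => (LEVEL_RANK[a] <= LEVEL_RANK[b] ? a : b));
}

// Scheduler allows swarm, spell asks swarm, per-schedule cap is hooks:
// the schedule cap wins and the run executes at hooks level.
const effectiveLevel = minLevel('swarm', 'swarm', 'hooks');
```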
+
+---
+
+## Daemon Prerequisite and Cross-Platform Autostart
+
+**Schedules only fire while the daemon is running.** The scheduler is just code inside the daemon worker pool — no daemon, no schedules.
+
+For survival across reboots, register the OS-native autostart service:
+
+```bash
+flo daemon install    # one-time setup; idempotent
+flo daemon status     # shows registration AND running-process state
+flo daemon uninstall  # remove the autostart hook
+```
+
+| Platform | Mechanism | Path |
+|----------|-----------|------|
+| macOS | launchd `LaunchAgent` | `~/Library/LaunchAgents/com.moflo.daemon.plist` |
+| Linux | systemd `--user` unit | `~/.config/systemd/user/moflo-daemon.service` |
+| Windows | Task Scheduler `ONLOGON` | Task name `MoFloDaemon` (via `schtasks`) |
+
+`flo spell schedule create` prompts to install the autostart service when none is registered, so a freshly scheduled spell survives the next reboot without an extra step. Cancel the last enabled schedule and the service is auto-removed (so an idle daemon doesn't autostart forever).
+
+---
+
+## Scheduler Event Types
+
+**The scheduler emits typed events the daemon forwards to the dashboard event stream.** Subscribe via `scheduler.on(listener)`; the returned function unsubscribes. Listener exceptions are caught so a misbehaving subscriber can't break the poll loop.
+
+| Event | When |
+|-------|------|
+| `schedule:catchup` | A missed run (lag > one poll interval, within catch-up window) is about to fire |
+| `schedule:due` | A schedule is due (always emitted, with or without catch-up) |
+| `schedule:started` | Execution started; an execution record exists in `schedule-executions` |
+| `schedule:completed` | Execution finished with `success: true` |
+| `schedule:failed` | Execution finished with `success: false` or threw |
+| `schedule:skipped` | Execution skipped (overlap, expired catch-up, missing spell, sandbox-required mismatch) |
+| `schedule:disabled` | Schedule disabled (manual cancel or auto-disable because the spell vanished) |
+
+---
+
+## Common Failure Modes
+
+**Most "schedule isn't firing" reports trace to one of these.** Walk the list before reading scheduler source.
+
+| Symptom | Likely cause | Fix |
+|---------|--------------|-----|
+| `executions` is empty after the schedule's `nextRunAt` passed | Daemon not running | `flo daemon status` → `flo daemon start`; install autostart |
+| `executions` shows `schedule:skipped` repeatedly | Same-schedule overlap (prior run never finished) | Check the spell — likely hung; cancel, fix, recreate |
+| `executions` shows `schedule:skipped` once at startup | `nextRunAt` outside `catchUpWindowMs` | Expected after a long outage; the next normal fire will land |
+| Run fires but the spell errors `SANDBOX_REQUIRED` | Spell needs `sandbox: required` and the host doesn't supply one | Install a sandbox runtime (Docker/bwrap) or remove `sandbox.required` from the spell |
+| Schedule auto-disabled with a `schedule:disabled` event | The spell name no longer resolves in the grimoire | Restore the spell file, then re-enable the schedule |
+| Cron fires an hour off | Cron is UTC; the user-typed time was local | Convert to UTC before `--cron`; see the `/spell-schedule` skill |
+| `executions` shows `success: true` but no side effect | Spell ran but interpolation/credentials failed silently inside a step | Run the spell manually (`flo spell cast -n <name>`) and inspect step output |
+
+---
+
+## Verification Recipe (Schedule Round-Trip)
+
+**To confirm a fresh schedule works end-to-end without waiting for the cron tick:**
+
+1. Create the schedule with `--interval 1m` (or `--at` near the current time).
+2. `flo spell schedule list` → verify `nextRunAt` is in the next minute.
+3. `flo daemon status` → confirm the daemon is running.
+4. Wait one poll cycle (~60s).
+5. `flo spell schedule executions --schedule <id>` → expect one row with `success: true`.
+6. Cancel and recreate with the real cadence.
+
+If step 5 is empty, jump straight to the failure-modes table above — don't loop in step 4.
+
+---
+
+## See Also
+
+- `.claude/skills/spell-schedule/SKILL.md` — User-facing walkthrough for creating a schedule (procedural counterpart to this reference)
+- `.claude/guidance/moflo-spell-engine.md` — Definition format, step types, variable interpolation
+- `.claude/guidance/moflo-spell-runner.md` — Execution lifecycle, dry-run, layering, errors
+- `.claude/guidance/moflo-spell-sandboxing.md` — Capability levels (`read`/`hooks`/`swarm`) referenced by the `mofloLevel` cap
+- `.claude/guidance/moflo-spell-troubleshooting.md` — Broader spell failure-mode catalog beyond scheduling
+- `.claude/guidance/moflo-core-guidance.md` — CLI, hooks, daemon, MCP reference hub
@@ -144,6 +144,7 @@ A step appears to run (`exitCode: 0`), produces no output, and downstream steps
|
|
|
144
144
|
- `.claude/guidance/moflo-spell-sandboxing.md` — Capability types, enforcement layers, permission levels (the model these failures exercise)
|
|
145
145
|
- `.claude/guidance/moflo-spell-engine.md` — Step definition format and types
|
|
146
146
|
- `.claude/guidance/moflo-spell-runner.md` — Dry-run validation, error codes, pause/resume
|
|
147
|
+
- `.claude/guidance/moflo-spell-scheduling.md` — Scheduled-spell-specific failure modes (catch-up window, overlap, missing spell auto-disable, daemon-down)
|
|
147
148
|
- `.claude/guidance/moflo-yaml-reference.md` — `sandbox:` block in `moflo.yaml` (master toggle, tier selection)
|
|
148
149
|
- `src/cli/spells/core/bwrap-sandbox.ts` — Source for `--unshare-net` and namespace setup
|
|
149
150
|
- `src/cli/spells/core/permission-resolver.ts` — Capability → permission level derivation
|
package/.claude/helpers/gate.cjs
CHANGED

@@ -7,7 +7,7 @@ var cp = require('child_process');
 var PROJECT_DIR = (process.env.CLAUDE_PROJECT_DIR || process.cwd()).replace(/^\/([a-z])\//i, '$1:/');
 var STATE_FILE = path.join(PROJECT_DIR, '.claude', 'workflow-state.json');
 
-var STATE_DEFAULTS = { tasksCreated: false, taskCount: 0, memorySearched: false, memorySearchedBy: {}, memoryRequired: true, learningsStored: false, testsRun: false, simplifyRun: false, simplifySnapshotSha: null, interactionCount: 0, sessionStart: null, lastBlockedAt: null, lastNamespaceHint: '', lastNamespaceHintEmittedBy: {} };
+var STATE_DEFAULTS = { tasksCreated: false, taskCount: 0, memorySearched: false, memorySearchedBy: {}, memoryRequired: true, learningsStored: false, testsRun: false, simplifyRun: false, simplifySnapshotSha: null, interactionCount: 0, sessionStart: null, lastBlockedAt: null, lastNamespaceHint: '', lastNamespaceHintEmittedBy: {}, flMode: null, swarmInitialized: false, hiveInitialized: false };
 
 // Per-actor memory-search tracking (#838). The legacy `memorySearched` boolean
 // is session-wide, so once the parent searches memory, every spawned subagent
@@ -60,7 +60,7 @@ function writeState(s) {
 
 // Load moflo.yaml gate config (defaults: all enabled)
 function loadGateConfig() {
-  var defaults = { memory_first: true, task_create_first: true, context_tracking: true, testing_gate: true, simplify_gate: true, learnings_gate: true };
+  var defaults = { memory_first: true, task_create_first: true, context_tracking: true, testing_gate: true, simplify_gate: true, learnings_gate: true, swarm_invocation_gate: true };
   try {
     var yamlPath = path.join(PROJECT_DIR, 'moflo.yaml');
     if (fs.existsSync(yamlPath)) {
@@ -71,6 +71,7 @@ function loadGateConfig() {
       if (/testing_gate:\s*false/i.test(content)) defaults.testing_gate = false;
       if (/simplify_gate:\s*false/i.test(content)) defaults.simplify_gate = false;
       if (/learnings_gate:\s*false/i.test(content)) defaults.learnings_gate = false;
+      if (/swarm_invocation_gate:\s*false/i.test(content)) defaults.swarm_invocation_gate = false;
     }
   } catch (e) { /* use defaults */ }
   return defaults;
@@ -111,6 +112,21 @@ var NS_NAV_RES = [
   /\b(class|function|method|component|service|entity|module)\b/,
 ];
 
+// Detect whether the current prompt invoked /fl or /flo with a swarm/hive flag (#952).
+// When set, check-before-agent BLOCKS the Agent spawn until the matching MCP init
+// (mcp__moflo__swarm_init or mcp__moflo__hive-mind_init) has been recorded — the user
+// explicitly opted in to the protected coordination surface, so falling back to
+// raw Agent dispatch silently regresses headline moflo product capability.
+//
+// SYNC: duplicated verbatim in src/cli/init/helpers-generator.ts.
+function detectFlMode(promptText) {
+  var p = promptText || '';
+  if (!/^\s*\/(?:fl|flo)\b/i.test(p)) return null;
+  if (/(?:^|\s)(?:-s|--swarm)\b/.test(p)) return 'swarm';
+  if (/(?:^|\s)(?:-h|--hive)\b/.test(p)) return 'hive';
+  return null;
+}
+
 function classifyNamespaceHint(promptText) {
   var lower = (promptText || '').toLowerCase();
   if (NS_TEST_RE.test(lower)) return 'Memory namespace hint: use "tests" for test inventory and coverage lookups.';
@@ -154,6 +170,12 @@ function applyPromptStateReset(state, promptText) {
   // subsequent agents (parent + subagents that spawn their own agents) all
   // see the new classification on their first check-before-agent.
   state.lastNamespaceHintEmittedBy = {};
+  // #952 — derive flMode from the user prompt, and reset the matching init
+  // flag. Each /fl invocation must call its protected MCP init; the previous
+  // prompt's swarm/hive registration does not satisfy this prompt's gate.
+  state.flMode = detectFlMode(promptText);
+  state.swarmInitialized = false;
+  state.hiveInitialized = false;
 }
 // Match npm/yarn/pnpm/bun test, npx vitest|jest|..., bare runners at command-start only,
 // and language-native test commands. The bare-runner arm is anchored so that
@@ -305,6 +327,47 @@ switch (command) {
       writeState(s);
     }
   }
+  // #952 — when /fl was invoked with -s/-h, the protected MCP init must run
+  // BEFORE any Agent spawn. Hard block: the user explicitly opted in to
+  // moflo's coordination surface, so silently dispatching `Agent` calls
+  // without `mcp__moflo__swarm_init` / `mcp__moflo__hive-mind_init` is the
+  // failure mode this gate exists to prevent (CLAUDE.md "⛔ Protected
+  // functionality — swarm + hive-mind"). Other Agent uses remain advisory.
+  if (config.swarm_invocation_gate) {
+    if (s.flMode === 'swarm' && !s.swarmInitialized) {
+      process.stderr.write('BLOCKED: /fl was invoked with -s/--swarm but mcp__moflo__swarm_init has not been called.\n');
+      process.stderr.write('Run mcp__moflo__swarm_init first, then mcp__moflo__agent_spawn for each role, then dispatch Agent.\n');
+      process.stderr.write('See .claude/skills/fl/execution-modes.md "SWARM mode" and CLAUDE.md "⛔ Protected functionality".\n');
+      process.stderr.write('Disable via moflo.yaml: gates: swarm_invocation_gate: false\n');
+      process.exit(2);
+    }
+    if (s.flMode === 'hive' && !s.hiveInitialized) {
+      process.stderr.write('BLOCKED: /fl was invoked with -h/--hive but mcp__moflo__hive-mind_init has not been called.\n');
+      process.stderr.write('Run mcp__moflo__hive-mind_init first, then dispatch Agent or hive-mind workers.\n');
+      process.stderr.write('See .claude/skills/fl/execution-modes.md "HIVE-MIND mode" and CLAUDE.md "⛔ Protected functionality".\n');
+      process.stderr.write('Disable via moflo.yaml: gates: swarm_invocation_gate: false\n');
+      process.exit(2);
+    }
+  }
+  break;
+}
+case 'record-swarm-init': {
+  // #952 — wired to mcp__moflo__swarm_init PostToolUse. Marks the gate
+  // satisfied so subsequent Agent spawns under /fl -s pass.
+  var s = readState();
+  if (!s.swarmInitialized) {
+    s.swarmInitialized = true;
+    writeState(s);
+  }
+  break;
+}
+case 'record-hive-init': {
+  // #952 — wired to mcp__moflo__hive-mind_init PostToolUse.
+  var s = readState();
+  if (!s.hiveInitialized) {
+    s.hiveInitialized = true;
+    writeState(s);
+  }
   break;
 }
 case 'check-before-scan': {
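The block/allow decision the hunk above adds is a pure function of `(config, state)`. Restated as a standalone helper — the helper name is made up for illustration; gate.cjs inlines this logic:

```javascript
// Illustration only — not real gate.cjs code. Returns the mode whose
// protected MCP init is still missing, or null when the Agent spawn may
// proceed (gate disabled, normal mode, or init already recorded).
function missingInit(config, state) {
  if (!config.swarm_invocation_gate) return null; // opt-out via moflo.yaml
  if (state.flMode === 'swarm' && !state.swarmInitialized) return 'swarm';
  if (state.flMode === 'hive' && !state.hiveInitialized) return 'hive';
  return null;
}
```

The `record-swarm-init` / `record-hive-init` commands flip the corresponding `*Initialized` flag, which is exactly what moves this decision from "block" to "allow" for the rest of the prompt.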
@@ -508,7 +571,11 @@ switch (command) {
     break;
   }
   case 'session-reset': {
-
+    // Derive from STATE_DEFAULTS so adding a new state field requires only one
+    // edit (the defaults object) — the literal that used to live here drifted
+    // every time a field was added and is what motivated #952's audit of state
+    // shape consistency.
+    writeState(Object.assign({}, STATE_DEFAULTS, { sessionStart: new Date().toISOString() }));
     break;
   }
   default:
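The `session-reset` rewrite above leans on a `STATE_DEFAULTS` object not shown in this diff. A sketch of the pattern it adopts — the field list is an assumption; only `flMode`, `swarmInitialized`, `hiveInitialized`, and `lastNamespaceHintEmittedBy` appear elsewhere in this diff:

```javascript
// Sketch of the defaults-derived reset pattern. The real STATE_DEFAULTS in
// gate.cjs likely has more fields; these four are the ones visible in the diff.
var STATE_DEFAULTS = {
  flMode: 'normal',
  swarmInitialized: false,
  hiveInitialized: false,
  lastNamespaceHintEmittedBy: {}
};

function freshState() {
  // Object.assign copies the defaults into a new object, then the
  // per-session override wins; STATE_DEFAULTS itself is never mutated.
  return Object.assign({}, STATE_DEFAULTS, { sessionStart: new Date().toISOString() });
}
```

The payoff is the one stated in the comment: a new state field is added in exactly one place, and `session-reset` can never drift out of sync with it again.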
@@ -4,7 +4,9 @@ The execution mode chooses how work is carried out across the phases. Pass `-s/-
 
 ## SWARM mode (`-s`, `--swarm`)
 
-
+> **MANDATORY when `-s` is passed.** Your first Execute-phase action MUST be `mcp__moflo__swarm_init`, followed by `mcp__moflo__agent_spawn` for each role. Spawning subagents via `Agent` (or `Task`) without first registering the swarm is a violation of issue #952. The `Agent` PreToolUse gate will BLOCK the call until `swarm_init` runs. Even when you also use `Agent` for parallelism, the moflo swarm IS the registration surface — call it first. See CLAUDE.md "⛔ Protected functionality — swarm + hive-mind".
+
+Swarm mode coordinates agents through the moflo swarm coordinator, then spawns workers via the `Agent` tool.
 
 Roles:
 - `researcher` — analyzes the issue, searches memory, finds patterns
@@ -13,36 +15,57 @@ Roles:
 - `/flo-simplify` — moflo's adaptive code review skill (sized to diff, parallel agents on big changes)
 - `reviewer` — reviews code before PR
 
-
+Required pattern:
 ```javascript
 // 1. Create the task list first
-TaskCreate({ subject: "Research issue", ... })
-TaskCreate({ subject: "Implement changes", ... })
-TaskCreate({ subject: "Test implementation", ... })
-TaskCreate({ subject: "Run /flo-simplify on changed files", ... })
-
+TaskCreate({ subject: "📋 [Researcher] Research issue", ... })
+TaskCreate({ subject: "💻 [Coder] Implement changes", ... })
+TaskCreate({ subject: "🧪 [Tester] Test implementation", ... })
+TaskCreate({ subject: "🔍 [Reviewer] Run /flo-simplify on changed files", ... })
+
+// 2. Init the swarm — MANDATORY, gate-enforced
+mcp__moflo__swarm_init({ topology: "hierarchical", maxAgents: 8, strategy: "specialized" })
 
-//
-
+// 3. Register each agent with the coordinator — MANDATORY
+mcp__moflo__agent_spawn({ type: "researcher", ... })
+mcp__moflo__agent_spawn({ type: "coder", ... })
+mcp__moflo__agent_spawn({ type: "tester", ... })
+mcp__moflo__agent_spawn({ type: "reviewer", ... })
 
-//
-
-
+// 4. Now safe to dispatch via Agent tool for parallel execution
+Agent({ prompt: "...", subagent_type: "researcher", run_in_background: true })
+Agent({ prompt: "...", subagent_type: "coder", run_in_background: true })
 
-//
+// 5. Wait for results, synthesize, continue
 ```
 
 ## HIVE-MIND mode (`-h`, `--hive`)
 
+> **MANDATORY when `-h` is passed.** Your first Execute-phase action MUST be `mcp__moflo__hive-mind_init`. The `Agent` PreToolUse gate will BLOCK any subagent spawn until hive-mind init has run. See CLAUDE.md "⛔ Protected functionality — swarm + hive-mind".
+
 Use for consensus-based decisions:
 - Architecture choices
 - Approach tradeoffs
 - Design decisions with multiple valid options
 
+Required pattern:
+```javascript
+// 1. Init the hive — MANDATORY, gate-enforced
+mcp__moflo__hive-mind_init({ ... })
+
+// 2. Spawn workers + reach consensus via mcp__moflo__hive-mind_consensus
+mcp__moflo__hive-mind_spawn({ ... })
+mcp__moflo__hive-mind_consensus({ ... })
+```
+
 ## NORMAL mode (default)
 
 Single Claude execution without spawning sub-agents.
-- Still uses
+- Still uses TaskCreate for tracking
 - Still creates tasks for visibility
 - Post-task neural learning hooks still fire
-- No agent spawning
+- No agent spawning, no swarm/hive init required
+
+## Why these are MANDATORY
+
+Swarm and hive-mind are headline moflo product surface (CLAUDE.md "⛔ Protected functionality"). When the user explicitly opts in via `-s`/`-h`, the protected MCP surface MUST be exercised — falling back to "Claude-native parallelism" via `Agent` tool calls without coordinator registration is the failure mode that prompted issue #952. The PreToolUse gate enforces this; opt-out is `gates.swarm_invocation_gate: false` in `moflo.yaml`.
@@ -2,6 +2,47 @@
 
 Phase-by-phase notes for the full `/flo <issue>` run. Phase 2 (Ticket) lives in `./ticket.md`.
 
+## Phase 0: Record run start (Flo Runs dashboard)
+
+Before research, write a row to the `tasklist` namespace so the Arcane Console "Flo Runs" tab shows this run live and after the next session restart (#968). Skip this phase ONLY when `--epic-branch` is set — the epic orchestrator owns the parent record and the per-story spell engine writes its own row.
+
+Compute and **remember** for Phase 5:
+- `runId` — `flo-<issue-number-or-"new">-<startedAt-ms>` (sortable, unique).
+- `startedAt` — `Date.now()` snapshot (ms since epoch).
+
+Pick the matching `context.type`:
+| Mode | type | label format |
+|------|------|--------------|
+| Full / ticket on existing issue | `ticket` | `#<n> — <title>` |
+| `-r` research | `research` | `#<n> — Research` |
+| `-t` with title (no issue # yet) | `new-ticket` | `New: <title>` |
+| Epic detected | `epic` | `Epic #<n> — <title> (0/<total> stories)` |
+| `-wf <spell>` | `spell` | `<spell-name> → <args>` |
+
+Then call once:
+
+```
+mcp__moflo__memory_store
+  namespace: "tasklist"
+  key: "<runId>"
+  upsert: true
+  value: {
+    "status": "running",
+    "context": {
+      "type": "<ticket|research|new-ticket|epic|spell>",
+      "label": "<computed label>",
+      "issueNumber": <n | omit>,
+      "issueTitle": "<title | omit>",
+      "execMode": "<normal|swarm|hive>"
+    },
+    "spellName": "<same as label>",
+    "startedAt": <startedAt>,
+    "updatedAt": "<new Date().toISOString()>"
+  }
+```
+
+The schema mirrors `storeFloRunRecord` in `src/cli/services/daemon-dashboard.ts` — keep it in sync if you ever change one. The session-start launcher retains the most recent ~200 tasklist rows so this record outlives the session and renders in the Flo Runs tab on subsequent restarts.
+
 ## Phase 1: Research (also `-r`)
 
 ### 1.1 Fetch the issue + history (cheap, before any file exploration)
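The Phase 0 bookkeeping above can be sketched in plain JavaScript. The `runId` and label formats come straight from the spec and table; the helper names are hypothetical, and the epic/spell label variants are omitted for brevity:

```javascript
// Hypothetical helpers matching the Phase 0 spec above — not moflo source.
function makeRunId(issueNumber, startedAt) {
  // flo-<issue-number-or-"new">-<startedAt-ms> — sortable and unique.
  return 'flo-' + (issueNumber != null ? issueNumber : 'new') + '-' + startedAt;
}

function makeLabel(type, issueNumber, title) {
  // Only the first three rows of the context.type table; epic/spell labels
  // need extra inputs (story totals, spell args) and are left out here.
  switch (type) {
    case 'ticket':     return '#' + issueNumber + ' — ' + title;
    case 'research':   return '#' + issueNumber + ' — Research';
    case 'new-ticket': return 'New: ' + title;
    default:           return title;
  }
}
```

Because `startedAt` is embedded in the key, re-running the same issue later produces a distinct row rather than clobbering the earlier run's history.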
@@ -150,3 +191,29 @@ Closes #<issue-number>"
 gh issue edit <issue-number> --remove-label "in-progress" --add-label "ready-for-review"
 gh issue comment <issue-number> --body "PR created: <pr-url>"
 ```
+
+### 5.5 Finalize run record (Flo Runs dashboard)
+
+Update the tasklist row written in Phase 0 with the terminal status. Same `runId`, `upsert: true`. On success:
+
+```
+mcp__moflo__memory_store
+  namespace: "tasklist"
+  key: "<runId>"   # same key from Phase 0
+  upsert: true
+  value: {
+    "status": "completed",
+    "success": true,
+    "context": <same context object as Phase 0>,
+    "spellName": "<same label as Phase 0>",
+    "startedAt": <startedAt from Phase 0>,
+    "duration": <Date.now() - startedAt>,
+    "updatedAt": "<new Date().toISOString()>"
+  }
+```
+
+On failure (tests still red after retries, or any aborting error): same shape with `"status": "failed"`, `"success": false`, and an `"error": "<short summary>"` field.
+
+This finalize call MUST also fire if the run aborts *before* reaching Phase 5 (early failure during research, ticket, or implement) — otherwise the dashboard shows a permanently "running" row for a dead run.
+
+Skip this when `--epic-branch` is set — the epic orchestrator records its own outcome.
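The finalize record above differs from the Phase 0 row only in its terminal fields. Building both outcomes from the remembered Phase 0 values can be sketched as follows (field names from the spec; the helper itself is hypothetical):

```javascript
// Hypothetical helper — builds the Phase 5.5 finalize payload from the values
// remembered in Phase 0. Field names follow the spec above; not moflo source.
function finalizeRecord(phase0, succeeded, error) {
  var now = Date.now();
  var rec = {
    status: succeeded ? 'completed' : 'failed',
    success: succeeded,
    context: phase0.context,        // same context object as Phase 0
    spellName: phase0.spellName,    // same label as Phase 0
    startedAt: phase0.startedAt,
    duration: now - phase0.startedAt,
    updatedAt: new Date(now).toISOString()
  };
  if (!succeeded && error) rec.error = error; // short summary, failure only
  return rec;
}
```

Writing this under the same key with `upsert: true` is what flips the dashboard row from "running" to its terminal state instead of leaving a second row behind.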
@@ -39,9 +39,19 @@ npx flo doctor 2>&1 | grep -i daemon
 
 If the daemon is not running, prompt the user:
 - "The moflo daemon isn't running. Schedules only fire while the daemon is up. Start it now?"
-- If yes: `npx flo daemon start
+- If yes: `npx flo daemon start`.
 - If they decline, warn the user that the schedule will be created but won't fire until the daemon is started.
 
+OS-native autostart (launchd / systemd / Task Scheduler) is **automatic**: the
+first `flo spell schedule create` registers the daemon as a login service so
+schedules survive reboot, and the cancel that takes the enabled-schedule count
+to 0 unregisters it. Users only need to think about it in two cases:
+
+- `--no-autostart` on `create` — skip registration (use in containers/CI where
+  the daemon is already managed externally).
+- `--keep-autostart` on `cancel` — keep the login service registered through a
+  cancel-then-recreate dance.
+
 ### Step 2 — Identify the target spell
 
 If `$ARGUMENTS` was provided, use it as the spell name/alias. Otherwise, list spells and let the user pick:
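The register/unregister lifecycle described above reduces to a transition on the enabled-schedule count. A sketch of that decision (hypothetical helper, not the real implementation):

```javascript
// Hypothetical model of the autostart lifecycle described above — not moflo
// source. Register on the 0 → >0 transition, unregister on the → 0 transition,
// with the two documented flag opt-outs.
function autostartAction(prevEnabled, nextEnabled, opts) {
  opts = opts || {};
  if (prevEnabled === 0 && nextEnabled > 0 && !opts.noAutostart) return 'register';
  if (prevEnabled > 0 && nextEnabled === 0 && !opts.keepAutostart) return 'unregister';
  return 'none'; // mid-range transitions never touch the login service
}
```

The point of modeling it this way: a cancel-then-recreate dance without `--keep-autostart` briefly hits the `→ 0` transition and unregisters, which is exactly why that flag exists.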
@@ -107,17 +117,20 @@ Capture the schedule ID from output and surface it to the user along with the ne
 
 ### Step 5 — Verify the wiring
 
-Tail the
+Tail the actual execution history for this schedule so the user can confirm the daemon picked it up:
 
 ```bash
-npx flo spell schedule
+npx flo spell schedule executions --schedule <schedule-id> 2>&1
 ```
 
-
+`executions` reads from the daemon-written `schedule-executions` namespace and shows started time, status (success/failed/running), duration, and whether the run was manual. This is the only command that proves a schedule actually fired — `flo spell schedule list` only shows the schedule definition.
+
+If the user wants to wait for the first fire (interval ≤ 5m), poll `flo spell schedule executions --schedule <id>` or watch The Arcane Console (the daemon's localhost UI). Otherwise, summarize and exit:
 
 ```
 Scheduled: <schedule-id>
 Next run: <ISO datetime UTC> (<local-equivalent>)
+Verify: npx flo spell schedule executions --schedule <schedule-id>
 Cancel: npx flo spell schedule cancel <schedule-id>
 ```
 
@@ -139,7 +152,7 @@ If the user asks to **run now** without altering the cadence:
 
 ## Important — gotchas
 
-- **Daemon prerequisite**: schedules only fire while the daemon is running. Tell the user this explicitly.
+- **Daemon prerequisite**: schedules only fire while the daemon is running. Tell the user this explicitly. OS autostart for reboot survival is now wired automatically — see Step 1.
 - **Catch-up window** (default 1h, `scheduler.catchUpWindowMs` in `moflo.yaml`): if the daemon was offline when a run was due, runs within the window still fire on the next poll. Older missed runs are skipped with a `schedule:skipped` event.
 - **maxConcurrent** (default 2): caps the number of scheduled spells running concurrently. Same-schedule overlap is never allowed.
 - **No update CLI yet**: `flo spell schedule` exposes create/list/cancel only. To change a cadence, cancel + recreate.
package/README.md CHANGED
@@ -419,7 +419,7 @@ flo daemon status # shows whether the service is registered AND running
 
 `flo spell schedule create` warns when the daemon isn't installed so you don't quietly miss runs.
 
-**Monitoring.** The daemon
+**Monitoring.** **The Arcane Console** (the moflo daemon's localhost UI) surfaces live schedules, recent executions, and per-schedule controls (disable / re-enable / run now). It starts alongside the daemon at `http://localhost:3117` (override with `--dashboard-port` or disable with `--no-dashboard`).
 
 For full configuration (`scheduler:` block in `moflo.yaml`), event types, and the catch-up window after restarts, see [docs/SPELLS.md#scheduling](docs/SPELLS.md#scheduling).